Myth: You can’t image from inside a city; the light pollution makes it impossible.
TL;DR: Modern software and filter technology are game-changers for imaging in the city, even for broadband targets!
Long form:
Let’s take a bit of a deep dive into this. For many, many years it has been accepted as truth that you must go to an absolutely dark site to image faint objects with any kind of nebulosity. Over time various companies have developed different tools to help combat the light pollution that results from urban sprawl. Companies like IDAS in Japan have long made fantastic filters like the LPS-P2 that I’ve used for decades, literally. It’s an excellent broadband filter that helps cut emissions from the high-pressure sodium and mercury-vapor lamps that were common until the advent of LED lamps. Since that change began, background light pollution has become more and more of a problem. LED lamps emit light across the entire visible spectrum, with phosphors or filters shaping the color you actually see. Those aren’t 100% effective, though, so from our perspective LED lamps are broadband sources. This is made worse by the human preference for blue-white light over amber, because it feels more like sunlight. As shown by the Sky Quality Meter I’ve been running for a few years, the struggle is real.
Because these are broadband lights, the previous strategy of filtering out the wavelengths associated with the offending source no longer works. Interesting targets that contain reflection nebulae are broadband emitters as well, so the traditional path has been to go to dark sites to image those objects. While you will still get far better results with those kinds of objects from a dark sky site, I’d like to show that it’s not *impossible* to image them from urban areas.
First let’s look at where we start from. This is an auto-stretched stack of 52 frames of 5 minutes each, for a total integration time of 4 hours and 20 minutes. The data was collected under a 75% (waxing) moon on October 30, 2025 from my Bortle 9 backyard in Allen, Texas. The heavy green cast is a combination of what we get through the filter, minus the amber of sodium and the yellow-green of mercury. If you squint hard, read between the lines, and hold your tongue at just the right angle, you can kind of maybe see the Pleiades star cluster at the center of the field of green soup that is the light pollution around our backyard, amplified several hundred times.

Years ago I would have given up and said “It can’t be done.” Several times I did. But I’m stubborn and lazy. I don’t always want to pack up all my gear, drive four or five hours, and overnight someplace just to drive another five hours back the next day. That has put a severe limit on the number of astronomy trips I’ve taken in recent years. That’s the lazy part. The stubborn part is where I keep trying new filters, new software, and new techniques, all with the end goal of being able to image deep sky objects from the convenience of my back yard, even though light pollution is absolutely horrible where we are. In 2019 Optolong released the L-eNhance dual-band light pollution filter. This filter knocks down better than 99.9% of the common light pollution and only lets in the wavelengths associated with Hydrogen-alpha and Oxygen-III emissions, along with most of Hydrogen-beta since it sits right next door. Hydrogen-alpha (Ha) is strongly red, Oxygen-III (O3) is kind of teal, and Hydrogen-beta (Hb) is slightly more blue than O3. This was absolutely a game changer that enabled imaging from urban areas across a large portion of the sky, particularly in the fall through the spring. Objects like the Dumbbell Nebula, the Ring Nebula, the Rosette Nebula, the Eagle Nebula, even the Great Orion Nebula, are strongly emission nebulae that benefit from this kind of filter. Reflection nebulae like the Pleiades, the Trifid Nebula, and the Iris Nebula, galaxies like the Whirlpool Galaxy and the Andromeda Galaxy, and other broadband objects benefit much less from this kind of filter. The reason is that the light reflected off the dust is the same light coming from the neighboring stars, which is broadband light rather than the very specific frequencies emitted by excited gases. That makes imaging reflection nebulae significantly more challenging from urban areas.
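To put some rough numbers on why dual-band filtering works so well for emission targets, here’s a small Python sketch. The emission-line wavelengths are well established; the passband positions and widths are my own rough assumptions for a typical dual-band filter, not the L-eNhance’s published specs:

```python
# Conceptual sketch: why a dual-band filter helps with emission targets.
# The line wavelengths are well-known; the passband widths below are rough
# assumptions for a typical dual-band filter, not measured specs.

EMISSION_LINES_NM = {
    "H-alpha": 656.3,   # strongly red
    "O-III":   500.7,   # kind of teal
    "H-beta":  486.1,   # slightly more blue than O-III
}

# Hypothetical passbands: one narrow window around H-alpha, one wide enough
# to catch both O-III and H-beta "next door".
PASSBANDS_NM = [(481.0, 505.0), (651.0, 661.0)]

def transmitted(wavelength_nm: float) -> bool:
    """Return True if the filter passes this wavelength."""
    return any(lo <= wavelength_nm <= hi for lo, hi in PASSBANDS_NM)

if __name__ == "__main__":
    for name, wl in EMISSION_LINES_NM.items():
        print(f"{name} ({wl} nm): {'passed' if transmitted(wl) else 'blocked'}")

    # A broadband LED streetlight spreads energy across roughly 400-700 nm;
    # only the slivers inside the passbands get through.
    visible = range(400, 701)
    fraction = sum(transmitted(wl) for wl in visible) / len(visible)
    print(f"Fraction of the visible band passed: {fraction:.1%}")
```

Run it and the emission lines all land inside the passbands, while only a small slice of a broadband source makes it through, which is exactly the trade-off that leaves reflection nebulae out in the cold.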
These are some of the reflection nebulae and galaxies that are particularly challenging from urban areas. You’re still likely to get the best images with the least amount of work by visiting a dark sky site. But don’t let that discourage you from imaging them inside the city! These images came from a Bortle 2 (very, very dark) site near Eldorado, Texas in 2020.




Starting just a couple of years ago, there has been a series of advancements in software that have enabled me to begin imaging more frequently from the back yard, even in my Bortle 9 slice of heaven. Filters such as the L-eNhance and the newly released L-Synergy mark a huge improvement over traditional broadband RGB imaging. They enable false-color and Hubble Palette narrowband imaging from inside the city, but as mentioned previously, this leaves reflection nebulae largely out of the picture.
Enter a couple of new-ish software tools. Plugins for the best astrophotography processing software I’ve used, PixInsight, like GraXpert and Russell Croman’s RC-Tools suite, along with processing techniques I learned from reading articles and watching videos by premier astrophotographer Adam Block, available freely on YouTube (link to Adam Block’s tutorial on his YouTube channel), sent me down a rabbit hole that I have yet to come out of. I’ve used a ton of different stacking and processing software packages in the past. The list includes, but is not limited to, DeepSkyStacker, Iris, Siril, AstroPixelProcessor, and PixInsight. The free tools will certainly get the job done at the expense of your time. When I had a $0 budget they were absolutely indispensable. Do not underestimate their power! Also don’t underestimate how much time (and frustration) you will have to spend to get the result you want with these tools. They lack the polish that regularly maintained, commercially developed software has, but they’re free or very inexpensive. PixInsight is easily the most expensive of the tools but also the most capable and best supported. If you pay close attention to the press releases debuting images from observatories like Hubble, James Webb, and the newly commissioned Vera C. Rubin Observatory, you’re likely to notice that NASA itself uses PixInsight to process that data into pretty pictures for the adoring public. Also for scientists, I guess. But if I’m honest, I get inspired by the pretty pictures. Which is kind of the point of the NASA Outreach group.
These tools, in combination with processing techniques I’ve learned over the years, help separate the signal from the noise. This, in turn, enables us to subtract nearly all of the light pollution from the images we collect. Ideally we’d subtract 100% of the locally created light pollution so we can focus completely on the ancient photons that took thousands or even millions of years to travel to our telescopes. We’re starting to get tantalizingly close to that. Will we get better results if we go to dark sky sites? Absolutely, without any doubt or reservation. Is it a royal pain? Yes, also absolutely and without doubt or reservation.
So, what does this process look like? This is where these plugins, in combination with PixInsight, really make a huge difference in the amount of time I have to spend processing an image. While I very much enjoy the hobby, I also very much enjoy instant gratification once the data has been collected. For many years I’ve automated my data collection, first with Sequence Generator Pro and more recently with N.I.N.A. These pieces of software have let me sleep while the telescope collects all the photons for me, which I very much appreciate. This even allows me to start an imaging run, watch some TV (hockey games are the usual suspects), and get some sleep while the software handles the drudgery of maintaining focus, changing filters as needed, tracking the sky, changing targets, and closing up at the end of the night. The only things I have to do are make sure it isn’t going to rain and remember to turn off the irrigation system if I’m imaging that night.
Once the data is collected I’ll go through a normal stacking process, which leaves me with the super green image we started with.

The very first step for me is to run the image through GraXpert to remove the super gross gradient and neutralize the background. This gives us the very first look at the quality of the data we’ve had the telescope collect for us while we slept, because automation is awesome. I usually like to take a quick look at what was actually subtracted from the image, just to get an idea of how much data I’ve potentially lost. In this case there’s a little bit of the blue nebulosity in the middle that I’ve lost, but it also subtracted a large amount of ambient noise and light pollution. The light pollution is kind of splotchy because the telescope is tracking the sky, so over the course of the night the local glow effectively rotates around the edges of the field of view. It also flips from one side of the sensor to the other when the telescope performs a meridian flip as the target crosses its highest point in the sky.

That leaves us with a noisy, off-color image, but also our first glimpse of the data we’ve so carefully collected. This is where I hold my breath and leap.

So, not bad. We know it’s not correctly colored yet, but it’s obvious that there’s a good amount of signal present behind all the noise and light pollution. Honestly, this is far better than I could have possibly hoped for just a few years ago, given that I’m imaging a broadband target from a heavily light polluted sky near a full moon. So far the only things I’ve done are stack the data (light frames, dark frames, and flat frames) using the Weighted Batch Preprocessing script in PixInsight and subtract the gradient using the free GraXpert plugin for PixInsight. The plugin doesn’t always make the best guesses, but it is very fast and very easy to use. If nothing else it is a good approximation and will show you where you need to concentrate your efforts if you want to manually remove the gradients using Dynamic Background Extraction (DBE) in PixInsight. That goes against my lazy ethos, so I’ve only ever done it once, on a particularly challenging image.
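If you’re curious what’s happening under the hood, gradient removal tools like GraXpert and DBE are, at their core, doing a version of the same thing: sample the sky background in spots free of stars and nebulosity, fit a smooth surface to those samples, and subtract it from the frame. Here’s a minimal numpy sketch of that idea, assuming a simple second-order polynomial model and hand-picked sample points; the real tools use far smarter sample placement and background models:

```python
import numpy as np

def subtract_gradient(image: np.ndarray, samples: list[tuple[int, int]]) -> np.ndarray:
    """Fit a 2nd-order polynomial surface to background sample points and
    subtract it. `image` is a single 2D channel; `samples` are (row, col)
    positions chosen away from stars and nebulosity."""
    rows = np.array([r for r, _ in samples], dtype=float)
    cols = np.array([c for _, c in samples], dtype=float)
    values = np.array([image[r, c] for r, c in samples], dtype=float)

    # Design matrix for a 2D quadratic: 1, x, y, x^2, xy, y^2
    def design(x, y):
        return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

    coeffs, *_ = np.linalg.lstsq(design(cols, rows), values, rcond=None)

    # Evaluate the fitted background over the whole frame and subtract it.
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    background = design(xx.ravel().astype(float), yy.ravel().astype(float)) @ coeffs
    return image - background.reshape(image.shape)
```

A quadratic surface is already enough to soak up a simple skyglow gradient; the splotchy, rotating gradients I described earlier are exactly why the real tools need more flexible models than this.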
Now that we’ve got a good idea that things are going to go our way, let’s start some processing in earnest. I’ve mentioned Russell Croman’s tools, but what do BlurXTerminator and NoiseXTerminator actually do? NoiseXTerminator reads the noise in the image, does some fancy math using your GPU to accelerate the effort, and smooths everything out. Kind of like the soft glamor filters that portrait photographers use to soften everything up and hide small lines. It also takes that smoothed-out image and sharpens the remaining detail back up. It’s kind of cheating, but it’s a fair approximation if all you want to do is make pretty pictures. If you’re doing actual scientific work you’d want to avoid such things. This particular image is a bit more than 4 hours’ worth of total data collection. If you want a smoother image the key is to collect more data, but there is a point of diminishing returns. You’ll find the balance for your system with some experimentation, but for me, in the city at least, much more than 12-18 hours’ worth of data is where I start to see less improvement for my efforts.
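There’s no simple formula behind NoiseXTerminator itself (it’s AI-based), but the classical idea it supersedes looks roughly like this sketch: blur a copy of the image, then blend the blurred version back in more strongly where there is little detail. Every parameter value here is an arbitrary placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classical_denoise(image: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """Very rough stand-in for luminance noise reduction: blend a blurred
    copy back into the original, more heavily in flat sky than on edges."""
    smoothed = gaussian_filter(image, sigma=2.0)
    # Edge map: where local gradients are strong, keep the original detail.
    gy, gx = np.gradient(image)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-12           # normalize to 0..1
    blend = strength * (1.0 - edges)       # smooth the flat sky, protect stars
    return blend * smoothed + (1.0 - blend) * image
```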
BlurXTerminator is a companion plugin that attempts to mathematically correct for distortions in the imaging train, usually caused by the camera being too close to or too far away from the corrector, or not being perfectly square with the lenses in the telescope. It does this by assuming that all stars should be circular, then comparing that assumption against the stars in the image. It builds a map of the distortion and then corrects the image for it. It’s really black magic; I’m certain animal sacrifice was involved somewhere. The end result of these two GPU-accelerated operations looks like this:


Quite a difference, eh?
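For the curious, the classical analogue of what BlurXTerminator just did is deconvolution: estimate the point spread function from the stars, then mathematically undo the blur. Here’s a minimal sketch using scikit-image’s Richardson-Lucy routine, assuming a made-up Gaussian PSF and an image normalized to 0..1; the real plugin is AI-based and builds its correction from the actual stars in the frame rather than a single toy PSF:

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size: int = 15, sigma: float = 2.0) -> np.ndarray:
    """Toy point spread function: a normalized 2D Gaussian."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def sharpen(image: np.ndarray) -> np.ndarray:
    """Classical deconvolution stand-in for what BlurXTerminator does far better.
    `image` is assumed to be a float array scaled to 0..1."""
    return richardson_lucy(image, gaussian_psf(), 20)
```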
Next up I’ll want to calibrate the image using the SpectrophotometricColorCalibration (SPCC) process. Whew, what a mouthful that is! Let’s break it down. A spectrum is what we get when we break the light from a thing into its constituent colors, like shining light through a prism. Each star is unique and has been carefully cataloged. We know where every star in the image sits on the celestial map because we’ve had the computer do some clever math and match everything against a catalog of literally millions of known stars. Many of these stars have a cataloged color signature that we can use to correct the image as a whole. So we ask the computer to do the needful for us: look up those stars, build a map, and correct the white balance of the image. Because we’re lazy and it’s better at math. Once we’ve done that we want to make everything just POP, because to us most stars look white. And that’s just boring. We need to pump up the volume by amplifying the signal as a whole and the color specifically. Kind of like cranking up the contrast and color settings on your TV to Maximum Effort. After running SPCC we’ve got the right color proportions to be scientifically accurate, but that’s not a pretty picture. Auto-stretching the image isn’t usually ideal at this point. I find it better to sneak up on what looks pleasing to my eye, because it’s my art and my eye that is beholding it. After a few rounds of HistogramTransformation I’ll arrive at something approaching what will be the final image. Getting to this point with the old software, going through DeepSkyStacker or AstroPixelProcessor, into GIMP, through some curse words, back into GIMP, etc. etc. etc., could literally take days’ worth of effort for a single image. PixInsight, using primarily the WeightedBatchPreProcessing script, BlurXTerminator, NoiseXTerminator, and GraXpert, has cut that time down to maybe two hours maximum, often under an hour. This absolutely falls into my Instant Gratification category. Without a doubt it enables my Lazy Button.
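If you’re wondering what sneaking up on a stretch looks like mathematically, PixInsight’s HistogramTransformation is built around a midtones transfer function. Here’s a sketch of one round of that kind of stretch, assuming data normalized to 0..1; the parameter values are just placeholders you would tweak by eye:

```python
import numpy as np

def midtones_transfer(x: np.ndarray, m: float = 0.25) -> np.ndarray:
    """Classic midtones transfer function: maps 0 -> 0, m -> 0.5, 1 -> 1.
    `x` is image data normalized to 0..1; smaller `m` stretches faint
    signal harder."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch(image: np.ndarray, black_point: float = 0.0, m: float = 0.25) -> np.ndarray:
    """One round of a simple screen stretch: clip the black point, rescale,
    then lift the midtones. Repeat with gentler settings to sneak up on it."""
    rescaled = np.clip((image - black_point) / (1.0 - black_point), 0.0, 1.0)
    return midtones_transfer(rescaled, m)
```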

A little more tweaking and I’m getting fairly happy with the results given the tiny amount of effort I’ve put in. I’m starting to see hints of the dusty Integrated Flux Nebula (IFN). This dusty nebulosity isn’t illuminated by nearby stars but by the combined light of all the stars in the galaxy. It’s the shadowy figure lurking in the background of the horror movie you’ve decided not to watch. It is exceedingly faint and often eludes capture. It was only discovered within the last 30 years and documented in 2004, and not by Hubble or any other space telescope, but by observers on the ground using wide-field survey telescopes.
So, where are we? Close to the end of the Instant Gratification Lazy Train! A quick edit, export the image, and this is what I call a good first pass. From here I’ll collect more data to try to bring out some more of that dusty IFN. If I get some, GREAT! If I don’t, well, now I know there are no shortcuts to imaging reflection nebulae in the city. Is it worth it? Absolutely. Can you get better images from a dark sky site? Undoubtedly. If nothing else, this kind of exercise helps me build a prioritized list of targets that I can use when I do go to dark sky sites. Imaging from the city is also a great way to practice using your gear, so you spend less time troubleshooting and more time collecting ancient photons after you’ve driven four or five hours (or more!) to get to your overnight spot.
Without further ado, a very rough first pass at Messier 45 from a Bortle 9 backyard in North Texas on a night with a 70% moon. What fun!

