Sunday 15 October 2017

Optimising offset for the ASI 174MM-Cool

Used last night's cloud cover to do some testing with my cooled CMOS camera, the ZWO ASI 174MM-Cool.
The gain of CMOS cameras is variable, and to avoid clipping dark pixels, you have to add an offset or pedestal. Basically, each pixel value that is read out is transformed by an amplifier and analog-to-digital converter (ADC) according to the formula

ADU = (pv + Offset)/Gain

The odd use of dividing by gain instead of multiplying is because gain is defined in e/ADU. One would also expect the offset to be in ADU, giving the formula

ADU = pv/Gain + Offset

But I found that at high gain settings, any change in offset is much more critical than at low gain settings, so I think the first formula is the correct one.
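To make the difference between the two formulas concrete, here's a small Python sketch. The gain values are just illustrative round numbers, not measured ASI 174 values:

```python
# Two candidate models for how the camera applies the offset.
# 'gain' is the real gain in e/ADU: high gain *settings* mean a small e/ADU value.

def adu_model1(pv, offset, gain):
    # offset added before the ADC, i.e. in electrons
    return (pv + offset) / gain

def adu_model2(pv, offset, gain):
    # offset added after the ADC, i.e. in ADU
    return pv / gain + offset

for gain in (8.0, 1.0, 0.1):   # made-up values spanning the gain range
    print(gain, adu_model1(0, 10, gain), adu_model2(0, 10, gain))
```

In the first model, an offset of 10 shifts the output by 1.25 ADU at 8 e/ADU but by 100 ADU at 0.1 e/ADU; in the second model the shift is 10 ADU regardless. Only the first model explains why the offset is so much more critical at high gain settings.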
First off, you have to distinguish between the gain setting (for the ASI 174, the gain setting can be varied from 0 to 400) and the real gain (about 8 e/ADU at setting 0, 1 e/ADU at setting 189, and decreasing e/ADU the higher the setting goes).
I used this simple method to determine the best offset for various gain settings.
I took one bias frame at the shortest possible exposure time (32 microseconds) for each gain/offset combination. I varied the gain setting from 0 to 400 in steps of 100, plus unity gain (setting 189). I then varied the offset and used the PixInsight image statistics to determine when the lowest pixel value (in ADU) started to increase. For some reason the lowest pixel value was always at least 1, so I increased the offset until I got a minimum pixel value of 2-3 ADU or more.
I did the tests at -15 degrees Centigrade, because that is the temperature I use for imaging at the moment. Offsets shouldn't be that temperature dependent anyway.

The highest offset that my driver (INDI) allows is 240, which wasn't high enough for a gain setting of 400. The highest gain setting that can be used with the offset at 240 is about 365.
This exercise shows that in order to achieve the highest dynamic range in the ASI 174MM, you can optimise the offset for each gain setting to avoid pixel clipping. Using a higher offset than needed to avoid clipping will decrease the dynamic range of the camera.
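For those who want to automate the search instead of eyeballing PixInsight statistics, here's a rough sketch of the idea in Python with numpy. Since I can't ship bias frames in a blog post, the 'bias frame' is simulated with made-up Gaussian read noise, and the offset is treated as if it were in ADU, which, as discussed above, may not be exactly what the camera does:

```python
import numpy as np

def min_adu(offset, read_noise=3.0, rng=None):
    """Simulate one bias frame of the 1936 x 1216 sensor: Gaussian read
    noise around the offset, digitised and clipped by a 12 bit ADC.
    Returns the darkest pixel value in ADU."""
    rng = rng or np.random.default_rng(0)
    frame = rng.normal(offset, read_noise, size=(1216, 1936))
    frame = np.clip(np.round(frame), 0, 4095)
    return frame.min()

# Walk the offset upwards until the darkest pixel clears 2 ADU
rng = np.random.default_rng(0)
for offset in range(0, 64, 4):
    if min_adu(offset, rng=rng) > 2:
        print('lowest safe offset is around', offset)
        break
```

With a real camera you would replace the simulated frame with a freshly captured bias frame at each offset; the stopping rule is the same.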

Friday 18 August 2017

That other reason to stack images - increase in bit depth

Stacking astrophotography images increases the signal-to-noise ratio (SNR) by decreasing the noise: the SNR grows as the square root of the number of subexposures that make up the stack. But stacking has a second advantage, which is of particular use to astrophotographers with a CMOS camera. These cameras most often output 12 bits of information per channel, as opposed to CCD cameras, which commonly output 16 bits per channel. This means that CMOS cameras represent the entire intensity scale from 0 (lowest) to 1 (highest) in 4096 intensity levels, whereas CCD cameras represent the same scale in 65536 levels. It turns out that stacking, if done correctly, can increase the bit depth of an image.

Since a continuous intensity scale is represented in discrete value levels, some form of rounding process is needed. In a CMOS camera, fewer levels are used than in a CCD camera, and this can become visible when we stretch an image. In this article I will show how stacking can increase the number of value levels.

For the sake of argument I will use a hypothetical camera which only uses 3 bits of information, or 8 intensity levels (ranging from 0 to 1). While such a camera doesn't seem very practical, we will see that even stacking a relatively low number of images will dramatically increase the bit depth of the stacked image.

The evidence

So let's start with the scene we are going to image. It's quite a dull one, consisting of a linear gradient that runs from left (darkest) to right (brightest). Our camera has a sensor of 600 x 400 pixels and can record only 8 levels of intensity, but the scene, of course, has a continuous intensity range.

Figure 1: the scene
If we image this scene with our camera, we will get this.

Figure 2: 3 bit image
With this output from our camera, we can stack as many images as we want, but the result will always be the same: the camera always records the scene with the same distribution of 8 intensity levels.

The histogram of this image consists of 8 spikes, representing the 8 intensity levels. Since the zones are all equally wide (except the darkest and brightest), the height of these spikes will be the same.

(The spikes for the darkest and lightest zones are at the very edge of the histogram window, and are difficult to see in this image.)
Figure 3: histogram
But what happens when the camera is noisy, and each image ends up like this?

Figure 4: added noise
This image is the result of adding noise to the noiseless 'staircase' image. The 'width' of this noise is exactly half of the intensity step between the zones. Suddenly, it seems as if the zones have disappeared. So, is this really a 3 bit image? Here's what the histogram looks like.

Figure 5: histogram
But wait, shouldn't the histogram of a noisy image show a wider distribution? In this case, no, since we still have only 8 intensity levels. The spatial distribution of the intensity levels is different because of the added noise, but the image is still represented by only 8 levels, or 3 bits of information.

When we get this far, we can apply the first advantage of stacking: stacking a number of images decreases the noise in the final image. And, miraculously, this will not bring the original 8 zones back, but a continuous gradient.

Let's stack 32 of these noisy images. We will calculate the average of all 32 values at each pixel position. There will be no pixel rejection. But the final image will be shown as an 8 bit jpeg image. (By the way, all the other images are also 8 bit jpegs, but since the data only used 8 discrete levels, the other 248 levels went unused. That's why the histogram was so 'spikey'.)

Figure 6: stack of 32 images
Of course, we could save this image in 3 bit, but that would just give us our 'staircase' image back, a little noisier. In 8 bit, however, we have a nice even gradient. To show how even it is, here's the histogram.

Figure 7: histogram of the stacked image
We can see the large number of levels in this image (225 if my calculations are correct), a dramatic increase in bit depth. Ideally, the histogram should be flat, but remember that the noise is additive and gets clipped at the low and high ends of the scale: if the noise added to the gradient gave a negative value, it was clipped to 0 in the camera, and the same happened at the high end. So these areas show a larger number of pixels and a higher peak. The slight unevenness elsewhere would disappear if we used more frames in the stacking process. So how does this work?
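The whole experiment is easy to reproduce in Python with numpy. A few assumptions on my part: the noise is Gaussian, its 'width' is taken as the standard deviation (half an intensity step), and the camera rounds to the nearest level:

```python
import numpy as np

rng = np.random.default_rng(1)
levels = 8                                          # our 3 bit camera
scene = np.tile(np.linspace(0, 1, 600), (400, 1))   # the gradient scene
step = 1 / (levels - 1)                             # intensity step between zones

def shoot(noise_sigma):
    """One frame: add Gaussian noise, then quantise to 8 levels (k/7)."""
    noisy = scene + rng.normal(0, noise_sigma, scene.shape)
    return np.clip(np.round(noisy * (levels - 1)), 0, levels - 1) / (levels - 1)

# Average 32 noisy frames, then count the levels left after rounding to 8 bit
stack = np.mean([shoot(step / 2) for _ in range(32)], axis=0)

print('levels in one noiseless frame:', len(np.unique(shoot(0))))
print('levels in the 8 bit stack    :', len(np.unique(np.round(stack * 255))))
```

The noiseless frame has exactly 8 levels; the averaged stack ends up with a couple of hundred.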

The theory

In a noise-free world, our 3 bit camera would faithfully register the grey scale of our scene, but only where the intensity of the scene exactly matches a level that the camera can record. At any other position, the intensity is either too low or too high, and the camera electronics round the value to the nearest digital unit (0 ... 7). That's what makes the staircase: a scene value that falls between two digital units will always be rounded up, or always rounded down, in the same manner. However, if we add noise, a pixel value between two digital units will sometimes end up being rounded up, and sometimes rounded down. Depending on the value and on how much noise we add, it will end up more often in the lower or more often in the higher digital unit. If we only have two frames to stack, an intensity value halfway between two units can be represented by the lower unit in one frame, and by the higher unit in the other. The average value will then be the average of the two units. If we had a one bit camera, which can only register 0 (black) or 1 (white), an intensity value of 0.5 with some added noise can end up as 0 in one frame and 1 in the other, each with a 50% chance. The average will then be 0.5.

If we have three frames in our stack, the pixel values can consist of two 0s and one 1, or one 0 and two 1s. We can register the intermediate values 1/3 and 2/3, as well as the values 0 and 1. We now have 4 intensity levels which we can resolve. If we continue adding frames to our stack, each added frame adds one more intermediate level of intensity in our stacked image. If two frames add one level, three frames add two levels, and four frames add three levels (1/4, 1/2, 3/4), then 32 frames add 31 levels between each pair of existing levels. The new image should have 32 × 7 + 1 = 225 levels. This of course applies only if our stacked image can hold all those levels of intensity: we need 8 bits to represent each intensity level.
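The one bit thought experiment can be checked in a couple of lines (the noise width of 0.25 is an arbitrary choice on my part):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.5                  # scene intensity halfway between black and white

# One bit camera: each noisy frame is rounded to 0 (black) or 1 (white)
frames = np.round(true_value + rng.normal(0, 0.25, 10000)).clip(0, 1)
estimate = frames.mean()          # average of a 10000 frame stack

print('recovered intensity:', estimate)
```

The average lands very close to 0.5, an intensity the one bit camera cannot represent in any single frame.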

In most cases, the average and median of a data set will follow each other closely, if we have enough data. So let's look at what happens when we use median stacking rather than average stacking. We can actually predict what will happen: in a stack of any odd number of frames, each pixel value of the stacked image will be a pixel value from one of the images that went into the stack. This predicts that any pixel value in the stacked image will be one of the eight original levels. There should be no improvement in bit depth if we use median stacking. Well, here's the result of median stacking, using the same images that went into the average stack.
Figure 8: median stacking
Apart from noisy transitions, we still see eight intensity levels. If we examine the transitions closely, there are some pixels where we actually recorded a value midway between two levels, but this is due to the way the median is calculated for an even number of samples. If we add or remove one frame, these intermediate levels will disappear. Here's the histogram of our even-numbered stack.

Figure 9: histogram of median stacking
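The median prediction can also be checked numerically. A sketch with an odd number of frames, under the same assumptions as before (Gaussian noise, round-to-nearest):

```python
import numpy as np

rng = np.random.default_rng(2)
scene = np.tile(np.linspace(0, 1, 600), (400, 1))
step = 1 / 7

def shoot():
    """One noisy frame from the 3 bit camera."""
    noisy = scene + rng.normal(0, step / 2, scene.shape)
    return np.clip(np.round(noisy * 7), 0, 7) / 7

# Median of an odd stack: every output pixel is one of the input values
median_stack = np.median([shoot() for _ in range(31)], axis=0)

print('levels after median stacking:', len(np.unique(median_stack)))
```

With 31 frames the stacked image still contains exactly the original 8 levels, just as predicted.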
Finally, let's see what happens when the noise in the recorded image does not quite obliterate the staircase appearance in each image. We halve the noise in each image, which now looks like this.

Figure 10: half the noise in each image
There's still some of the staircase left, and the stacked (averaged) image looks like this.

Figure 11: average stacking with half the noise
While this certainly is an improvement, there are still annoying bands left in the final image. It seems that for a good result, we actually need enough noise to really obliterate the bands in the original images: the width of the noise should be at least half a digital unit. Anything less will not give us the smooth image we want.
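To put a rough number on this, the sketch below average-stacks a one-dimensional gradient with two different noise widths and measures the RMS deviation from the true gradient (the zones near the ends are excluded, since clipping distorts them). The halved noise should come out several times worse:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 401)   # a one-dimensional slice of the gradient
step = 1 / 7                 # intensity step of the 3 bit camera

def stack_error(noise_sigma, n_frames=1000):
    """Average-stack quantised noisy copies of x and return the RMS
    deviation from the true gradient, in intensity units."""
    acc = np.zeros_like(x)
    for _ in range(n_frames):
        noisy = x + rng.normal(0, noise_sigma, x.shape)
        acc += np.clip(np.round(noisy * 7), 0, 7) / 7
    err = acc / n_frames - x
    return np.sqrt(np.mean(err[100:-100] ** 2))   # skip the clipped ends

print('noise width step/2:', stack_error(step / 2))
print('noise width step/4:', stack_error(step / 4))
```

With the full noise width the residual error is dominated by the remaining random noise; with half the width, a systematic banding error several times larger is left behind.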


Let's summarise. It is possible to increase the bit depth of an image by stacking noisy images. The noise must be sufficient to obliterate the staircase effect in the images that make up the stack. You have to use average stacking for this to work, and you have to be able to save the result in a format that keeps the higher bit depth. Stacking 8 bit images and saving the result in an 8 bit format will not do you any good. In the same way, stacking 16 bit images and saving the result in 16 bit format will not achieve anything. But even a moderate number of images in 12 bit format can give you a 16 bit result.

This method works, because noise will destroy the banding that occurs in low bit depth images, and stacking will reduce the noise in the final image.

The proof of this is in the proverbial pudding. Here's an image with a little more than a gradient, with added noise and 3 bit depth.

Figure 12: a noisy image in 3 bit
And this is what a stack of 32 of those reveals. Impressive, isn't it? (If you're not impressed, look closely at the lower left corner. Still not impressed?)
Figure 13: 32 images stacked

Sunday 4 June 2017

Handling FITS files in batches - FITSFileManager in PixInsight

Lately I have processed a number of images taken with the Liverpool Telescope on La Palma. The images in their archive are FITS files with a large header containing all the information you'd ever want about the image. When downloaded, the images need to be extracted (from tar or tgz archives) and sorted. When I started out with this, I would download the images from each filter separately in order to keep the colour channels apart, but this gets tiresome. Lately I have started to use a script in PixInsight that does the work for me. Here's how I do it.
I download all the images at once and put them in one folder on my computer. Then I extract the images from the compressed tar files. Next I start the FITSFileManager script in PixInsight (Scripts -> Utilities) and rename the image files, using the FILTER keyword as a prefix to the file name. The file name template is: '&FILTER1;_&filename;'. I move the files to the same directory where they were stored, so that the new file name replaces the old one. This way, my files are easily sorted and identified.
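If you'd rather do the renaming outside PixInsight, the same idea can be scripted. Normally you would use astropy for this, but since FITS headers are just 80-character cards in 2880-byte blocks, a minimal, dependency-free Python sketch is enough for well-behaved files. The FILTER1 keyword and the .fits extension are assumptions based on my Liverpool Telescope files:

```python
import glob
import os

def fits_keyword(path, key):
    """Read a keyword value from a simple FITS primary header.
    FITS headers consist of 80-character cards in 2880-byte blocks."""
    with open(path, 'rb') as f:
        while True:
            block = f.read(2880)
            if not block:
                return None
            for i in range(0, len(block), 80):
                card = block[i:i + 80].decode('ascii', 'replace')
                if card.startswith(key.ljust(8) + '='):
                    value = card[10:].split('/')[0].strip()
                    return value.strip("'").strip()
                if card.startswith('END'):
                    return None

def rename_by_filter(folder):
    """Prefix each .fits file with its FILTER1 value, mimicking the
    FITSFileManager template '&FILTER1;_&filename;'."""
    for path in glob.glob(os.path.join(folder, '*.fits')):
        filt = fits_keyword(path, 'FILTER1')
        if filt:
            head, tail = os.path.split(path)
            os.rename(path, os.path.join(head, filt + '_' + tail))
```

Call rename_by_filter() on the download folder after extracting the tar files, and the colour channels sort themselves.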

Contour mask in PixInsight - an alternative way

If you want to reduce stars in PixInsight, you need a contour star mask. The StarMask process has this option, and for many images this works just fine. But occasionally I find it difficult to make this mask work for both large and medium sized stars. Here's an alternative to the standard method.
Use StarMask to create a standard star mask. Choose the number of layers large enough to include even large stars. To get this to work, you may need to decrease the noise threshold a bit. On most of my images, I start with 6 or 7 for the number of layers, and about 0.35 for the noise threshold. But it really depends entirely on your image and what size stars you have. Keep the structure parameters at their default values (Large = 2, Small = 1, Compensation = 2).
When you're satisfied with the mask, create another one, but decrease the number of layers by one, and set the structure parameters to 0.
Now, use PixelMath to subtract the second mask from the first. If you want to keep the original masks, create a new image; otherwise use the expression '$T - star_mask2' and apply it to the first mask (the one with the larger stars). The star mask should now show donuts for the stars. If the donuts don't open up (i.e. are still grey or white in the centre), you need to increase the intensity of the second mask using the HistogramTransformation tool. Occasionally, I have had to use the CloneStamp tool for a tricky star or two: just take a sample from the black background and clone it into the white star that is reluctant to open up. Make sure you put it in the right spot, otherwise you may end up with a lopsided star.
When you are satisfied with the contour shapes, you can increase the intensity of the mask with the HistogramTransformation tool, and blur the mask with the Convolution tool.
The advantage of this method is that it creates contour masks for large and small stars alike, while the standard method sometimes fails to create contours for smaller stars.

Monday 27 February 2017

star repair in PixInsight - part 2

While reprocessing old data, I came across a very instructive instance of star overexposure. When imaging under light-polluted skies, sky glow is added to any sky signal (stars, nebulae, etc). If the pollution is strong enough to overexpose stars, the true star colour data is destroyed. When the sky glow is removed during post-processing, this leads to bright stars having the wrong colour.
For example, say a bright star is slightly blue in colour. Without any sky glow, it would register as RGB 0.8, 0.8, 0.95. Sky glow adds 0.3 in blue. This will put the star colour on the sensor at RGB 0.8, 0.8, 1.0, since the maximum value for a pixel is 1.0.
When the sky glow is removed during background extraction, the value 0.3 will be subtracted, and we end up with RGB values of 0.8, 0.8, 0.7 for the star. The star core has suddenly turned yellow. However, away from the core, the star will still be blue, since those areas weren't overexposed. Hence the need to correct star colours.
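The arithmetic is trivial, but here it is as a few lines of Python, for the record:

```python
def clip(v):
    """The sensor can't record above full scale: cap at 1.0."""
    return min(v, 1.0)

star = (0.8, 0.8, 0.95)   # slightly blue star, as it would be without sky glow
glow = (0.0, 0.0, 0.3)    # light pollution, adding mainly blue

recorded = tuple(clip(s + g) for s, g in zip(star, glow))
print(recorded)    # (0.8, 0.8, 1.0) -- the blue channel has clipped

extracted = tuple(round(r - g, 2) for r, g in zip(recorded, glow))
print(extracted)   # (0.8, 0.8, 0.7) -- the core has turned yellow
```

The clipped 0.25 of blue signal is simply gone, and no amount of background extraction can bring it back; it has to be repaired from the unclipped halo.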
During normal stretching of an image, most of the bright stars will be saturated and have a final colour of RGB 1.0, 1.0, 1.0. However, if masked stretch is used, the stars will not be saturated, and will have a funny looking core.
Here's an example of an unstretched star, with (left) and without (right) colour repair.

When should star repair be implemented in the workflow?
Of course, the colour must be corrected before any stretch is applied. However, there is one earlier point in the workflow where star colour matters, and that is colour calibration.
Colour calibration tries to set a white point by looking at all the stars in an image, and applies a calibration scheme determined by the average star colour. When the star colour is wrong due to overexposure and background extraction, the white balance after colour calibration will be off. Therefore, there is an argument for applying HSV repair prior to colour calibration. The workflow then becomes as follows:

  • cropping of edges
  • background extraction (ABE or DBE)
  • background neutralisation
  • HSV repair
  • colour calibration
This image shows the effect of doing HSV repair before (left) or after (right) colour calibration. Since the sky glow added mainly blue to the image, several star cores had a warmer colour after DBE. This resulted in the colour calibration routine making the blue stars an even more intense blue. By doing the HSV repair prior to colour calibration, stars got a more natural blue colour after stretching.

Raspberry Pi DSLR trigger

Here's a small, simple project: a remote trigger for a DSLR.
I control my telescope mount through INDI, but unfortunately I can't control my old DSLR that way. The only way I can do "automated" exposures is by connecting an intervalometer to the camera. The problem with intervalometers, however, is that they run on small batteries and stop working when it's cold.
Another problem is that it's impossible to use dithering with an intervalometer and simultaneous guiding.
Here's a partial solution to these problems. I wrote a simple Python script that runs on the Raspberry Pi. The script sends exposure signals through the RPi's GPIO pins to the remote port of my camera.
To connect the Raspberry Pi to the camera, I made a small optocoupler circuit, that will isolate the camera electronics from the RPi.
The input (left) connects to a GPIO pin (pin 18, GPIO 24) and ground (pin 20), while the output (right) connects to the remote port of the camera.
Here's the code.


import RPi.GPIO as GPIO
import time
import sys

NrExposures = 1
ExposureTime = 30
TimeBetweenExposures = 6
print ' '
print 'Make sure that the camera remote port is connected to pins 18 (signal) and 20 (ground).'
print ' '
if len(sys.argv) == 1 :
   print 'No arguments provided. Will use single 30 sec exposure.'
elif len(sys.argv) == 2 :
   ExposureTime = int(sys.argv[1], 10)
   print 'Single', ExposureTime, 'seconds exposure.'
elif len(sys.argv) == 3 :
   ExposureTime = int(sys.argv[1], 10)
   NrExposures = int(sys.argv[2], 10)
   print NrExposures, 'x', ExposureTime, 'seconds exposures.'
else :
   ExposureTime = int(sys.argv[1], 10)
   NrExposures = int(sys.argv[2], 10)
   TimeBetweenExposures = int(sys.argv[3], 10)
   print NrExposures, 'x', ExposureTime, 'seconds exposures, with', TimeBetweenExposures, 'seconds delay.'

TriggerPin = 24  # Broadcom pin 24 (P1 pin 18)

GPIO.setmode(GPIO.BCM)  # Broadcom pin-numbering scheme
GPIO.setup(TriggerPin, GPIO.OUT)  # trigger pin as output

GPIO.output (TriggerPin, GPIO.LOW)

counter = NrExposures
print ' '
print 'Start : %s' % time.ctime()
while (counter > 0):
   print '  Exposure nr', NrExposures - counter + 1, 'started'
   GPIO.output(TriggerPin, GPIO.HIGH)   # open the shutter
   time.sleep(ExposureTime)             # keep the shutter open
   GPIO.output(TriggerPin, GPIO.LOW)    # close the shutter
   print '  Exposure nr', NrExposures - counter + 1, 'ended'
   counter = counter - 1
   time.sleep(TimeBetweenExposures)     # wait before the next exposure
   print ' '

print 'End :  %s' % time.ctime()
GPIO.output(TriggerPin, GPIO.LOW)
GPIO.cleanup()
print ' '
print 'Goodbye.'

The script is saved as Trigger.py and is made executable:

chmod +x Trigger.py

The script takes up to three command line arguments. The first argument is the single frame exposure time in seconds. The second argument is the number of exposures to take, and the third argument is the wait time between exposures, also in seconds.
For example

./Trigger.py 1 2 3

will take 2 exposures of 1 second each, with a 3 second wait after each exposure. The script will take 8 seconds to run.

If only two arguments are given, the wait time defaults to 6 seconds.
If only one argument is given, it is interpreted as the exposure time, and only one exposure will be taken.
If no arguments are given, the script will do a single 30 second exposure.

Eventually, I may try to rewrite the INDI CCD driver to control my camera, as this would give me the possibility to dither between exposures. But for now this simple script does the job.

Saturday 14 January 2017

star repair in PixInsight

When stars get overexposed during data collection, their core may take on a different colour from the normally exposed outer halo. When the stacked image is stretched using MaskedStretch in PixInsight, stars are stretched less than dimmer parts of the image. This means that stars in the stretched image are not saturated and can display an odd colour. PixInsight has a script that can correct the cores of partially saturated stars. It is called "Repaired HSV Separation" and can be found under Scripts -> Utilities. The script should be applied just before the first stretch. It will decompose the colour image into H, Sv, and V components and repair the colour values of the stars.
The different components are then to be combined using the ChannelCombination process (which is under ColorSpaces).
The upper part of the dialog box determines which images are to be created, while the lower part controls the repair of the H, Sv, and V channels. For best results, mainly the Repair level parameter needs to be adjusted.
Alejandro Tombolini (who brought this script to my attention) recommends using the unrepaired V channel when combining the channels, but experiment to find out whether the repaired or unrepaired V channel works best. Be sure to select the HSV colour space.
Here's how it affected one of my most recent images, a before and after shot of the Pleiades (M45). Look at the cores of the brightest stars. In the original image, the cores are pink, while in the repaired image they are blue, the same colour as the outer haloes.