Wims place, mainly astronomy and electronics related

<h2>Optimising offset for the ASI 174MM-Cool</h2>
Used last night's cloud cover to do some testing with my cooled CMOS camera, the ZWO ASI 174MM-Cool.<br />
The gain of CMOS cameras is variable, and to avoid clipping dark pixels, you have to add an offset or pedestal. Basically this means that each pixel value pv (in electrons) that is read out is transformed by an amplifier and analog-to-digital converter (ADC) according to the formula<br />
<br />
<div style="text-align: center;">
ADU = (pv + Offset)/Gain</div>
<br />
The odd use of dividing by gain instead of multiplying is because gain is defined as e/ADU. One would also expect the offset to be in ADU, giving the formula<br />
<br />
<div style="text-align: center;">
ADU = pv/Gain + Offset</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
But I found that at high gain settings, any change in offset is much more critical than at low gain settings, so I think the first formula is the proper one.</div>
<div style="text-align: left;">
First off, you have to distinguish between the gain setting (for the ASI 174, the gain setting can be varied from 0 to 400) and the real gain (about 8 e/ADU at setting 0, 1 e/ADU at setting 189, and fewer e/ADU the higher the setting goes).</div>
<div style="text-align: left;">
I used this simple method to determine the best offset for various gain settings.</div>
<div style="text-align: left;">
I took one bias frame at the shortest possible exposure time (32 microseconds) for each gain/offset combination. I varied the gain setting from 0 to 400 in steps of 100 units, plus unity gain (setting 189). I then varied the offset and used PixInsight's image statistics to determine when the lowest pixel value (in ADU) started to increase. For some reason the lowest pixel value was never below 1, so I increased the offset until the minimum pixel value rose above 2 - 3 ADU.</div>
<div style="text-align: left;">
I did the tests at -15 degrees Celsius, because that is the temperature I use for imaging at the moment. Offsets shouldn't be that temperature dependent anyway.</div>
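<br />
If you'd rather script this check than read the statistics off by hand, here's a minimal Python sketch of the same test (not what I actually used - I read the values off in PixInsight). It assumes the bias frames are saved as FITS files; the naming pattern is hypothetical.<br />
<pre>
# Minimal sketch: report the minimum ADU of each bias frame.
# Assumes frames are named like bias_g100_o30.fits (hypothetical).
import glob
import numpy as np
from astropy.io import fits

for path in sorted(glob.glob('bias_g*_o*.fits')):
    data = fits.getdata(path)
    # the offset is high enough once the minimum clears 2 - 3 ADU
    print(path, 'min ADU:', int(np.min(data)))
</pre>
<br />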
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiE9cXbwxLBACsiEqbmMhPggjJrYuBWqiOnTIQ1KqETdjJ8qFctOLADfCG775V0b1EFaeFBpAs1QFPIb3loU6-Q174SbWD-OYgSuDoNf8pXSjWidDIPJR7_wOPtPc3isZ6LA41kD2EJoBQz/s1600/asi174_gain_offset.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="340" data-original-width="605" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiE9cXbwxLBACsiEqbmMhPggjJrYuBWqiOnTIQ1KqETdjJ8qFctOLADfCG775V0b1EFaeFBpAs1QFPIb3loU6-Q174SbWD-OYgSuDoNf8pXSjWidDIPJR7_wOPtPc3isZ6LA41kD2EJoBQz/s640/asi174_gain_offset.png" width="640" /></a></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The highest offset that my driver (INDI) allows is 240, which wasn't high enough for a gain setting of 400. The highest gain setting that can be used with the offset at 240 is about 365.</div>
<div style="text-align: left;">
This exercise shows that to achieve the highest dynamic range with the ASI 174MM, you can optimise the offset for each gain setting: use just enough offset to avoid pixel clipping. Using a higher offset than needed will decrease the dynamic range of the camera.</div>
<h2>That other reason to stack images - increase in bit depth</h2>
<br />
Stacking astrophotography images will increase the signal-to-noise ratio (SNR) by decreasing the noise. The SNR increases as the square root of the number of subexposures that make up the stack. But stacking has a second advantage, which is especially useful for astrophotographers who use a CMOS camera. These cameras most often have an image output with 12 bits of information per channel, as opposed to CCD cameras, which commonly output 16 bits per channel. This means that CMOS cameras represent the entire intensity scale from 0 (lowest) to 1 (highest) in 4096 intensity levels. CCD cameras represent the same intensity scale in 65536 levels. It turns out that stacking, if done correctly, can increase the bit depth of an image.<br />
<br />
Since a continuous intensity scale is represented in discrete value levels, some form of rounding process is needed. In a CMOS camera, fewer levels are used than in a CCD camera, and this can become visible when we stretch an image. In this article I will show how stacking can increase the number of value levels.<br />
<br />
For the sake of argument I will use a hypothetical camera which only uses 3 bits of information, or 8 intensity levels (ranging from 0 to 1). While such a camera doesn't seem very practical, we will see that even stacking a relatively low number of images will dramatically increase the bit depth of the stacked image.<br />
<br />
<h3>
The evidence</h3>
So let's start with the scene we are going to image. It's quite a dull one, consisting of a linear gradient that runs from left (darkest) to right (brightest). Our camera has a sensor of 600 x 400 pixels and can record only 8 levels of intensity, but the scene, of course, has a continuous intensity range.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7nESlg00zqDY_izyGuObivYwf904-GbQT6xTkTlcFcML6uZeRr_YOD-pYeoFXDYnTUv1_LA5mU0LtS1vqecMT4Xm0WMADs98caLz12NT_QXoyofCrP1W_mbdWEjbHBnUiTfJx9ObYpzYH/s1600/scene.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7nESlg00zqDY_izyGuObivYwf904-GbQT6xTkTlcFcML6uZeRr_YOD-pYeoFXDYnTUv1_LA5mU0LtS1vqecMT4Xm0WMADs98caLz12NT_QXoyofCrP1W_mbdWEjbHBnUiTfJx9ObYpzYH/s400/scene.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: the scene</td></tr>
</tbody></table>
If we image this scene with our camera, we will get this.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlY7HZhyphenhyphenhYQJ8x6yX_iZwQWA7e9fDZv-urV_VxcIIa_NpfTy5vzcZK0Hk-HVCkfn1qmdd6BGu7ZFpNN2kEUqhPKxSbIcxToZtU2nQ9GZcy_azx6zdhmeU86b1j-LmntOiRGY44zotOifM1/s1600/gradient_nonoise_3bit.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlY7HZhyphenhyphenhYQJ8x6yX_iZwQWA7e9fDZv-urV_VxcIIa_NpfTy5vzcZK0Hk-HVCkfn1qmdd6BGu7ZFpNN2kEUqhPKxSbIcxToZtU2nQ9GZcy_azx6zdhmeU86b1j-LmntOiRGY44zotOifM1/s400/gradient_nonoise_3bit.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2: 3 bit image</td></tr>
</tbody></table>
With this output from our camera, we can stack as many images as we want, but the result will always be the same. The camera always records the scene in the same distribution of 8 intensity levels.<br />
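(If you want to reproduce this experiment, the simulated frames are easy to generate. Here's a minimal numpy sketch; the 600 x 400 size and 8 levels match the example, the rest is just one possible implementation.)<br />
<pre>
# Sketch: simulate one frame from the hypothetical 3 bit camera.
# The scene is a horizontal gradient running from 0 (left) to 1 (right).
import numpy as np

h, w, levels = 400, 600, 8               # 3 bits -> 8 levels
scene = np.tile(np.linspace(0.0, 1.0, w), (h, 1))

def take_frame(scene, noise_sigma, rng):
    noisy = scene + rng.normal(0.0, noise_sigma, scene.shape)
    noisy = np.clip(noisy, 0.0, 1.0)     # the camera clips at 0 and 1
    # quantise to the nearest of the 8 representable levels
    return np.round(noisy * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
noiseless = take_frame(scene, 0.0, rng)   # the 'staircase' image
noisy = take_frame(scene, 0.07, rng)      # 0.07 is about half the 1/7 step
</pre>
<br />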
<br />
The histogram of this image consists of 8 spikes, representing the 8 intensity levels. Since the zones are all equally wide (except the darkest and brightest), the height of these spikes will be the same.<br />
<br />
(The spikes for the darkest and lightest zones are at the very edge of the histogram window, and are difficult to see in this image.)<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLkbr65sGfJ0AhdpE8p567h6gff8-k7yWGo7nv-1mZh3vVmjuhJ9ndS8ATfu8DovSUNttAxdjma1cfX7BUWJT-Mh0OSrzluHMv9aOTqYwBR7CsTwzvwKeoDOwGEp9gCffenelr15GjXDSM/s1600/PItest_noisy_3bit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="222" data-original-width="643" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLkbr65sGfJ0AhdpE8p567h6gff8-k7yWGo7nv-1mZh3vVmjuhJ9ndS8ATfu8DovSUNttAxdjma1cfX7BUWJT-Mh0OSrzluHMv9aOTqYwBR7CsTwzvwKeoDOwGEp9gCffenelr15GjXDSM/s400/PItest_noisy_3bit.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: histogram</td></tr>
</tbody></table>
But what happens when the camera is noisy, and each image ends up like this?<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigSFWlnvhp36vAzOelW5wFckE0KHh9MNoS4RZJEVo7F8SWksfD2h34WFAYdCxHENqC9wSocCad03LOPy2-7L-BFXVx3wwUs8DIKfMTHaTPURYeDmi3k74TH5WHa8yRuT-lFTBdTAqATFA5/s1600/gradient_noise007_3bit.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigSFWlnvhp36vAzOelW5wFckE0KHh9MNoS4RZJEVo7F8SWksfD2h34WFAYdCxHENqC9wSocCad03LOPy2-7L-BFXVx3wwUs8DIKfMTHaTPURYeDmi3k74TH5WHa8yRuT-lFTBdTAqATFA5/s400/gradient_noise007_3bit.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4: added noise</td></tr>
</tbody></table>
This image is the result of adding noise to the noiseless 'staircase' image. The 'width' of this noise is exactly half of the intensity step between adjacent zones. Suddenly, it seems as if the zones have disappeared. So, is this really a 3 bit image? Here's what the histogram looks like.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuqo2NcVdwPFakKTUUvaJaQ7Y_ucZC5HNS1UmLFXrjis5qqDUMCADIqKsb9ydKWJeTyDacCz1Dcm_Zoaf8PvIjI4clsHS0nyQIUKrsffgAF7xt_Ii4dyUuqUB5Ha9KeNGo0KYB2aemDBNk/s1600/PItest_singleframe_noise007_3bit.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="217" data-original-width="643" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuqo2NcVdwPFakKTUUvaJaQ7Y_ucZC5HNS1UmLFXrjis5qqDUMCADIqKsb9ydKWJeTyDacCz1Dcm_Zoaf8PvIjI4clsHS0nyQIUKrsffgAF7xt_Ii4dyUuqUB5Ha9KeNGo0KYB2aemDBNk/s400/PItest_singleframe_noise007_3bit.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5: histogram</td></tr>
</tbody></table>
But wait, shouldn't the histogram of a noisy image show a wider distribution? In this case, no, since we still have only 8 intensity levels. The spatial distribution of the intensity levels is different because of the added noise, but the image is still only represented by 8 levels, or 3 bits of information.<br />
<br />
When we get this far, we can apply the first advantage of stacking. Stacking a number of images will decrease the noise in the final image. And, miraculously, this will not bring the original 8 zones back, but a continuous gradient.<br />
<br />
Let's stack 32 of these noisy images. We will calculate the average of all 32 values at each pixel position. There will be no pixel rejection. The final image will be shown as an 8 bit jpeg. (By the way, all the other images here are also 8 bit jpegs. But since the data only used 8 discrete levels, the other 248 levels went unused. That's why the histogram was so 'spiky'.)<br />
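(In the numpy sketch from earlier, this averaging step takes a couple of lines; the frame count and noise width match the example.)<br />
<pre>
# Sketch: average-stack 32 noisy 3 bit frames (continues the earlier sketch).
frames = np.stack([take_frame(scene, 0.07, rng) for _ in range(32)])
average = frames.mean(axis=0)        # float result: far more than 8 levels
print(len(np.unique(average)))       # roughly 32 * 7 + 1 = 225 distinct values
</pre>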
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiahuTB98w_pUHL2Q4sA9tqBFwkqtevzIi8LjHCV4HjqzXoJ3sk_qugRd2dGmNYsiuJtPx_mgvpy9bhI3a2VDsKRMIXNwKfRgFBd-lLAlhVQoeyUYpy4OpoWJX1Dm-L6Uw21SeqlOImBWW/s1600/average_noise007.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiahuTB98w_pUHL2Q4sA9tqBFwkqtevzIi8LjHCV4HjqzXoJ3sk_qugRd2dGmNYsiuJtPx_mgvpy9bhI3a2VDsKRMIXNwKfRgFBd-lLAlhVQoeyUYpy4OpoWJX1Dm-L6Uw21SeqlOImBWW/s400/average_noise007.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6: stack of 32 images</td></tr>
</tbody></table>
Of course, we could save this image in 3 bit, but that would just give us our 'staircase' image back, a little noisier. In 8 bit, however, we have a nice even gradient. To show how even it is, here's the histogram.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOpH74iIX8yBxcqzYuj8nvN-1M8tgbhmsw0jJnSutfy3rfnEFp8xKAIrVFzjSd691-VqibVsFsP1CGfFSgFwek8o_WWurR1IVRYpGc-Zf7ZGfD8o0JZC8_s-EpUaIApm1-_F7eKb7OZTBY/s1600/PItest_average_noise007.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="215" data-original-width="643" height="132" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOpH74iIX8yBxcqzYuj8nvN-1M8tgbhmsw0jJnSutfy3rfnEFp8xKAIrVFzjSd691-VqibVsFsP1CGfFSgFwek8o_WWurR1IVRYpGc-Zf7ZGfD8o0JZC8_s-EpUaIApm1-_F7eKb7OZTBY/s400/PItest_average_noise007.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 7: histogram of the stacked image</td></tr>
</tbody></table>
We can see the large number of levels in this image (225 if my calculations are correct), a dramatic increase in bit depth. Ideally, the histogram should be flat, but remember that the noise is added to the signal and then clipped by the camera. If the noise added to the gradient gave a negative value, this was clipped to 0 in the camera, and the same happened at the high end. So the darkest and brightest areas collect a larger number of pixels and show a higher peak. The slight unevenness in between would disappear if we'd used more frames in the stacking process. So how does this work?<br />
<br />
<h3>
The theory</h3>
In a noise free world, our 3 bit camera would faithfully register the gray scale of our scene, but only where the intensity of the scene exactly matches a level that the camera can record. At any other position, the intensity is either too low or too high, and the camera electronics will round the value to the nearest digital unit (0 ... 7). That's what makes the staircase: a scene value that falls between two digital units will end up rounded up or down, always in the same manner. However, if we add noise, a pixel value between two digital units will sometimes be rounded up and sometimes rounded down. Depending on the value and on how much noise we add, it will end up more often in the lower or more often in the higher digital unit. If we only have two frames to stack, an intensity value halfway between two units can be represented by the lower unit in one frame and by the higher unit in the other. The average value will then be the average of the two units. If we had a one bit camera, which can only register 0 (black) or 1 (white), an intensity value of 0.5 with some added noise can end up 0 in one frame and 1 in the other, each with a 50% chance. The average will then be 0.5.<br />
<br />
If we have three frames in our stack, the pixel values can consist of two 0s and one 1, or one 0 and two 1s. We can now register the intermediate values 1/3 and 2/3, as well as the values 0 and 1: four intensity levels which we can resolve. Each frame added to the stack adds one more intermediate level between existing levels: two frames add one level, three frames add two levels, four frames add three levels (1/4, 1/2, 3/4), and 32 frames add 31 levels between each pair of adjacent levels. The new image should have 32 × 7 + 1 = 225 levels. This, of course, applies only if our stacked image can hold all those levels of intensity: we need at least 8 bits to represent each intensity level.<br />
<br />
In most cases, the average and median of a data set will follow each other closely, if we have enough data. So let's look at what happens when we use median stacking rather than average stacking. We can actually predict what will happen. In a stack of any odd number of frames, each pixel value of the stacked image is a pixel value taken from one of the images that went into the stack. This predicts that any pixel value in the stacked image will be one of the eight original levels, so there should be no improvement in bit depth if we use median stacking. Well, here's the result of median stacking. The same images that went into the average stack were used.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ6HII2t7V2_oSxXEZALOwYgipDbPvigt70BzniVaWuuiFcDE1moUUsEf5foR7DFXRQkZdHu-6r-VAq1o4wc538XGVBQDxmHnadR6dtz1ao7c5IH6WJ89y4r2gE6jhuhAwT_ztFf1MPwC5/s1600/median_noise007.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZ6HII2t7V2_oSxXEZALOwYgipDbPvigt70BzniVaWuuiFcDE1moUUsEf5foR7DFXRQkZdHu-6r-VAq1o4wc538XGVBQDxmHnadR6dtz1ao7c5IH6WJ89y4r2gE6jhuhAwT_ztFf1MPwC5/s400/median_noise007.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 8: median stacking</td></tr>
</tbody></table>
Apart from noisy transitions, we still see eight intensity levels. If we examine the transitions closely, there are some pixels where we actually have recorded a value midway between two levels, but this is due to the way the median is calculated for an even number of samples. If we add or remove one frame, the intermediate levels will disappear. Here's the histogram of our even numbered stack.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgry9OlGPWP5qNtueMinvFneXLZ6xHXbayczQkmK_-vYu3dtLSgqVDAskHdOyav3eJ2OccqMm4fkik7KBBdXzZxWhrkozENvjeWh-Uyti3Xg1u-K_-fNWoSyC-kPHZH5xy7Wlje_p5Abh5n/s1600/PItest_median_noise007.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="218" data-original-width="643" height="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgry9OlGPWP5qNtueMinvFneXLZ6xHXbayczQkmK_-vYu3dtLSgqVDAskHdOyav3eJ2OccqMm4fkik7KBBdXzZxWhrkozENvjeWh-Uyti3Xg1u-K_-fNWoSyC-kPHZH5xy7Wlje_p5Abh5n/s400/PItest_median_noise007.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 9: histogram of median stacking</td></tr>
</tbody></table>
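(In the numpy sketch, the median stack is a one-line change from the average:)<br />
<pre>
# Sketch: median-stack the same frames. With an even number of frames,
# numpy averages the two middle values, which explains the occasional
# half-level at the transitions.
median = np.median(frames, axis=0)
print(len(np.unique(median)))   # only the original levels plus a few half-levels
</pre>
<br />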
Finally, let's see what happens when the noise in the recorded image does not quite obliterate the staircase appearance. We halve the noise in each image, which now looks like this.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRCa1Lw5cQZ6v4IpLhRThThMfT8BBMTuPKX3dq2d-MLLdhMohjxWwyNvsPRDkPvQwDUMn_SLycVUQ_VEflycyzw60vOwDCcPQR6XTyvCPsU8CPmkOZ3fxOpvzrl8LAWTRe-wpVXK6IOEbZ/s1600/gradient_noise0035_3bit.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRCa1Lw5cQZ6v4IpLhRThThMfT8BBMTuPKX3dq2d-MLLdhMohjxWwyNvsPRDkPvQwDUMn_SLycVUQ_VEflycyzw60vOwDCcPQR6XTyvCPsU8CPmkOZ3fxOpvzrl8LAWTRe-wpVXK6IOEbZ/s400/gradient_noise0035_3bit.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 10: half the noise in each image</td></tr>
</tbody></table>
There's still some of the staircase left, and the stacked (averaged) image looks like this.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8btvVJZ0gEc8WyEpktMRb3QDWTjhpU8-OakRG9nTftzpI5VqyWF8sOEuZxSmm0govwzuOCB3jY7VJhAsYV3ffztgyg1UU6S_Uq3VLez8RY9dan9pxBEcqHRPpHmExK3zNPfh85DOlsfRX/s1600/average_noise0035.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8btvVJZ0gEc8WyEpktMRb3QDWTjhpU8-OakRG9nTftzpI5VqyWF8sOEuZxSmm0govwzuOCB3jY7VJhAsYV3ffztgyg1UU6S_Uq3VLez8RY9dan9pxBEcqHRPpHmExK3zNPfh85DOlsfRX/s400/average_noise0035.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 11: average stacking with half the noise</td></tr>
</tbody></table>
While this certainly is an improvement, there are still annoying bands left in the final image. It seems that for a good result, we actually need enough noise to really obliterate the bands in the original image. The width of the noise should be at least half a digital unit. Anything less will not give us the smooth image we want.<br />
<br />
<h3>
Summary</h3>
Let's summarise. It is possible to increase the bit depth of an image by stacking noisy images. The noise must be sufficient to obliterate the staircase effect in the images that make up the stack. You have to use average stacking for this to work, and you have to be able to save the result in a format that keeps the higher bit depth. Stacking 8 bit images and saving the result in an 8 bit format will not do you any good. In the same way, stacking 16 bit images and saving the result in 16 bit format will not achieve anything. But even a moderate number of images in 12 bit format can give you a 16 bit result.<br />
<br />
This method works, because noise will destroy the banding that occurs in low bit depth images, and stacking will reduce the noise in the final image.<br />
<br />
The proof of this is in the proverbial pudding. Here's an image with a little more than a gradient, with added noise and 3 bit depth.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKMQrOeSzrzYMl1SwSMxYpx5vmPoLwQR4lh4ea3kPLZNjVayGY7D8nwLwTGqa7jBcJc_dXFN_AuA9zHwEmuY8cAri1lkX7baTgRrTMpwXEL-s8-RDIwcCdSs3ccAvQaPp36p_fxOu8yYKA/s1600/Gradient_circle_text_noise007.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKMQrOeSzrzYMl1SwSMxYpx5vmPoLwQR4lh4ea3kPLZNjVayGY7D8nwLwTGqa7jBcJc_dXFN_AuA9zHwEmuY8cAri1lkX7baTgRrTMpwXEL-s8-RDIwcCdSs3ccAvQaPp36p_fxOu8yYKA/s640/Gradient_circle_text_noise007.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 12: a noisy image in 3 bit</td></tr>
</tbody></table>
And this is what a stack of 32 of those reveals. Impressive, isn't it? (If you're not impressed, look closely at the lower left corner. Still not impressed?)<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjddKQtq7A0O8Ym4TscigKCPGoAuarztKpvncxsleG-F5z5qFlJCWnt2Kkw4t7t_8CzspV3n0uI2nSBOTxcGYqgd-0z4rDNl3eT4aEZzyiXUh92YgNhKPDimZqeBpDJ7xZJ1l43nieI9WQe/s1600/average_circle_text_noise007.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="600" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjddKQtq7A0O8Ym4TscigKCPGoAuarztKpvncxsleG-F5z5qFlJCWnt2Kkw4t7t_8CzspV3n0uI2nSBOTxcGYqgd-0z4rDNl3eT4aEZzyiXUh92YgNhKPDimZqeBpDJ7xZJ1l43nieI9WQe/s640/average_circle_text_noise007.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 13: 32 images stacked</td></tr>
</tbody></table>
<br />
<h2>Handling FITS files in batches - FITSFileManager in PixInsight</h2>
Lately I have processed a number of images taken with the <a href="http://telescope.livjm.ac.uk/" target="_blank">Liverpool Telescope</a> on La Palma. The images in their archives are FITS files with a large header that contains all the information you'd ever want about the image. When downloaded, the images need to be extracted (from tar or tgz format) and sorted. When I started out with this, I would download the images from each filter separately, in order to keep the colour channels apart, but this gets tiresome. Lately I have started to use a script in PixInsight that does the work for me. Here's how I do it.<br />
I download all the images at once and put them in one folder on my computer. Then I extract the images from the compressed tar files. Next I start the FITSFileManager script in PixInsight (Scripts -> Utilities). I rename the image files, using the FILTER keyword as a prefix to the filename. The file name template is: '<span style="font-family: Verdana, sans-serif; font-size: x-small;">&FILTER1;_&filename;</span>'. I move the files to the same directory where they were stored, so that the new file name replaces the old one. This way, my files are easily sorted and identified.<br />
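The same renaming can also be scripted outside PixInsight. Here's a minimal Python sketch with astropy; the FILTER1 keyword matches the template above, but the file extension and in-place renaming are my assumptions:<br />
<pre>
# Sketch: prefix each FITS file with its FILTER1 header keyword,
# roughly what the template '&FILTER1;_&filename;' does.
import glob
import os
from astropy.io import fits

for path in sorted(glob.glob('*.fits')):     # adjust the pattern as needed
    filt = fits.getheader(path).get('FILTER1', 'unknown')
    folder, name = os.path.split(path)
    os.rename(path, os.path.join(folder, filt + '_' + name))
</pre>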
<h2>Contour mask in PixInsight - an alternative way</h2>
If you want to reduce stars in PixInsight, you need a contour star mask. The StarMask process has this option, and for many images this works just fine. But occasionally I find it difficult to make this mask work for both large and medium sized stars. Here's an alternative to the standard method.<br />
Use StarMask to create a standard star mask. Choose the number of layers large enough to include even the large stars. To get this to work, you may need to decrease the noise threshold a bit. On most of my images, I start with 6 or 7 for the number of layers and about 0.35 for the noise threshold, but really it depends entirely on your image and what size stars you have. Keep the structure parameters at their default values (Large = 2, Small = 1, Compensation = 2).<br />
When you're satisfied with the mask, create another one, but decrease the number of layers by one, and set the structure parameters to 0.<br />
Now, use PixelMath to subtract the second mask from the first. If you want to keep the original masks, create a new image; otherwise use the expression '$T - star_mask2' and apply it to the first mask (the one with the larger stars). The star mask should now show donuts for stars. If the donuts don't open up (i.e. are still gray or white in the centre), you need to increase the intensity of the second mask using the HistogramTransformation tool. Occasionally, I have had to use the CloneStamp tool for a tricky star or two: just take a sample from the black background and clone it into the white star that is reluctant to open up. Make sure you put it in the right spot, otherwise you may end up with a lopsided star.<br />
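(In plain array terms, the subtraction amounts to this; a hypothetical numpy equivalent, not PixInsight code:)<br />
<pre>
# Sketch: the donut construction as plain array maths.
# mask1 was built with one more wavelet layer than mask2, so its star
# structures are larger; the difference leaves a ring per star.
import numpy as np

def contour_mask(mask1, mask2):
    # mask1, mask2: float arrays in [0, 1]
    return np.clip(mask1 - mask2, 0.0, 1.0)
</pre>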
When you are satisfied with the contour shapes, you can increase the intensity of the mask with the HistogramTransformation tool, and blur the mask with the Convolution tool.<br />
The advantage of this method is that it creates contour masks for large and small stars alike, while the standard method sometimes fails to create contours for smaller stars.<br />
<h2>Star repair in PixInsight - part 2</h2>
While reprocessing old data, I came across a very instructive instance of star overexposure. When imaging under light polluted skies, sky glow is added to any sky signal (stars, nebulae, etc.). If the pollution is strong enough to overexpose stars, the true star colour data is destroyed. When the sky glow is removed during post processing, this will lead to bright stars having the wrong colour.<br />
For example, say a bright star is slightly blue in colour. Without any sky glow, it would register as RGB 0.8, 0.8, 0.95. Sky glow adds 0.3 in blue. This will put the star colour on the sensor at RGB 0.8, 0.8, 1.0 (0.95 + 0.3, clipped), since the maximum value for a pixel is 1.0.<br />
When the sky glow is removed during background extraction, the value 0.3 will be subtracted, and we end up with RGB values of 0.8, 0.8, 0.7 for the star. The star core has suddenly turned yellow. However, away from the core, the star will still be blue, since those areas weren't overexposed. Hence the need to correct star colours.<br />
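The arithmetic is easy to check; here's a small sketch with the numbers from this example:<br />
<pre>
# Sketch: clipping plus background subtraction shifts the star colour.
import numpy as np

star = np.array([0.8, 0.8, 0.95])      # slightly blue star, no sky glow
skyglow = np.array([0.0, 0.0, 0.3])    # light pollution adds blue
recorded = np.clip(star + skyglow, 0.0, 1.0)  # -> [0.8, 0.8, 1.0]
after_dbe = recorded - skyglow                # -> [0.8, 0.8, 0.7], yellowish
print(recorded, after_dbe)
</pre>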
During normal stretching of an image, most of the bright stars will be maximised and end up with a final colour of RGB 1.0, 1.0, 1.0. However, if MaskedStretch is used, the stars are not saturated, and can have a funny looking core.<br />
Here's an example of an unstretched star, with (left) and without (right) colour repair.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb1u7Wo_WpNAUsEh4yTVQpQ0P1RiUfw_TDnU67K1mhx1B2QpJ59YGLxlUkmkR0vp-YwCGDQzgKcxJgPnwjG5HnTVzQT6yAq6v7-p_IDH0G-iM5WbvGLprBhiGsvXsnpkwfAq9h5is1hh7S/s1600/HSV_Repair_s.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb1u7Wo_WpNAUsEh4yTVQpQ0P1RiUfw_TDnU67K1mhx1B2QpJ59YGLxlUkmkR0vp-YwCGDQzgKcxJgPnwjG5HnTVzQT6yAq6v7-p_IDH0G-iM5WbvGLprBhiGsvXsnpkwfAq9h5is1hh7S/s640/HSV_Repair_s.jpg" width="640" /></a></div>
<br />
When should star repair be implemented in the workflow?<br />
Of course, the colour must be corrected before any stretch is applied. However, there is one earlier instance where star colour matters in processing, and that is colour calibration.<br />
Colour calibration tries to set a white point by looking at all the stars in an image, and applies a calibration scheme that is determined by the average star colour. When the star colours are wrong due to overexposure and background extraction, the white balance after colour calibration will be off. Therefore, there is an argument for applying HSV repair prior to colour calibration. The workflow then becomes as follows:<br />
<br />
<ul>
<li>cropping of edges</li>
<li>background extraction (ABE or DBE)</li>
<li>background neutralisation</li>
<li>HSV repair</li>
<li>colour calibration</li>
</ul>
This image shows the effect of doing HSV repair before (left) or after (right) colour calibration. Because the skyglow added mainly blue to the image, several star cores had a warmer colour after DBE. This resulted in the colour calibration routine making the blue stars a more intense blue. By doing the HSV repair prior to colour calibration, stars get a more natural blue colour after stretching.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2Fzo6AOzzeHxkQONr6JG5z5v7wV7f4Ab7M-2ZoaUSN4hWI3sMZZphT_kYvaE9ZhM3r4yfvzJ9ZkKmES2aWr_MpoZp82viJJMRkRg1f3Xs22Ok3NhYGQw0BSPqZVZZQlonwW7eAqIH7IOp/s1600/CC_afterL_beforeR_HSVRepair.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="278" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2Fzo6AOzzeHxkQONr6JG5z5v7wV7f4Ab7M-2ZoaUSN4hWI3sMZZphT_kYvaE9ZhM3r4yfvzJ9ZkKmES2aWr_MpoZp82viJJMRkRg1f3Xs22Ok3NhYGQw0BSPqZVZZQlonwW7eAqIH7IOp/s640/CC_afterL_beforeR_HSVRepair.jpg" width="640" /></a></div>
<br />
<h2>Raspberry Pi DSLR trigger</h2>
Here's a small, simple project, a remote trigger for a DSLR.<br />
I control my telescope mount through INDI, but unfortunately I can't control my old DSLR that way. The only way I can do "automated" exposures is to connect an intervalometer to the camera. The problem with intervalometers, however, is that they run on small batteries and stop working when it's cold.<br />
Another problem is that it's impossible to use dithering with an intervalometer while guiding at the same time.<br />
Here's a partial solution to these problems. I wrote a simple Python script that runs on the Raspberry Pi. The script sends exposure signals through the RPi's GPIO header to the remote port of my camera.<br />
To connect the Raspberry Pi to the camera, I made a small optocoupler circuit that isolates the camera electronics from the RPi.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCul7ikw_syD01i4b0My2rApbmh54CrzXai-gE4YcpKciv4jvnyYV_4bOW8M0tQKk4arh1fjyf6EkV6WxtQW_LYQ7laEuPldQ1bMx52nPJDEA-GsGKOe5UlelpscTFBOg_DuogP1aax_4v/s1600/PentaxTrigger_schem.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCul7ikw_syD01i4b0My2rApbmh54CrzXai-gE4YcpKciv4jvnyYV_4bOW8M0tQKk4arh1fjyf6EkV6WxtQW_LYQ7laEuPldQ1bMx52nPJDEA-GsGKOe5UlelpscTFBOg_DuogP1aax_4v/s320/PentaxTrigger_schem.png" width="320" /></a></div>
The input (left) connects to a GPIO pin (pin 18, GPIO 24) and ground (pin 20), while the output (right) connects to the remote port of the camera.<br />
Here's the code.<br />
_____________________________<br />
<br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">#!/usr/bin/python</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">import RPi.GPIO as GPIO</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">import time</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">import sys</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">NrExposures = 1</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">ExposureTime = 30</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">TimeBetweenExposures = 6</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print ' '</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print 'Make sure that the camera remote port is connected to pins 18 (signal) and 20 (ground).'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print ' '</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">if len(sys.argv) == 1 :</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print 'No arguments provided. Will use single 30 sec exposure.'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">elif len(sys.argv) == 2 :</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> ExposureTime = int(sys.argv[1], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print 'Single', ExposureTime, 'seconds exposure.'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">elif len(sys.argv) == 3 :</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> ExposureTime = int(sys.argv[1], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> NrExposures = int(sys.argv[2], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print NrExposures, 'x', ExposureTime, 'seconds exposures.'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">else :</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> ExposureTime = int(sys.argv[1], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> NrExposures = int(sys.argv[2], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> TimeBetweenExposures = int(sys.argv[3], 10)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print NrExposures, 'x', ExposureTime, 'seconds exposures, with', TimeBetweenExposures, 'seconds delay.'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">TriggerPin = 24 # Broadcom pin 24 (P1 pin 18)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">GPIO.setmode(GPIO.BCM) # Broadcom pin-numbering scheme</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">GPIO.setup(TriggerPin, GPIO.OUT) # trigger pin as output</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">GPIO.output (TriggerPin, GPIO.LOW)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">time.sleep(1)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">counter = NrExposures</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print ' '</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print 'Start : %s' % time.ctime()</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">while (counter > 0):</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print ' Exposure nr', counter, 'started'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> GPIO.output(TriggerPin, GPIO.HIGH)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> time.sleep(ExposureTime)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> GPIO.output(TriggerPin, GPIO.LOW)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print ' Exposure nr', counter, 'ended'</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> counter = counter - 1</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> time.sleep(TimeBetweenExposures)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"> print ' '</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span></i>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print 'End : %s' % time.ctime()</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">GPIO.output(TriggerPin, GPIO.LOW)</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">GPIO.cleanup()</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print ' '</span></i><br />
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">print 'Goodbye.'</span></i><br />
______________________________<br />
<br />
The script is saved as Trigger.py and is made executable:<br />
<span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;"><br /></span>
<i><span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">chmod +x Trigger.py</span></i><br />
<br />
The script takes up to three command line arguments. The first argument is the single frame exposure time in seconds. The second argument is the number of exposures to take, and the third argument is the wait time between exposures, also in seconds.<br />
For example<br />
<br />
./Trigger.py 1 2 3<br />
<br />
will take 2 exposures of 1 second each, with a 3 second wait after each exposure. The exposure loop then takes 8 seconds; including the initial one second settling delay in the script, it runs for about 9 seconds.<br />
<br />
If only two arguments are given, the wait time will be set to a default value of 6 seconds.<br />
If only one argument is given, this is interpreted as exposure time. Only one exposure will be taken.<br />
If no arguments are given, the script will do a single 30 seconds exposure.<br />
<br />
Eventually, I may try to rewrite the INDI CCD driver to control my camera, as this will give me the possibility to dither between exposures. But for now this simple script will do the job.<br />
<br />
<div>
<br /></div>
<h2>Star repair in PixInsight</h2>
When stars start to get overexposed during data collection, their cores may become a different colour than the normally exposed outer halo. When the stacked image is stretched using MaskedStretch in PixInsight, stars are stretched less than the dimmer parts. This means that stars in the stretched image are not saturated, and can display an odd colour. PixInsight has a script that can correct the cores of partially saturated stars. It is called "Repaired HSV Separation", and can be found under Scripts -> Utilities. The script should be applied just before the first stretch. It will decompose the colour image into H, Sv, and V components, and repair the colour values of the stars.<br />
The different components are then to be combined using the ChannelCombination process (which is under ColorSpaces).<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1d3dkNVcVePmdwslWKJ-0vjGXCg3kppRNl_e2aNaYZaHDky5az6eiCxhvlRLBMBFjk_MqGa_2PDJ1FwkO-kW_j-3osc0tgf0PHMioaPKaiw19cxvXl2HjEfiAsJa-D2izWTj3y7_Van7S/s1600/Repaired+HSV+Separation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="476" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1d3dkNVcVePmdwslWKJ-0vjGXCg3kppRNl_e2aNaYZaHDky5az6eiCxhvlRLBMBFjk_MqGa_2PDJ1FwkO-kW_j-3osc0tgf0PHMioaPKaiw19cxvXl2HjEfiAsJa-D2izWTj3y7_Van7S/s640/Repaired+HSV+Separation.png" width="640" /></a></div>
The upper part of the dialog box is used to determine which images are to be created, while the lower part is used for the repair of the H, Sv, and V channels. For best results, mainly the Repair level parameter needs to be adjusted.<br />
<a href="http://pixinsight.com.ar/" target="_blank">Alejandro Tombolini</a> (who brought this script to my attention) recommends using the unrepaired V channel when combining the channels, but experiment to find out whether the repaired or unrepaired V channel works best. Be sure to select the HSV colour space.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvz6k1P1TICoztXt7PtMufHXbS6mf6HNYtozjW_a0Qbc9vYPZOPrFOcRPHahnRMwhzao1KQ1tgLVdrKbJa8DAn2BTJRre-tHv0pOePlamf0xDmIrujtW9v7h0dyf7FHzEaMeAoiGR5Hsiq/s1600/ChannelCombination.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="204" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvz6k1P1TICoztXt7PtMufHXbS6mf6HNYtozjW_a0Qbc9vYPZOPrFOcRPHahnRMwhzao1KQ1tgLVdrKbJa8DAn2BTJRre-tHv0pOePlamf0xDmIrujtW9v7h0dyf7FHzEaMeAoiGR5Hsiq/s640/ChannelCombination.png" width="640" /></a></div>
Here's how it affected one of my most recent images, a before and after shot of the Pleiades (M45). Look at the cores of the brightest stars: in the original image, the cores are pink, while in the repaired image they are blue, the same colour as the outer haloes.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfDMzRbWdHSRduvo4NdEX3qQAOrJi8vmp8pP95ufFXoloiINHLRvjiBexCZrT0IfYqTH4o2UybZbhPNGJhwqyy4zmucPU0N6LaVuBORVqkMk2rhoEd2b1stzZy732Gp-gdvtXTkmN_foe5/s1600/Before_After_Repaired.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfDMzRbWdHSRduvo4NdEX3qQAOrJi8vmp8pP95ufFXoloiINHLRvjiBexCZrT0IfYqTH4o2UybZbhPNGJhwqyy4zmucPU0N6LaVuBORVqkMk2rhoEd2b1stzZy732Gp-gdvtXTkmN_foe5/s640/Before_After_Repaired.jpg" width="640" /></a></div>
<h2>Correcting dark lines in DSLR astro images</h2>
My Pentax DSLR suffers from dark horizontal lines when I photograph bright stars. I'm not sure about the cause of this, but it may be some reverse blooming or ADC related issue. This issue isn't uncommon for digital cameras, but it sure is a nuisance.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOinZk6paQISX_DLI96kI7_ybyHIOIx8J5WGy6eQqkmMKUG-bJtesBIjmYtC5YDdEAB5zR4trECYuLtCiyV7mpJyQ9cWRvweWyzQML2PxIZvphKB2zl9nLPdpdeb4tzMAAnlDIjxXAB_GH/s1600/original.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOinZk6paQISX_DLI96kI7_ybyHIOIx8J5WGy6eQqkmMKUG-bJtesBIjmYtC5YDdEAB5zR4trECYuLtCiyV7mpJyQ9cWRvweWyzQML2PxIZvphKB2zl9nLPdpdeb4tzMAAnlDIjxXAB_GH/s1600/original.jpg" /></a></div>
<br />
<a href="http://pixinsight.com.ar/" target="_blank">Alejandro Tombolini</a> showed in one of his processing examples how to deal with these lines. Here's my adaptation of his process.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR7pxWphSX7bluHvJPv5CjPPY8bG8auehRQeul7FU0MLKxTQOoUzfUmbCeUUdojjuw9H1HVJ9MBRzNHIWQoYFH-XYWg89dQug5n9Gdk6tBIdJIW1kul8TpY1vsHQ-Btlzd-sDYcX9zoEj-/s1600/antibloomdetail.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR7pxWphSX7bluHvJPv5CjPPY8bG8auehRQeul7FU0MLKxTQOoUzfUmbCeUUdojjuw9H1HVJ9MBRzNHIWQoYFH-XYWg89dQug5n9Gdk6tBIdJIW1kul8TpY1vsHQ-Btlzd-sDYcX9zoEj-/s640/antibloomdetail.jpg" width="640" /></a></div>
<br />
I will use the CanonBandingReduction script to correct the lines. This script works on entire images only, and can introduce an uneven background and other artefacts when used on images that do not have bands across their entire width.<br />
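I don't know the script's exact internals, but the basic idea behind this kind of banding reduction is to estimate a per-row offset and subtract the row-to-row variation; a rough numpy sketch (not the actual CBR code):<br />
<pre>
# Rough sketch of row-banding reduction (not the actual CBR script).
# Each row's offset is estimated with its median; subtracting the
# deviation from the global median removes the horizontal bands
# while preserving the overall background level.
import numpy as np

def reduce_banding(img):
    row_offsets = np.median(img, axis=1, keepdims=True)
    return img - (row_offsets - np.median(row_offsets))
</pre>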
I therefore start by making a preview that contains the area I want to correct. I leave some margin, because later on I will clone the preview and shrink the clone.<br />
By dragging the preview onto the workspace, I create a new image, to which I apply the CBR script.<br />
The next step is to make this new image the same size as the original. For this I use the Crop tool.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhis0mGP0Pb-Bv8g_3zho4XVjIZck1B6EoDJswuP5Hv5DXNAibMeoLe2TzRLPdFBqVxA5dKqCMYRtwmuPOUxm4Rk6kbz5qldkka6ku11_GQ4l41ySly_C1LxAk7fUdsH0Ml9VDvIWvokA5w/s1600/gradientmergemosaic_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="516" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhis0mGP0Pb-Bv8g_3zho4XVjIZck1B6EoDJswuP5Hv5DXNAibMeoLe2TzRLPdFBqVxA5dKqCMYRtwmuPOUxm4Rk6kbz5qldkka6ku11_GQ4l41ySly_C1LxAk7fUdsH0Ml9VDvIWvokA5w/s640/gradientmergemosaic_2.png" width="640" /></a></div>
<br />
Set the margins such that the image becomes the correct size, with the corrected area now in the same place as the preview in the original. If the result is OK, the image is saved as an XISF file.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjPGVWc4-wFUmVLj-90qDYnu_HUo4L4CjJDMkTn2Sf9J171qnNZ9sdPuRRE-2oiL5Vp8YHpis6ZDGz7OctVkr8b8DFLE7A4O_bHI2XKXGNXdgB2YvhrBXsFzNhrslA5IfzluXn2JllhOH8/s1600/preview2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="476" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjPGVWc4-wFUmVLj-90qDYnu_HUo4L4CjJDMkTn2Sf9J171qnNZ9sdPuRRE-2oiL5Vp8YHpis6ZDGz7OctVkr8b8DFLE7A4O_bHI2XKXGNXdgB2YvhrBXsFzNhrslA5IfzluXn2JllhOH8/s640/preview2.jpg" width="640" /></a></div>
<br />
Next I shrink the preview in the original image and create a hole in the image where the preview is. For this I use PixelMath.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYoiE5Ibl55soG5mJybhkANd1xLKgkj33g2rWFl3irPnj-c2pNuttrv1L6cfC0-OGW7VXnHEozmulbfug918tCxqfkjCT7cQokOm7bkYmAj6YX1l_P0Wtdg4-Vid8AiYVRFRDSa3xNnFsM/s1600/PixelMatchinrect.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="598" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYoiE5Ibl55soG5mJybhkANd1xLKgkj33g2rWFl3irPnj-c2pNuttrv1L6cfC0-OGW7VXnHEozmulbfug918tCxqfkjCT7cQokOm7bkYmAj6YX1l_P0Wtdg4-Vid8AiYVRFRDSa3xNnFsM/s640/PixelMatchinrect.png" width="640" /></a></div>
<br />
A new image is created with a black patch where the (smaller) preview was.<br />
This image is also saved.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnfhzl-BTPkocawnaxliyuZFUfv7ERan5qvSS_5O8zbXNJG_WJWV8vQsR0gWq7uUnqWkE54KodbeCcEPCNwLUSRRe6mdTqxZoXvnE3nQ6l9kbEiUIn1__pRPVxznyk2Cgwsb4YrKLfoZfg/s1600/integrationwhole2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="476" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnfhzl-BTPkocawnaxliyuZFUfv7ERan5qvSS_5O8zbXNJG_WJWV8vQsR0gWq7uUnqWkE54KodbeCcEPCNwLUSRRe6mdTqxZoXvnE3nQ6l9kbEiUIn1__pRPVxznyk2Cgwsb4YrKLfoZfg/s640/integrationwhole2.jpg" width="640" /></a></div>
<br />
Finally the two saved images (the corrected preview, and the uncorrected image with the black patch) are merged using GradientMergeMosaic.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYKJE-dzzRNWT_Dsa_sO90YcydxLiaDXjdhfRcPN56LiSGXUmRWOTdP0sDpIf9s5hizxLswJnxRHNXByDbZB4llmz6y2q9mHWXLIh0JO5KfgMazEEjICfjv6uQEEErRHQ0-cKwX9cATNvB/s1600/GradientMergeMosaic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="598" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYKJE-dzzRNWT_Dsa_sO90YcydxLiaDXjdhfRcPN56LiSGXUmRWOTdP0sDpIf9s5hizxLswJnxRHNXByDbZB4llmz6y2q9mHWXLIh0JO5KfgMazEEjICfjv6uQEEErRHQ0-cKwX9cATNvB/s640/GradientMergeMosaic.png" width="640" /></a></div>
And this is the corrected image<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfydG5r7aGk_io6aLFOQIiRkrQ8E2sQaMxDWdVTnAmb7WZyexnQ5lJ6bT3ey5GGfV0j2_Xm0SXqpEReB-B6fLRBpOoBE58_DdfBAFz04R1cc6cglzlrwtorFFvd3gWetTycsn8OuZH1BNf/s1600/Corrected.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfydG5r7aGk_io6aLFOQIiRkrQ8E2sQaMxDWdVTnAmb7WZyexnQ5lJ6bT3ey5GGfV0j2_Xm0SXqpEReB-B6fLRBpOoBE58_DdfBAFz04R1cc6cglzlrwtorFFvd3gWetTycsn8OuZH1BNf/s1600/Corrected.jpg" /></a></div>
<br />
<h2>Removing hot pixels in a stacked image</h2>
Sometimes even an aggressive hot pixel filter won't remove all hot pixels. Here's a technique that can remove any residual hot pixels in a final stacked image. I use PixInsight's Morphological Transformation with a star mask to remove these nuisances.<br />
Here's a crop of an image, showing what I'm talking about. The image was taken with a DSLR and consists of a stack of 10 sub frames exposed for 15 minutes each at ISO 800. My camera, a Pentax K20D, is getting old, and I always have lots of hot pixels in my images. Calibration removes most, but frequently a number remain after image integration. The technique which I describe here will dim the remaining pixels.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-TjsePPvwwaCrgLot6hBl6eAYfpZkFWEbWr5qF7TMSHbplUTO9l9B-1LCvqFXEUnvkBrB7NLtedahYZNzNFAnnbxA_tweYDE2Dy6iLTdpwac-VoDQlboFmBc8lyOEnXRvQzHVTEsDdHhI/s1600/integration_whotpixels.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="488" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-TjsePPvwwaCrgLot6hBl6eAYfpZkFWEbWr5qF7TMSHbplUTO9l9B-1LCvqFXEUnvkBrB7NLtedahYZNzNFAnnbxA_tweYDE2Dy6iLTdpwac-VoDQlboFmBc8lyOEnXRvQzHVTEsDdHhI/s640/integration_whotpixels.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">hot pixels after stacking</td></tr>
</tbody></table>
I start by making a luminance copy of the image in its linear state and apply an STF stretch to this grayscale image. Then I use the StarMask tool with a low value for Scale (typically 3 works ok) and a noise threshold of 0.5 (to be experimented with). I decrease Large-scale, Small-scale and Compensation (to 1, 0, 1) and Smoothness (to about 6 - 8), and apply the StarMask tool to the luminance copy. It may be necessary to tweak the parameters: no actual stars should end up in the "star mask" that is created, only the hot pixels.<br />
When I'm satisfied, I apply the mask to the original colour image.<br />
For pixel removal I use Morphological Transformation with Morphological Median as the operator: Amount set to about 0.5, Iterations to 4 - 5, and the Structuring Element to 9 pixels with a circular pattern.<br />
Apply the tool to the image. If hot pixels of a certain colour remain, I split the RGB channels and repeat the process on the channel that still has them. The result is this:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQM5DCk-1ua4a96IqTmvC3GW-GH_87qhKGm1T_7QU4I4G8awdmUKxS8CsfJW1jW2C8xrEVPEWgXhSdffWipMB_NclWVKz0FLy_Dxcfpqd0M4Pl3lFaphuc8vS4Gm7H_odUBeXQfQCdBdEc/s1600/integration_wohotpixels.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="488" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQM5DCk-1ua4a96IqTmvC3GW-GH_87qhKGm1T_7QU4I4G8awdmUKxS8CsfJW1jW2C8xrEVPEWgXhSdffWipMB_NclWVKz0FLy_Dxcfpqd0M4Pl3lFaphuc8vS4Gm7H_odUBeXQfQCdBdEc/s640/integration_wohotpixels.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Same crop after hot pixel removal</td></tr>
</tbody></table>
Further tweaking of the star mask and morphology parameters can improve this result even more, of course.<br />
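For those who like to see the idea in code: outside PixInsight, the core of this trick is a median filter that is blended in only where the mask lets it through. The Python sketch below is just an illustration of that idea using scipy; the function and parameter names are my own, and it is not how Morphological Transformation is actually implemented.<br />
<pre>
import numpy as np
from scipy import ndimage

def dim_hot_pixels(channel, hot_mask, amount=0.5, size=3):
    """Blend a median-filtered copy back in, but only where the mask is white.

    channel:  2-D float array (one colour channel of the stacked image)
    hot_mask: 0..1 array, close to 1 on hot pixels, close to 0 elsewhere
    amount:   like MT's Amount parameter (0.5 gives a 50/50 blend)
    """
    medianed = ndimage.median_filter(channel, size=size)  # morphological median
    blended = channel + amount * (medianed - channel)     # partial application
    return channel * (1.0 - hot_mask) + blended * hot_mask

# Hypothetical usage on an H x W x 3 float image 'rgb' with a 2-D 'hot_mask':
# cleaned = np.dstack([dim_hot_pixels(rgb[..., c], hot_mask) for c in range(3)])
</pre>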
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-85972130116074065662016-11-20T09:02:00.002-08:002016-11-20T09:02:51.901-08:00First steps in guidingFinally I have taken the plunge and invested in a guiding setup. I decided on the SkyWatcher ST80 scope with a ZWO ASI120MM camera. The camera is the older USB2 version.<br />
As I don't want to take my laptop out in the field, I intend to use a RaspberryPi as a guiding computer.<br />
The last couple of days and nights, I have been trying to get this to work. My configuration at the moment is this:<br />
ASI120MM connected to RaspberryPi, running Ubuntu Mate as an operating system.<br />
The Pi also holds an INDI server and the lin_guider software. The camera connects to the Pi and receives guiding pulses from lin_guider, which it passes on to the mount (SW AZ-EQ6 GT) via its ST4 port.<br />
Installation was quite straightforward, despite warnings that the camera driver may not be stable. Setting the exposure time to 1 sec in Lin_guider seems to work fine though.<br />
Last night, despite partial cloud cover, I was able to test the guiding, and it worked fine.<br />
Lin_guider connected to the camera, and frames started to flow in. Focussing was a bit of a hassle. I had to take my laptop out (despite the dew), and because there is no live view, it took a while to get focus right. In the end I had my setup guiding on Vega (which was grossly overexposed at any gain setting), and later on a nearby much fainter star. This worked fine until the stars disappeared behind my neighbour's trees and clouds rolled in.<br />
I haven't tried imaging yet, and I still have to figure out the best settings for PID gain, but so far so good.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-54191498389856276212016-09-03T05:41:00.001-07:002016-09-03T05:43:51.219-07:00Creating a customized "Batch Process" in PixInsightSome processes in PixInsight are adapted for large batches of images. But sometimes you want to do a sequence of process steps, for which there is no batch process, on several images. Opening each image and applying a number of processes is quite tedious.<br />
Fortunately, PixInsight has a solution for this. It involves an image container and a process container.<br />
For any process in PI, if you drag the small triangle in the lower left corner onto an image, that process will be applied to the image. The same works for a collection of images, if these are in an image container. And it doesn't have to be a single process: it can be any number of processes collected in a process container. How is this done?<br />
<h3>
Prepare the process container</h3>
Open an image and apply the processes you want to batch to that and other images.<br />
Now open the image's history explorer, which should be located on the left edge of the workspace. Drag the small triangle at the bottom left to an open area in the workspace. This will create an instance of the process history of that image as a process container in the workspace.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_jUOGnFvd5mrZbKOgtsu0DPUEHnVy3j7t0nY-arcDxrjSYwfYbYYiuGFru6787ruTa_TJJ-2Q3VyTEaHAUObWxzImzD2qumkXeLL4NrU4pFHe6_Hh8ser7IckvDb6o9guTNCmT9pYXqLM/s1600/PI_History.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_jUOGnFvd5mrZbKOgtsu0DPUEHnVy3j7t0nY-arcDxrjSYwfYbYYiuGFru6787ruTa_TJJ-2Q3VyTEaHAUObWxzImzD2qumkXeLL4NrU4pFHe6_Hh8ser7IckvDb6o9guTNCmT9pYXqLM/s1600/PI_History.png" /></a></div>
Now you can close the image without saving.<br />
<h3>
Create an image container</h3>
Next, create an image container by right-clicking anywhere in the workspace, or by pressing Ctrl+Alt+I.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi04xSmgOV_AEgrXFaWpvTXzjexPz8tyCngYD0Ewf5s7hi6k2lKv03EdSn8_G9s0WsafMcRToqcn06PHnZOdyevD_4jS8x2ce1uBAAdIw_d3jxFh026tGS1Gve9L30iSe_t8VfE46g5HJu5/s1600/PI_Workspacex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi04xSmgOV_AEgrXFaWpvTXzjexPz8tyCngYD0Ewf5s7hi6k2lKv03EdSn8_G9s0WsafMcRToqcn06PHnZOdyevD_4jS8x2ce1uBAAdIw_d3jxFh026tGS1Gve9L30iSe_t8VfE46g5HJu5/s1600/PI_Workspacex.png" /></a></div>
This will create an image container in the workspace. Open the container and add the image files you want to batch process. Also supply a name for the output directory where you want the processed images to be saved. Finish by dragging the small triangle to an empty spot in the workspace. This will create a new instance of your image container, with all the images in it.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiA2gyJpfNBJxQX4i4Ha0-8qiDH63vodz1Rw37OHsfK1suGmCeTSXvtBsSOaNmORbABWqipN58iNGs0LkS1ieQ_EaRktyU9eFbiF9RWCNzlghR6CSzHZyFer59g8yRSbB2wRdDiVEdhAZwX/s1600/PI_ImageContainer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiA2gyJpfNBJxQX4i4Ha0-8qiDH63vodz1Rw37OHsfK1suGmCeTSXvtBsSOaNmORbABWqipN58iNGs0LkS1ieQ_EaRktyU9eFbiF9RWCNzlghR6CSzHZyFer59g8yRSbB2wRdDiVEdhAZwX/s1600/PI_ImageContainer.png" /></a></div>
Apply the processes in the process container by simply dragging the process container onto the image container that contains the images.<br />
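The pattern is easy to picture in ordinary code. As a rough analogy (this is not PixInsight code, and the file names and steps are made up), a process container is an ordered list of steps, and an image container is a list of files that the whole sequence is applied to:<br />
<pre>
from pathlib import Path

# Hypothetical steps standing in for PixInsight processes.
def crop(data):    return data   # placeholder
def denoise(data): return data   # placeholder

process_container = [crop, denoise]                        # the sequence of steps
image_container = sorted(Path("lights").glob("*.fits"))    # the files to process
out_dir = Path("processed")
out_dir.mkdir(exist_ok=True)                               # the output directory

for path in image_container:
    data = path.read_bytes()           # stand-in for loading the image
    for step in process_container:     # "drag the process container onto it"
        data = step(data)
    (out_dir / path.name).write_bytes(data)
</pre>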
That's it. You've just applied several processes to a batch of images.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-71816553247656757042016-08-16T15:05:00.000-07:002016-08-16T15:05:01.238-07:00Vibration damping the EQ3 aluminium tripodThe EQ3 mount with the aluminium tripod is generally considered not to be suitable for astrophotography. Still, it's a nice, portable mount that, under the right circumstances, can produce relatively good images.<br />
There is an <a href="http://www.cloudynights.com/page/articles/cat/articles/beefing-up-hollow-aluminum-tripod-legs-r3016" rel="nofollow" target="_blank">article on cloudynights.com</a> that describes how the tripod can be beefed up. The author of that article increases the weight of the tripod by putting rebar in the upper legs, and a rectangular wooden dowel in the lower legs.<br />
The problem with the tripod is not just its weight, but rather its vibration behaviour.<br />
Filling up the hollow legs with dowels and rebar doesn't necessarily improve the vibration characteristics of this mount. A person commenting on the cloudynights article suggested that the legs can also be filled with sand. This will result in both a heavier tripod and different vibration characteristics.<br />
I decided to modify my tripod by inserting wooden dowels in the upper and lower legs. But I also secured these dowels to the plastic and aluminium structure. Hopefully, this will improve the vibration damping of the tripod, without it becoming too heavy.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnONlZh7WW_R9FxVKyv-8c_lr7h1iYM7KVlC1HvobgMSAB2ezRa2xcRZOxo5c-gURhwrDFcy3NZ-BQ3yvEd9J9qEmf3Q1c5q1aorL1o4sTv4my8JZB4t2ZaVoTgMyluUjNeBQTlwTLIo44/s1600/lowerleg.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnONlZh7WW_R9FxVKyv-8c_lr7h1iYM7KVlC1HvobgMSAB2ezRa2xcRZOxo5c-gURhwrDFcy3NZ-BQ3yvEd9J9qEmf3Q1c5q1aorL1o4sTv4my8JZB4t2ZaVoTgMyluUjNeBQTlwTLIo44/s640/lowerleg.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Wooden dowel cut to size, ready to be inserted in the lower leg</td></tr>
</tbody></table>
Starting with the lower parts of the legs, I removed all the plastic parts and inserted oak dowels into the aluminium tubes. I noticed that the plastic feet of the tripod are hollow and extend a bit up into the legs. By making the dowels somewhat thinner and drilling a hole where the hole in the plastic is, I could fasten the wooden dowel to the plastic foot, and later to the aluminium leg and even the top lid of the leg.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPAcNZH2XpgPeM2QjAbb6E6QmXQlYkn_STvCRFaYPi7m2qc2nQ6PXQXapRK971aUCyr8jXs2UmMcfdATpHl1FBkS-YQ8yMbv0HR7-LrHQL0_wxTGB5OHEAd2RRwByh8DHGdpYYSD2eWsrF/s1600/lowerlegtop.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"></a> </div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzHg1zUrAUT4dFjW2Pa3ZCkD4Cuosnu_oNbgtfzTQPg6iWKZimet2Ag602AV5_w4PUzagHw-OOiTXTprtoBNioMzBCqPY8TOP6kWhTiOlrV1NLElN-Gn__HWqakUENA0AF0QxO0KoHaLmy/s1600/lowerlegdetail.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzHg1zUrAUT4dFjW2Pa3ZCkD4Cuosnu_oNbgtfzTQPg6iWKZimet2Ag602AV5_w4PUzagHw-OOiTXTprtoBNioMzBCqPY8TOP6kWhTiOlrV1NLElN-Gn__HWqakUENA0AF0QxO0KoHaLmy/s640/lowerlegdetail.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Wooden dowel will be secured to the plastic fott and the aluminium leg</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPAcNZH2XpgPeM2QjAbb6E6QmXQlYkn_STvCRFaYPi7m2qc2nQ6PXQXapRK971aUCyr8jXs2UmMcfdATpHl1FBkS-YQ8yMbv0HR7-LrHQL0_wxTGB5OHEAd2RRwByh8DHGdpYYSD2eWsrF/s1600/lowerlegtop.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPAcNZH2XpgPeM2QjAbb6E6QmXQlYkn_STvCRFaYPi7m2qc2nQ6PXQXapRK971aUCyr8jXs2UmMcfdATpHl1FBkS-YQ8yMbv0HR7-LrHQL0_wxTGB5OHEAd2RRwByh8DHGdpYYSD2eWsrF/s640/lowerlegtop.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Top part of the lower leg</td></tr>
</tbody></table>
I then inserted two round beech dowels (12 mm diameter) into the upper parts of the legs, making sure there was a tight fit at either end. Unfortunately, it's not possible to fasten these dowels other than through a tight fit and the small screws that hold the leg spreader in place.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeeyMBNp3UcVUi7ffDKZO29KjYiKhSM8a9MMuVUAmVBpbfrTxiGoISCXfDjbtKONSAxD9RWhO7MRVREwDEynHTFH8fQQpzyVoFmBEnSFZ7L6Fqner31Sd7YZoR68PZz4E76vSSS1-wzL_G/s1600/upperleg.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeeyMBNp3UcVUi7ffDKZO29KjYiKhSM8a9MMuVUAmVBpbfrTxiGoISCXfDjbtKONSAxD9RWhO7MRVREwDEynHTFH8fQQpzyVoFmBEnSFZ7L6Fqner31Sd7YZoR68PZz4E76vSSS1-wzL_G/s640/upperleg.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">One half of an upper leg</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsXDvSouW43yFbIVOEWCjCYV-XzIOArdjaygb5Y67A5mF8p8pvT-llGiS6wSDcPs1bXacgpNpldluwO3sWxwz_XIFJu6bS_8y8xlnhZx0_x7nJTQ2Asxxf2fZYu_5Nf89xrDOXleMBMytC/s1600/upperlegdetail.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="484" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsXDvSouW43yFbIVOEWCjCYV-XzIOArdjaygb5Y67A5mF8p8pvT-llGiS6wSDcPs1bXacgpNpldluwO3sWxwz_XIFJu6bS_8y8xlnhZx0_x7nJTQ2Asxxf2fZYu_5Nf89xrDOXleMBMytC/s640/upperlegdetail.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Dowels inside the upper leg</td></tr>
</tbody></table>
It doesn't take long to get all three legs done.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjL_sgAN2xMnilVS0ACLGeVJJuspFwORRbs69TH5Dww2T8ucpoV7YvrfCj2cp3ON5akdcgTfhV4dtsK3cINtmRfFA-35l6k8eVRdW_0-m3WPuie3QLRRQGCjlfXEsZR27_bFaYbutXq3WwC/s1600/tripodlegs.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjL_sgAN2xMnilVS0ACLGeVJJuspFwORRbs69TH5Dww2T8ucpoV7YvrfCj2cp3ON5akdcgTfhV4dtsK3cINtmRfFA-35l6k8eVRdW_0-m3WPuie3QLRRQGCjlfXEsZR27_bFaYbutXq3WwC/s640/tripodlegs.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">All three legs completed. Time for reassembly</td></tr>
</tbody></table>
Finally, after reassembling the tripod, it looks just as it did before.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdQWgO2CjF_ZQ7BsKY2s5WuDTgD_LaUVV2uSnnPOwLgMcRb5qjCLq-CV7z82PEdcqYPMn5fqWTO80dOo5YKkKOLRFBdWqyB6s0wJ5pdi5k7t6ZiDB-sCr_Ma0LKOF_-n6fwyEmwC5aW74F/s1600/eq3_tripod.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdQWgO2CjF_ZQ7BsKY2s5WuDTgD_LaUVV2uSnnPOwLgMcRb5qjCLq-CV7z82PEdcqYPMn5fqWTO80dOo5YKkKOLRFBdWqyB6s0wJ5pdi5k7t6ZiDB-sCr_Ma0LKOF_-n6fwyEmwC5aW74F/s640/eq3_tripod.jpg" width="480" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Done!</td></tr>
</tbody></table>
The tripod now weighs 3.6 kg, not much more than before, but it feels steadier.<br />
For a short while I also thought of filling the tripod with sand. I found out that the tripod legs are not sealed at the lower ends, so most of the sand would run out after a while. Filling the tripod with sand would also make it much heavier. Hopefully, the wooden dowels will improve the damping.<br />
Now all that remains is a clear night to test the tripod.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-44961171179596305742016-08-13T15:30:00.003-07:002016-08-14T08:58:51.574-07:00First experience with INDI on Raspberry Pi - part 2Last week, when I tried to control my mount through INDI on a Raspberry Pi, I managed to install the server and connect from my laptop to the INDI server on the RPi. However, the mount didn't respond. It turned out that the USB serial cable didn't work anymore.<br />
Yesterday I received an EQDIR cable from FLO, and connected it to the mount. After some adjustment of the parameters in Linux and the INDI client, it all worked perfectly.<br />
Now I can control my mount from PixInsight or any client that can speak the INDI protocol.<br />
The next step will be to install and test servers.<br />
A short recap of the installation so far:<br />
<ol>
<li>Install an Ubuntu Mate image on an SD card for the RPi</li>
<li>Connect the RPi to the home WiFi network and set it up so I can connect with PuTTY</li>
<li>Connect to the INDI repository, then download and install the INDI server</li>
<li>Add $USER to the dialout group</li>
<li>Create a permanent USB entry for the connector</li>
<li>Start the server</li>
<li>Start the client and connect to the server</li>
<li>Configure the site and the mount in the client</li>
</ol>
So far PixInsight can connect to the mount and send goto commands. With the search capability, I can just search for, say, M27, and the mount will slew to it.<br />
Of course, this assumes that the mount is aligned, and so far PixInsight can't do a 2-star alignment.<br />
I just hope that this will be implemented soon.<br />
For the time being, my intended workflow is as follows.<br />
<ol>
<li>Haul out the mount and set up</li>
<li>Level mount</li>
<li>Start mount with SynScan</li>
<li>Do a polar and a 3-star alignment</li>
<li>Park the mount and power off</li>
<li>Disconnect the SynScan</li>
<li>Connect the RPi and boot</li>
<li>Connect the client</li>
</ol>
Further testing is delayed by clouds :-(<br />
<br />
Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-36531972206537524972016-08-03T15:36:00.000-07:002016-08-03T15:50:05.720-07:00Note on Dynamic Background ExtractionAstro images almost always have a background gradient that needs to be removed. Gradients can have two basic causes: either they are due to limitations of the optical system (vignetting), or to uneven illumination of the night sky. Most of us live and photograph in light-polluted environments, and our astro images incorporate stray light from street lamps or city lights. Even when photographing from a dark site, there is the inevitable sky glow. Whatever the cause of an uneven background, it is seldom something we want in our images.<br />
PixInsight has two processes for gradient removal: Automatic Background Extraction (ABE) and Dynamic Background Extraction (DBE). These two processes work slightly differently from each other, so it is a good thing to know them both. ABE is an automatic process that does most of the work for you, especially the more laborious part of placing samples in the image. DBE, on the other hand, allows for more user control.<br />
In this article, I intend to share my experience with the DBE process, and how I use the various settings in the DBE control window.<br />
In short, what you do with DBE is take samples of the background in your image and create a model of the image background based on those samples. (Note that I assume you are working with an RGB colour image.)<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5gz7bnDGXo0Bw7_0bd14FpUKnfigjIxyZjo7Xg_mf-hTiILIE4l9OJv4KYHQVT8clIJgj2mN7za7h9qwM1gJGSLqUyCzihctFlKt3Z-dEuI8yTpFK4o3Qhktpm63R7TTQEnLGnNS-CfSf/s1600/DBE1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5gz7bnDGXo0Bw7_0bd14FpUKnfigjIxyZjo7Xg_mf-hTiILIE4l9OJv4KYHQVT8clIJgj2mN7za7h9qwM1gJGSLqUyCzihctFlKt3Z-dEuI8yTpFK4o3Qhktpm63R7TTQEnLGnNS-CfSf/s1600/DBE1.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Dynamic Background Extraction</td></tr>
</tbody></table>
When you open the DBE process (Process | BackgroundModelization | DynamicBackgroundExtraction), you start by connecting it to an image, the target, in your workspace. This is done by either clicking in the image you wish to connect to, or clicking the reset icon at the bottom right of the control window (the four arrows pointing inwards). The latter option will also reset all settings in the control window. The active image is now linked to the process and it shows the symmetry lines that can be used by DBE. More on the symmetry lines in a moment.<br />
<h3>
Target View</h3>
Each time you click in the target window, a new sample will be created at that position. In the target view you will see how individual pixel values will be used in the creation of the background model. Each sample has a position (anchor x, y) and a size (radius). The square field in the target view panel shows how each pixel is used in the model. This field should ideally consist of only bright pixels. If a pixel has a colour, then it will only be used in the calculation of the model for that colour. The three values Wr, Wg, Wb are the weights in red, green and blue for the combined pixels in the sample. They determine how much this sample will contribute to the background model. In this view you can also determine if symmetries are to be used. If you have an image which you know has a symmetrical background (vignetting for example), then you can create samples in one place where the background is visible, and use those samples in other parts of the image, even if the background there is not visible. When you click on one of the boxes (H for horizontal, V for vertical, D for diametrical), a line will show where the sample will be used. Note that you can control the symmetry for each individual sample. Use with care.<br />
<h3>
Model Parameters</h3>
In this panel you will set how strict your model is going to be. The most important value is Tolerance. Increase this if you find that too many samples are rejected. The default is 0.5, but expect to use values up to 2.5 regularly, and in extreme cases even higher than 5 - 7. But try to keep this value as low as possible. Once you have created all your samples and are satisfied with where you placed them, you can decrease this value somewhat and recalculate the samples, until samples start being rejected. Choose the lowest value you can get away with, as this will result in a better approximation of the true background.<br />
Smoothing factor determines how smooth your model is going to be. If you set this to 0.0 then the background will follow your samples very strictly. Increase this value to get a smoother background model if you see artefacts in the model.<br />
<h3>
Sample Generation</h3>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrHvbU9QGn8Pnjzhll9MHj5FIfExZVs6HPetqW7Kv4wJWU65xK9rkfI1SCYNz4jdnjKsPESbGXto8N6tNcYnop6QkhTYJMFXZVAjU0D2a51aTwpxDcYYSCBh9N3Mz866jZ4nxizDyCXc4y/s1600/PI_DBE_Sample_generation.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrHvbU9QGn8Pnjzhll9MHj5FIfExZVs6HPetqW7Kv4wJWU65xK9rkfI1SCYNz4jdnjKsPESbGXto8N6tNcYnop6QkhTYJMFXZVAjU0D2a51aTwpxDcYYSCBh9N3Mz866jZ4nxizDyCXc4y/s1600/PI_DBE_Sample_generation.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">DBE Sample Generation</td></tr>
</tbody></table>
DBE lets you create your own samples, which is great if you have an image with lots of stars or nebulosity, but it can also create samples for you.<br />
The first parameter sets the size of the samples. The samples will be squares of "sample size" pixels on a side. Use the largest samples that will not cover any stars. Obviously, if you have an image of the Milky Way, you will need to keep this value small, or you won't be able to position samples without covering stars.<br />
Number of samples determines how many samples will be created across the image. It is generally best to use more samples. If you use too few, your background model may not represent your true background. Even if you have a linear background, you can model it with many samples. On the other hand, if you have a more complicated background, you can't model it with, say, three samples.<br />
Minimum sample weight is only important if you let the process create samples. If you know that you have a strong gradient in the background, you should decrease its value to maybe 0.5 in order to create more samples. This parameter is used together with Tolerance to create samples in areas with more gradient.<br />
<h3>
Model Image</h3>
This is where you can set how your background model will be represented as an image. It is probably the least important panel, so I have no further comments on it.<br />
<h3>
Target Image Correction</h3>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8dgNgMZH5-0hpE8syAcCju88esvsRlXEfE7W_lBnbSHkgPlcTdLJ5gAMaXOz0IEhWT0OmkIhpRIgRtG0u9yEitppy2JNoN_osNlRwA-kwRsyLhFQPtvKbFd8szvpowwP_YVejFyGnfOdT/s1600/PI_DBE_TargetCorrection.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8dgNgMZH5-0hpE8syAcCju88esvsRlXEfE7W_lBnbSHkgPlcTdLJ5gAMaXOz0IEhWT0OmkIhpRIgRtG0u9yEitppy2JNoN_osNlRwA-kwRsyLhFQPtvKbFd8szvpowwP_YVejFyGnfOdT/s1600/PI_DBE_TargetCorrection.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">DBE Target Correction</td></tr>
</tbody></table>
This is probably the most important panel, as it is here you determine which type of gradient you want to remove. There are three options for gradient removal: none, which you would use to test settings without applying the process to your image; subtraction, which is used to remove gradients from light pollution or sky light; and division, which is used to remove gradients caused by the optical system.<br />
Examine your image and determine the most likely cause of the gradients. If you find that you have gradients due to both vignetting and light pollution, you may have to apply the DBE process twice, but in many cases once is enough. If you need to apply DBE twice, it seems most logical to get rid of vignetting first, since it has affected all light entering your imaging setup. You would then first apply division as your correction method, and secondly apply subtraction with a new DBE process.<br />
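To make the two correction modes concrete, here is a rough numpy sketch. DBE fits a far more sophisticated surface to the samples than this; the low-order polynomial below is only a stand-in, meant to show how a model is built from sample positions and values, and how subtraction and division are then applied (all names and the fitting method are mine):<br />
<pre>
import numpy as np

def fit_background(xs, ys, vs, shape):
    """Fit a smooth surface to background samples (a crude stand-in for DBE).

    xs, ys: 1-D float arrays of sample centre coordinates (column, row);
    vs: the median background value of each sample, for one colour channel.
    """
    ones = np.ones(len(xs))
    A = np.column_stack([ones, xs, ys, xs * ys, xs**2, ys**2])
    coeffs, *_ = np.linalg.lstsq(A, vs, rcond=None)     # least-squares fit
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    B = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel(),
                         (xx * yy).ravel(), (xx**2).ravel(), (yy**2).ravel()])
    return (B @ coeffs).reshape(shape)

def correct(img, model, mode):
    if mode == "subtraction":   # additive gradients: light pollution, sky glow
        return img - model + np.median(model)   # keep the overall level
    else:                       # division: multiplicative gradients, vignetting
        return img / model * np.median(model)
</pre>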
You can choose to view your background model, or to discard it. I always leave this option unchecked, since I want to examine my model. This is handy in case you want to refine your samples and settings. If you find that the model looks complicated, blotchy and with several colours, then you are probably overcorrecting. This may result in the loss of colour in nebulas. Make it a habit to check the background model before you discard it.<br />
You can also choose to replace your image with the corrected version, or to create a new image. If you choose to create a new image, then that will not have any history. On the other hand, if you replace your original image, you keep its entire history. This can be handy.<br />
<h3>
How stars are handled in DBE</h3>
(This is the way I understand it works, which may be wrong)<br />
If you place a sample over a star, you will notice that the sample shows a hole (= black) at the star position, probably with a coloured band around this hole. This means that the pixels that represent the star have a weight of 0 and will not be considered in the background model. However, the coloured band can be a halo or chromatic aberration, and those pixels will be taken into account for the background model. To avoid this, it is always better not to place samples over stars. If you can't avoid it, then at least examine the sample carefully and try to place it such that its effect is minimized. Also note that since the star's pixels are excluded, the sample consists of fewer pixels, and each remaining pixel will have a larger contribution to the background model.<br />
<h3>
On the size and number of samples</h3>
The samples you create should represent true background. If your image has large patches of background, you can have larger samples. If on the other hand, your image has lots of nebulosity or lots of small stars, then the background will only truly be covered by small samples. Examine your image and set sample size accordingly.<br />
Should you use few or many samples?<br />
It seems that some people like to use a few large samples in an image, while others use many smaller ones.<br />
There is a danger that if you use many samples, some will cover nebulosity. When the correction is applied, this will destroy parts of the target.<br />
On the other hand, if you only place a few samples, these may not pick up the variation of the background properly.<br />
As usual, the number of samples that you should use must depend on the image.<br />
Theoretically, if you have a linear gradient in an image, creating just two samples would be enough to model the background. But any mistake in either of the samples will have a severe effect on the accuracy of the background model. If you use a larger number of samples, then each individual sample will have less effect on the background model. This generally results in a better model than using just a few samples.<br />
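A quick numerical illustration of this point, with made-up numbers: fit a straight-line gradient from 2 samples and from 20 samples, give each sample the same small error, and compare how far the fitted background typically ends up from the true one:<br />
<pre>
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)     # positions across the image
true_bg = 0.10 + 0.05 * x          # a perfectly linear gradient

def typical_model_error(n_samples, trials=2000, sample_error=0.005):
    worst = []
    for _ in range(trials):
        xs = np.linspace(0.0, 1.0, n_samples)
        vs = 0.10 + 0.05 * xs + rng.normal(0.0, sample_error, n_samples)
        slope, intercept = np.polyfit(xs, vs, 1)    # the "background model"
        worst.append(np.abs(intercept + slope * x - true_bg).max())
    return np.mean(worst)

print("2 samples :", typical_model_error(2))    # each error hits the model hard
print("20 samples:", typical_model_error(20))   # individual errors average out
</pre>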
I have had success with using a large number of samples (20 - 25 per row, or some 400+ samples) in my images. It does, however, take quite a while to place all these samples. Even if I automatically generate the samples, I still have to make sure that they don't cover stars or part of my target.<br />
One method that I have found helpful is to create a clone of the image that is then stretched. This allows me to see where samples can be placed, and where they should be avoided. I then place the samples on this clone, but do not apply the correction.<br />
After placing the samples, I create a process instance on the workspace and delete the open instance. I then apply the process to the unstretched original image.<br />
<h3>
What to look for after background extraction</h3>
As I already mentioned, I always keep the extracted background image. I examine this, and if I find that the background contains traces of nebulosity, I generally undo the extraction and change the samples in my image.<br />
I also examine the corrected image for artefacts. If samples are too close to a target or a star, there is a chance that DBE creates a dark region around this target or star. In this case too, I undo the operation and move or remove samples.<br />
I repeat this process until there are no dark patches left where they shouldn't be, and the background looks smooth while nebulosity has been preserved.<br />
It can take quite a while to get the extraction right, but it will make further processing easier if you spend more time on this step.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-53949789068261664232016-08-03T14:59:00.003-07:002016-08-03T15:09:31.198-07:00first experiences with INDI on Raspberry PiNow that I have invested in a proper mount, I'm also looking into remote (15 meters) operation of it.<br />
I don't want to drag my laptop out into the garden just to have it covered with dew, and I like the size of the Raspberry Pi. This, and the fact that PixInsight is moving in the direction of hardware control through the INDI protocol, made me decide to look into the INDI solution rather than EQMOD.<br />
So, last weekend I erased my Pi memory card and installed Ubuntu Mate. This OS was recommended on the INDI website (<a href="http://www.indilib.org/" rel="nofollow" target="_blank">indilib.org</a> ).<br />
Now, I have very little experience with Linux, and for most of the things I do, I need to follow a tutorial or google my way around. The following is probably not the best way to do it, but these are my experiences.<br />
<br />
Installing the OS wasn't much of a problem; download and extract the image. Then use Win32DiskImager to write the OS image onto the memory card.<br />
I started the OS and managed to connect to it with PuTTY, but in the beginning I mainly used the desktop and a terminal window on the desktop.<br />
Installing the INDI library took some time. For some reason I couldn't register or connect to the INDI repository (mutlaqja ppa), and the desktop on several occasions reported an internal error. Finally (don't ask me how) I managed to connect to the repository and install INDI. To get this far took quite a while, so I read the OS image back to Windows. I figured that if I ever need to go back and reinstall the OS, at least I won't need to do it from scratch.<br />
I managed to get the INDI server up and running, and decided to rename the USB port for permanent reference. Some googling gave the answer, followed by some more tapping away on my keyboard (by now I wasn't using the Mate desktop anymore, but was connected through PuTTY over WiFi).<br />
I then connected the mount through the SynScan's serial cable and a serial/USB interface.<br />
I managed to connect from PixInsight's INDI client, but the program crashed a few times. Again, don't ask me why. I have never been able to crash PixInsight, but during the past few days I managed it twice. (Mind you, I have managed to bring it to its knees by integrating some 200+ 14-Mpixel drizzled images. But that's a different story.)<br />
It seems that there isn't a "hello world" application that lets you test a partial setup. There isn't even a proper tutorial that covers a complete setup. It takes some googling and looking around the INDI website to get ideas and suggestions for solutions.<br />
Anyway, I also tried connecting through Stellarium, which didn't protest and connected to the server.<br />
Both the PI and Stellarium connections worked fine, as the server kept responding to slew requests. However, the mount didn't budge an arcsecond.<br />
After a long time installing, uninstalling and reinstalling various things and starting and stopping the server, rebooting the RPi, etc, etc, I finally called it a night, not having moved the mount remotely at all.<br />
I dismantled the RPi, cables, and the mount (I'm doing this more or less in the family living room), and just as I was about to disconnect the serial cable, I noticed that neither of its LEDs was lit or blinking.<br />
It appears that my serial/USB connector isn't working anymore. So now I'm waiting for the HITECH EQDIR Synscan/USB interface to arrive from Firstlightoptics.<br />
Since everything else worked fine, just plugging in the connector should make the remote setup work. Something tells me, though, that it will not work from the start, even with a new cable.<br />
<br />
The setup so far:<br />
RPi 2 with Ubuntu Mate, connected to PuTTY on Windows.<br />
sudo apt-add-repository ppa:mutlaqja/ppa (works after a few tries and reboots)<br />
sudo apt-get install indi-full<br />
sudo adduser $USER dialout (so I don't have to be root to use INDI)<br />
create a rules file to rename the mount's USB port, using udevadm<br />
indiserver -m 100 -v indi_eqmod_telescope<br />
several reboots along the way.<br />
<br />
To do next:<br />
Make sure that the new connector works (without the Synscan)<br />
Make sure that the setup works (mount connected to the RPi without the SynScan in between; indiserver controlled by Stellarium on a Windows machine)<br />
Make sure that indiserver starts up automatically after booting the RPi.<br />
Find and install a client that lets me control the mount and will replace the Synscan.<br />
<br />
To be continued, I guess.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-10627130501613971592016-07-27T12:07:00.000-07:002016-07-27T12:07:08.459-07:00PixInsight process iconsWhen you process an image, it's always handy to copy the processes you use to the workspace. You can do this by dragging the little triangle in the lower left corner onto the workspace. This will preserve the settings for that instance.<br />
The only problem with this is that all process icons get a name ProcessXX, where XX is a number.<br />
It is easy to change this name to something more sensible:<br />
Click on the small N on the right-hand side of the icon. This will open a dialog box where you can change the icon name. If you want to add a description, click on the small D. Use this, for example, to note which mask you used for the process.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmsDZOP_k8kdhPZiTwqLy1PYj9c8CrtXb6CWZc-w_KoI0-AaVMOHHXh6a8s3DKUcdyWysmGFLek7uyBVqm8T9QsAiIrC69Zi7BtRtNEPwzPKCJpPK5HmM37U9pK8RqqQGarlfcC1AuGUta/s1600/PI_icons.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmsDZOP_k8kdhPZiTwqLy1PYj9c8CrtXb6CWZc-w_KoI0-AaVMOHHXh6a8s3DKUcdyWysmGFLek7uyBVqm8T9QsAiIrC69Zi7BtRtNEPwzPKCJpPK5HmM37U9pK8RqqQGarlfcC1AuGUta/s1600/PI_icons.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">process and image icons</td></tr>
</tbody></table>
Image icons have no description, but you can change their name. Also note that process icons are full rectangles, while image icons have one corner missing.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-52757582588228568182016-07-21T16:13:00.001-07:002016-07-27T12:55:04.464-07:00The effect of ditheringSome time ago I wrote about how dithering can improve the quality of raw images.<br />
If you control your camera and mount from a computer, you can use software to apply small mount movements between exposures. Some programs use random movements of the RA and DEC axes to avoid patterns in your stacked images.<br />
Unfortunately, almost all camera control software is written for either Canon or Nikon cameras. Since I have an old Pentax camera, which has a quirky USB connector, I can't control it from my computer.<br />
I've written about my ditherbox earlier. Here's an example of how it works.<br />
This short video shows the effect of dithering. M45 was the target, and some 46 images were taken and registered. Before registration, the target falls on different parts of the sensor according to the dither pattern. After registration, the target is stationary, and it is the noise pattern that moves, mirroring the dither pattern. This is clearly seen in the video.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/EXStNx-gcqw" width="560"></iframe><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-40681100078518975992016-07-13T11:39:00.003-07:002016-07-27T12:09:16.574-07:00Noise reduction for DSLR astro images<br />
Astro images taken with a DSLR at a high ISO setting are noisy, and the best way to decrease the noise level is of course to take lots of images and stack these. But even then, some sort of noise reduction is necessary.<br />
Noise in DSLR images manifests itself as intensity noise and colour noise. Think of it this way: noise is a random variation in pixel values. A pixel value can vary either in intensity (more or less of the same colour) or in colour (the same intensity but a different colour). Both of these variations have to be addressed by a noise reduction process.<br />
Here I will show you my procedure for DSLR images.<br />
First I apply noise reduction to the luminance or lightness (colour intensity) of the image, and then a very aggressive noise reduction to the chrominance (colour variation).<br />
One of the most efficient luminance noise reduction methods in PixInsight is TGVdenoise. This method is especially good at reducing high-frequency (or small-scale) intensity noise, and is based on a diffusion algorithm. This means that it detects variations in pixel values and pushes these variations outwards, away from the pixel. As in any diffusion process, the longer you let it run, the stronger the spreading will be. In the case of TGVdenoise this means letting the process go through many iterations.<br />
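TGVdenoise itself only exists inside PixInsight, but plain total-variation denoising, a close relative built on the same diffusion idea, is available in scikit-image. The sketch below (with made-up data) only demonstrates the behaviour: the stronger the diffusion is allowed to work, the smoother the result.<br />
<pre>
import numpy as np
from skimage import restoration

# A fake noisy luminance channel in [0, 1], standing in for a linear image.
img = np.clip(np.random.default_rng(0).normal(0.1, 0.02, (256, 256)), 0.0, 1.0)

# 'weight' plays a role comparable to TGVdenoise's strength: a larger value
# lets the diffusion act more strongly and gives a smoother result.
gentle = restoration.denoise_tv_chambolle(img, weight=0.05)
strong = restoration.denoise_tv_chambolle(img, weight=0.20)
print(img.std(), gentle.std(), strong.std())   # the background scatter drops
</pre>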
<br />
One of the best ways to use TGVdenoise was devised by Philippe Bernard. A presentation in French can be found on his <a href="http://www.astroccd.eu/" rel="nofollow" target="_blank">website</a>: <a href="http://pixinsight.astroccd.eu/tgvdenoise.html" rel="nofollow" target="_blank">TGVDenoise</a>. A slightly updated version was presented on the PixInsight forum by member <a href="http://pixinsight.com/forum/index.php?topic=8942.msg57635#msg57635" rel="nofollow" target="_blank">Tromat</a>.<br />
(Always have an STF stretch applied to the image. This keeps the image in its linear state, but allows you to see on screen what the image looks like. Also, always test the settings on a small preview that contains both background and a weak signal you want to preserve.)<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii_E2CS3cI22j3loJBSPAHcwvKzUGLCYAanmAmiq-23cSqFnvswKgcaQfaNiEqkPeXquJP9adKi-jBhIZUtbU9SrY1U2CI_PZFe5T1besdtLLfy1gzuMtFVhC0NAxNCW8z5wfBNTrkEz_v/s1600/tgvdenoise.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii_E2CS3cI22j3loJBSPAHcwvKzUGLCYAanmAmiq-23cSqFnvswKgcaQfaNiEqkPeXquJP9adKi-jBhIZUtbU9SrY1U2CI_PZFe5T1besdtLLfy1gzuMtFVhC0NAxNCW8z5wfBNTrkEz_v/s640/tgvdenoise.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Before and after luminance noise reduction, using TGVdenoise</td></tr>
</tbody></table>
<br />
The second stage of noise reduction is to reduce colour noise, or chrominance noise. For this I use the MultiscaleMedianTransform.<br />
In the noise reduced image, you will probably still see colour variation in the background. MMT will take care of this noise.<br />
First you will need to create a mask that will protect the stars and target.<br />
For this, make sure no mask is applied to your image. Extract a luminance layer (CIE L*) from the image, using the channel extraction process. Apply a histogram stretch to this channel. Make the background as dark as possible, and the stars and target as bright as possible. Don't worry if pixels at either end are clipped. Then open the MultiscaleLinearTransform process and set the number of wavelet layers to one. Double-click on the first layer to turn it off, and apply the process to the luminance layer. This will blur the image. If you want more blurring, undo the process, set the number of wavelet layers to two, turn both layers off, and apply again.<br />
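The same recipe, stretch the luminance hard and then blur away the finest scales, can be approximated outside PI like this. This is a rough numpy/scipy stand-in: the clipping points and the blur radius are invented example values, and a Gaussian blur replaces the switched-off wavelet layers.<br />
<pre>
import numpy as np
from scipy import ndimage

def luminance_mask(lum, black=0.05, white=0.30, blur_sigma=2.0):
    """lum: 2-D float luminance (CIE L*-like), values in [0, 1].

    black/white: clipping points of the histogram stretch, pushing the
    background towards 0 and the stars/target towards 1 (clipping is fine).
    blur_sigma:  stands in for turning off the smallest wavelet layers.
    """
    stretched = np.clip((lum - black) / (white - black), 0.0, 1.0)
    return ndimage.gaussian_filter(stretched, sigma=blur_sigma)

# As in the text: invert before use, so the background is exposed for noise
# reduction while the stars and the target stay protected.
# mask = 1.0 - luminance_mask(lum)
</pre>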
<br />
Apply the luminance mask to the image and invert it. The target and stars are now masked, while the background is revealed for noise reduction.<br />
Open the MultiscaleMedianTransform process and choose 7 wavelet layers. Set the mode to Chrominance (Restore CIE Y).<br />
Enable noise reduction on only the first layer. Set strength to 5 and leave the other parameters as they are. Apply to a small preview.<br />
You should see a lot of the small-scale noise disappear, but there is still a lot of coarser noise left.<br />
Increase the strength parameter to 7 and apply to the preview. Better? If you still want more noise reduction, increase strength to 10 and apply.<br />
If you are satisfied, do the same for wavelet layer number 2.<br />
Generally, you will need the most noise reduction on the first layer (fine, single-pixel-scale detail), and less on higher-numbered layers. Just test one layer at a time, until you are satisfied.<br />
Then apply to the entire image.<br />
<br />
MMT takes a while the first time it runs, but you will notice that it is much faster after that. This is because the process needs to calculate the wavelet layers, and as long as you do not change the number of wavelet layers, it only does this once. The process is also independent of preview size. This means that once you've found the best settings, it is very fast on the whole image.<br />
Don't forget to remove the mask once you're done.<br />
Here's a before and after image.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiClJRut9pDXtPiYkBi68qEv4lbPLqIupsuhYV8sVbFrS2wO40JiH6ouefO4xtnNsQXf02O_FM60opFuYExUuc9zq1ljtyt4QYZ8aHZJnEh540b8XSHHEZHsJO6ryyBDkF9DLEln6V2zFuu/s1600/mmt.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiClJRut9pDXtPiYkBi68qEv4lbPLqIupsuhYV8sVbFrS2wO40JiH6ouefO4xtnNsQXf02O_FM60opFuYExUuc9zq1ljtyt4QYZ8aHZJnEh540b8XSHHEZHsJO6ryyBDkF9DLEln6V2zFuu/s640/mmt.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Before and after chrominance noise reduction, using MMT</td></tr>
</tbody></table>
<br />
Tip: don't delete masks, because that will break the links in the process history. Just minimise them and move them to one side.<br />
Note that in the example image, the streaks are created by residual hot pixels during the stacking process. <a href="http://wimvberlo.blogspot.se/2016/07/dithering-in-hardware_7.html" target="_blank">Dithering</a> will eliminate this effect.<br />
<br />
BTW, here's the final image.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIqlB2QssBYyaHCdRYZe4g_dNEizK1l-TrVQn3VZb_MYhxAeTXUSY9k-2VR9b_IGFmBnsNJfvfKqyPHAPmKOq1eKDX-cOEYhyphenhyphenHIBjVqKWlyaFpy8p-6J2yVb27To5LKa3UTufP4XipesGj/s1600/M31_160130c.jpg" imageanchor="1"><img border="0" height="409" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIqlB2QssBYyaHCdRYZe4g_dNEizK1l-TrVQn3VZb_MYhxAeTXUSY9k-2VR9b_IGFmBnsNJfvfKqyPHAPmKOq1eKDX-cOEYhyphenhyphenHIBjVqKWlyaFpy8p-6J2yVb27To5LKa3UTufP4XipesGj/s640/M31_160130c.jpg" width="640" /></a>Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-41355397615050681052016-07-07T04:43:00.001-07:002016-07-13T11:37:35.652-07:00Dithering in hardwareA common source of noise in astro images taken with a DSLR is hot pixels that were not removed in the calibration process.<br />
When light frames are registered and integrated, and the tracking wasn't spot on, these hot pixels end up as streaks or "rain" in the final image.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqrg_IT6RJbeKqaWWrNzd5b-lMDuZmhdw7GrWua6alTlnBmaER6ofx94p-rej8QZ5m9xi0nD7ubjBp7mUK3s6SGHcPpnrU0hp-ogXoxzAKWdVpyaiii0kHmFaBEgacQfLA63mfPcQIjMIQ/s1600/streaks.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqrg_IT6RJbeKqaWWrNzd5b-lMDuZmhdw7GrWua6alTlnBmaER6ofx94p-rej8QZ5m9xi0nD7ubjBp7mUK3s6SGHcPpnrU0hp-ogXoxzAKWdVpyaiii0kHmFaBEgacQfLA63mfPcQIjMIQ/s400/streaks.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Extreme crop of an unprocessed (but stretched) integrated image</td></tr>
</tbody></table>
Normally, a dark master frame is supposed to suppress hot pixels in the light frames before registration and integration. For a non-cooled DSLR, it is very difficult to match the master dark to the light frames. Therefore, faulty pixels remain after the calibration process.<br />
There are various processing tools that can be used on the raw image frames to remove hot pixels. These tools rely on filters that remove outlying intensity values from either the individual frames or from the stack of images that are to be integrated. By careful use of these tools, most of the noise can be reduced. The noise that remains in the final master image can be further reduced during post-processing.<br />
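A typical example of such a rejection filter is sigma clipping across the stack: at every pixel position, values that sit too far from the stack's mean (a hot pixel in a few frames) are thrown out before the frames are averaged. A bare-bones numpy sketch of the idea; real integration tools are considerably more refined:<br />
<pre>
import numpy as np

def sigma_clip_integrate(stack, kappa=3.0):
    """stack: (n_frames, H, W) array of calibrated, registered frames."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    outlier = np.abs(stack - mean) > kappa * std   # per-pixel rejection
    keep = ~outlier
    # average only the surviving values at each pixel position
    return (stack * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# master = sigma_clip_integrate(np.stack(list_of_frames))
</pre>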
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCf6obUD14-VHWTUD7Yb4dq6Rm8ol3lXDuNpX7ikdoLSJdWQSlS98DDQBAE94S2cSMApXDs4I7H-MHB5NAHO_Kf3eBiAzq9ATKAbhGAMVLnNozQe54qINVmbXLr68D1wNge4X4nQSFnSvG/s1600/streaks_processed.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="352" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCf6obUD14-VHWTUD7Yb4dq6Rm8ol3lXDuNpX7ikdoLSJdWQSlS98DDQBAE94S2cSMApXDs4I7H-MHB5NAHO_Kf3eBiAzq9ATKAbhGAMVLnNozQe54qINVmbXLr68D1wNge4X4nQSFnSvG/s400/streaks_processed.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Crop of the same area after processing</td></tr>
</tbody></table>
While the hot pixels cannot be removed during data collection, the pattern they form after integration can be altered. The streaks in the first example were caused by tracking issues. If tracking had been spot on (e.g. through guiding), the hot pixels would not have formed a pattern, but would be visible as bright points in the final integrated image.<br />
If the camera is moved a few pixels in a random way, no streaks will form and the noise will not be visible. This technique is called dithering and was suggested by astrophotographer <a href="https://www.youtube.com/watch?v=PZoCJBLAYEs" rel="nofollow" target="_blank">Tony Hallas</a>.<br />
Cameras are usually controlled by software running on a laptop, and many of these have the option to apply dithering.<br />
However, most software is written for either Canon or Nikon cameras, and may not work with other brands. I use a Pentax K20D for all my astro work, and this camera has issues when trying to connect to a computer, so software controlled dithering is not possible for me.<br />
The only way for me to use dithering was to sit next to the camera and manually move the camera in RA or DEC between exposures. Doing this during winter, trying to get 50+ exposures, was not my idea of a fun time.<br />
The solution to this problem was to build my own hardware device for camera control: a ditherbox.<br />
<br />
The device intercepts the trigger signal from the intervalometer and sends it on to the camera. In between exposures, it also tells the mount to move in either RA or DEC. It does nothing else. I have to figure out at what speed to move the mount (and program this into the handcontroller), and for how long. The box only sends a "move" signal for the whole time between exposures. This time is set in the intervalometer and is determined by the number of pixels to be moved, the pixel size, and the focal length of the lens or scope.<br />
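The arithmetic behind that time setting is simple plate-scale maths: one pixel corresponds to 206.265 * pixel size (in micrometres) / focal length (in mm) arcseconds, and the mount moves at some fraction of the sidereal rate (roughly 15 arcseconds per second). A small Python sketch with example numbers of my own choosing; note that it ignores the cos(declination) factor for moves in RA.<br />
<pre>
SIDEREAL = 15.04   # arcsec per second of time, approximately

def dither_seconds(pixels, pixel_um, focal_mm, guide_rate=0.5):
    """How long the 'move' signal must last to shift the image by 'pixels'."""
    scale = 206.265 * pixel_um / focal_mm        # plate scale, arcsec per pixel
    return pixels * scale / (guide_rate * SIDEREAL)

# Example: a 10-pixel dither with 6 um pixels at 300 mm focal length,
# with the handcontroller set to move at 0.5x sidereal:
print(round(dither_seconds(10, 6.0, 300.0), 1), "s")   # about 5.5 s
</pre>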
The heart of the ditherbox is a small microcontroller from Atmel, the ATtiny84. All inputs and outputs are isolated through optocouplers, and it receives its power from the SynScan handcontroller, so there are no extra batteries or power cables involved.<br />
More information on the device can be found on <a href="https://stargazerslounge.com/topic/266554-ditherbox/" rel="nofollow" target="_blank">Stargazers Lounge</a>. The software can be found on my <a href="https://github.com/wberlo/AutoDither" rel="nofollow" target="_blank">Github site</a>.<br />
And here is an example of the benefits of dithering.<br />
(NB: this image, while showing the same area of the sky, was taken with another focal length and under different conditions.)<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj19vvo3UxLuu29llbfvm8JlyDSS1c0VVerxpMEhi4Y8BcopB69y1PChNRL16S0KVWpXTWq0nsJfNxkUdeJLCnRfb_AuuWA6VEIVbXHgqWCsIuyQdDH9m6UVC2UHj-3DD4cTlvGYl_b0GQv/s1600/dithered.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="331" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj19vvo3UxLuu29llbfvm8JlyDSS1c0VVerxpMEhi4Y8BcopB69y1PChNRL16S0KVWpXTWq0nsJfNxkUdeJLCnRfb_AuuWA6VEIVbXHgqWCsIuyQdDH9m6UVC2UHj-3DD4cTlvGYl_b0GQv/s400/dithered.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Same area in the sky, imaged using dithering;<br />
unprocessed (stretched) integrated image</td></tr>
</tbody></table>
Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0tag:blogger.com,1999:blog-6105588857106355557.post-61225990170604775062013-05-06T11:52:00.000-07:002013-05-06T11:52:30.595-07:00Dual motordrive for ArduinoRecently I started tinkering with the Arduino microcontroller. I plan to use the controller for a simple robot. Since I have some simple DC motors lying around, I designed a motor control circuit using an L293D H-bridge.
The designs I found on the internet usually take two logic output ports from the Arduino, plus one PWM port, for each motor. As I will eventually want to free my Arduino for other projects, the controller will be replaced by a single-chip MCU, most likely an ATtiny. Hence the need to use as few output ports as possible for motor control. With the design I came up with, two DC motors can be controlled using only two digital IO ports and two PWM ports. This means that a single ATtinyX5 could, in principle, control the robot. This would leave one port free for sensor input. Most likely I will end up using an ATtinyX4, which has more IO ports.
Anyway I'd like to share my design.<br />
For circuit design I used <a href="http://tinycad.sourceforge.net/">TinyCAD</a>. I imported the net-file into <a href="http://veecad.com/index.html">VeeCAD</a> for board layout.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2MYFWOwkMiYxqziHBQIg-_Zqv0I3woTtUkkRuY6Vl5pEdrRrUa3MWqYMJ73vfKwhBAgoG6r1lhq05FE0UBydSh9bw9m0tdaqSnmysp5HcolSiCcLC5uVX3KDkOfp6qUYyaqNlGWP8mQwe/s1600/robotdrive.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2MYFWOwkMiYxqziHBQIg-_Zqv0I3woTtUkkRuY6Vl5pEdrRrUa3MWqYMJ73vfKwhBAgoG6r1lhq05FE0UBydSh9bw9m0tdaqSnmysp5HcolSiCcLC5uVX3KDkOfp6qUYyaqNlGWP8mQwe/s320/robotdrive.png" width="320" /></a></div>
<br />
This is how the controller works.<br />
Motor 1 is connected to JM1. This motor is controlled by a PWM signal on pin 1 of J1 plus a logic signal on pin 2. A HIGH signal on pin 2 will drive the motor in one direction, and a LOW signal will drive it in the opposite direction. The PWM signal determines the speed of the motor (including full stop). Motor 2 is connected to JM2 and controlled by pins 3 (PWM) and 4 (direction) of J1. Pins 6, 7 and 8 of J1 are Vcc, ground and motor voltage.<br />
I used the 74LS04 chip in such a way that it can be replaced by other inverting gates in the same series, e.g. a 74LS00 (quad NAND gates) with both inputs of each gate connected together so that it works as an inverter.<br />
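To make the switching logic concrete, here is the truth table of one motor channel expressed as a tiny Python function. This is only a logic illustration, not the robot's firmware, and it assumes the wiring described above: the direction pin drives one bridge input directly and the other through the inverter, while the PWM signal drives the enable pin.<br />
<pre>
def l293d_outputs(direction, pwm_on):
    """One L293D channel: direction pin to input A, inverted copy to input B."""
    if not pwm_on:                 # enable low: outputs float, the motor coasts
        return ("Z", "Z")
    in_a = direction               # straight from the MCU pin
    in_b = not direction           # through the 74LS04 inverter
    return ("H" if in_a else "L", "H" if in_b else "L")

for d in (False, True):
    for p in (False, True):
        print("dir =", int(d), "pwm =", int(p), "->", l293d_outputs(d, p))
</pre>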
As I couldn't find a circuit symbol for the H-bridge, I ended up setting up a new library with the part in TinyCAD. This library is included with the design files.Wimhttp://www.blogger.com/profile/17094719461162793219noreply@blogger.com0