Tuesday 16 August 2016

Vibration damping the EQ3 aluminium tripod

The EQ3 mount with the aluminium tripod is generally considered not to be suitable for astrophotography. Still, it's a nice, portable mount that, under the right circumstances, can produce relatively good images.
There is an article on cloudynights.com that describes how the tripod can be beefed up. The author of that article increases the weight of the tripod by putting rebar in the upper legs, and a rectangular wooden dowel in the lower legs.
The problem with the tripod is not just one of weight, but one of vibration.
Filling the hollow legs with dowels and rebar doesn't necessarily improve the vibration characteristics of this mount. A person commenting on the cloudynights article suggested that the legs can also be filled with sand. This results in both a heavier tripod and different vibration characteristics.
I decided to modify my tripod by inserting wooden dowels in the upper and lower legs. But I also secured these dowels to the plastic and aluminium structure. Hopefully, this will improve the vibration damping of the tripod, without it becoming too heavy.
Wooden dowel cut to size, ready to be inserted in the lower leg
Starting with the lower parts of the legs, I removed all the plastic parts and inserted oak dowels into the aluminium tubes. I noticed that the plastic feet of the tripod are hollow and extend a bit up into the legs. By making the dowels somewhat thinner and drilling a hole where the hole in the plastic is, I could fasten each wooden dowel to the plastic foot, the aluminium leg, and even the top lid of the leg.
Wooden dowel will be secured to the plastic foot and the aluminium leg
Top part of the lower leg
I then inserted two round beech dowels (12 mm diameter) into the upper parts of the legs, making sure there was a tight fit at either end. Unfortunately, it's not possible to fasten these dowels other than through the tight fit and the small screws that hold the leg spreader in place.
One half of an upper leg

Dowels inside the upper leg
It doesn't take long to get all three legs done.
All three legs completed. Time for reassembly
Finally, with the tripod reassembled, it looks just as before.
The tripod now weighs 3.6 kg, not much more than before, but it feels steadier.
For a short while I also considered filling the tripod with sand, but I found out that the tripod legs are not sealed at the lower ends, so most of the sand would run out after a while. Filling the tripod with sand would also make it much heavier. Hopefully, the wooden dowels will improve the damping.
Now all that remains is a clear night to test the tripod.

Saturday 13 August 2016

First experience with INDI on Raspberry Pi - part 2

Last week, when I tried to control my mount through INDI on a Raspberry Pi, I managed to install the server and connect from my laptop to the INDI server on the RPi. However, the mount didn't respond. It turned out that the USB serial cable didn't work anymore.
Yesterday I received an EQDIR cable from FLO and connected it to the mount. After some adjustment of the parameters in Linux and the INDI client, it all worked perfectly.
Now I can control my mount from PixInsight or any client that speaks the INDI protocol.
The next step will be to install and test servers.
A short recap of the installation so far:
  1. Install an Ubuntu Mate image on an SD card for the RPi
  2. Connect the RPi to the home WiFi network and set it up so I can connect with PuTTY
  3. Add the INDI repository, then download and install the INDI server
  4. Add $USER to the dialout group
  5. Create a permanent USB entry for the connector
  6. Start the server
  7. Start the client and connect to the server
  8. Configure the site and the mount in the client
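For reference, steps 3 to 6 above correspond roughly to the commands listed at the end of part 1 of this post (the driver name assumes an EQMod-compatible mount):

```shell
# Step 3: add the INDI repository and install the INDI server and drivers
sudo apt-add-repository ppa:mutlaqja/ppa
sudo apt-get update
sudo apt-get install indi-full

# Step 4: allow the current user to access serial ports without root
sudo adduser $USER dialout

# Step 6: start the INDI server with the EQMod telescope driver
indiserver -m 100 -v indi_eqmod_telescope
```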
So far PixInsight can connect to the mount and send goto commands. With the search capability, I can just search for say, M27 and the mount will slew to it.
Of course, this assumes that the mount is aligned, and so far PixInsight can't do a 2-star alignment.
I just hope that this will be implemented soon.
For the time being, my intended workflow is as follows.
  1. Haul out the mount and set up
  2. Level mount
  3. Start mount with SynScan
  4. Do a polar and a 3-star alignment
  5. Park the mount and power off
  6. Disconnect the SynScan
  7. Connect the RPi and boot
  8. Connect the client
Further testing is delayed by clouds :-(

Wednesday 3 August 2016

Note on Dynamic Background Extraction

Astroimages almost always have a background gradient that needs to be removed. Gradients have two basic causes: either they are due to limitations of the optical system (vignetting), or to uneven illumination of the night sky. Most of us live and photograph in light-polluted environments, and our astroimages incorporate stray light from street lamps or city lights. Even when photographing from a dark site, there is the inevitable sky glow. Whatever the cause of an uneven background, it is seldom something we want in our images.
PixInsight has two processes for gradient removal: Automatic Background Extraction (ABE) and Dynamic Background Extraction (DBE). These two processes work slightly differently from each other, so it is good to know them both. ABE is an automatic process that does most of the work for you, especially the more laborious part of placing samples in the image. DBE, on the other hand, allows for more user control.
In this article, I intend to give my experience of the DBE process, and how I use the various settings in the DBE control window.
In short, what you do with DBE is take samples of the background in your image and create a model of the image background based on those samples. (Note that I assume you are working with an RGB colour image.)

Dynamic Background Extraction
When you open the DBE process (Process | BackgroundModelization | DynamicBackgroundExtraction), you start by connecting it to an image, the target, in your workspace. This is done either by clicking in the image you wish to connect to, or by clicking the reset icon at the bottom right of the control window (the four arrows pointing inwards). The latter option will also reset all settings in the control window. The active image is now linked to the process and shows the symmetry lines that can be used by DBE. More on the symmetry lines in a moment.

Target View

Each time you click in the target window, a new sample will be created at that position. In the target view you will see how individual pixel values will be used in the creation of the background model. Each sample has a position (anchor x, y) and a size (radius). The square field in the target view panel shows how each pixel is used in the model. This field should ideally consist of only bright pixels. If a pixel has a colour, then that pixel will only be used in the calculation of the model for that colour. The three values Wr, Wg and Wb are the weights in red, green and blue for the combined pixels in the sample. They determine how much this sample will contribute to the background model. In this view you can also determine whether symmetries are to be used. If you have an image which you know has a symmetrical background (vignetting, for example), then you can create samples in one place where the background is visible, and use those samples in other parts of the image, even if the background there is not visible. When you click one of the boxes (H for horizontal, V for vertical, D for diametrical), a line will show where the sample will be used. Note that you can control the symmetry for each individual sample. Use with care.

Model Parameters

In this panel you set how strict your model is going to be. The most important value is Tolerance. Increase this if you find that too many samples are rejected. The default is 0.5, but expect to use values up to 2.5 regularly, and in extreme cases even higher than 5 to 7. But try to keep this value as low as possible. Once you have created all your samples and are satisfied with where you placed them, you can decrease this value somewhat and recalculate the samples, until samples start being rejected. Choose the lowest value you can get away with, as this will result in a better approximation of the true background.
The smoothing factor determines how smooth your model is going to be. If you set it to 0.0, the background model will follow your samples very strictly. Increase this value to get a smoother background model if you see artefacts in the model.

Sample Generation

DBE Sample Generation
DBE lets you create your own samples, which is great if you have an image with lots of stars or nebulosity, but it can also create samples for you.
The first parameter sets the size of the samples. The samples will be squares with "sample size" pixels on either side. Use the largest samples that will not cover any stars. Obviously, if you have an image of the Milky Way, you will need to keep this value small, or you won't be able to position samples without covering stars.
Number of samples determines how many samples will be created across the image. It is generally best to use more samples. If you use too few samples, your background model may not represent your true background. Even if you have a linear background, you can model it with many samples. On the other hand, if you have a more complicated background, you can't model it with, say, three samples.
Minimum sample weight is only important if you let the process create samples. If you know that you have a strong gradient in the background, you should decrease its value to maybe 0.5 in order to create more samples. This parameter is used together with Tolerance to create samples in areas with a stronger gradient.

Model Image

This is where you set how your background model will be represented as an image. This is probably the least important panel, so I won't comment on it further.

Target Image Correction

DBE Target Correction
This is probably the most important panel, as it is here you determine which type of gradient you want to remove. There are three options for gradient removal: none, which you would use to test settings without applying the process to your image; subtraction, which is used to remove gradients from light pollution or sky light; and division, which is used to remove gradients caused by the optical system.
Examine your image and determine the most likely cause of the gradients. If you find that you have gradients due to both vignetting and light pollution, you may have to apply the DBE process twice, but in many cases once is enough. If you need to apply DBE twice, it seems most logical to get rid of vignetting first, since it has affected all light entering your imaging setup. You would then first apply division as your correction method, and secondly apply subtraction with a new DBE process.
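In formula form, and as I understand the two modes (the exact normalization PixInsight applies may differ), with I the image and B the background model:

```latex
% Subtraction: additive gradients (light pollution, sky glow)
I_{\mathrm{corr}}(x,y) = I(x,y) - B(x,y)

% Division: multiplicative gradients (vignetting); rescaling by the
% mean of the model, \bar{B}, keeps the overall brightness unchanged
I_{\mathrm{corr}}(x,y) = \frac{I(x,y)}{B(x,y)}\,\bar{B}
```

This also explains the ordering above: vignetting acts multiplicatively on all light entering the system, so it should be divided out before an additive light-pollution gradient is subtracted.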
You can choose to view your background model or to discard it. I always leave this option unchecked, since I want to examine my model. This is handy in case you want to refine your samples and settings. If the model looks complicated and blotchy, with several colours, then you are probably overcorrecting. This may result in the loss of colour in nebulae. Make it a habit to check the background model before you discard it.
You can also choose to replace your image with the corrected version, or to create a new image. If you choose to create a new image, then that will not have any history. On the other hand, if you replace your original image, you keep its entire history. This can be handy.

How stars are handled in DBE

(This is the way I understand it works, which may be wrong)
If you place a sample over a star, you will notice that the sample will show a hole (= black) at the star position, probably with a coloured band around this hole. This means that the pixels that represent the star have a weight of 0, and will not be considered in the background model. However, the coloured band can be a halo or chromatic aberration, and those pixels will be taken into account for the background model. To avoid this, it is always better not to place samples over stars. If you can't avoid it, then at least examine the sample carefully and try to place it such that its effect is minimized. Also note that since the star's pixels are not taken into account, the sample consists of fewer pixels, and each remaining pixel will have a larger contribution to the background model.

On the size and number of samples

The samples you create should represent true background. If your image has large patches of background, you can have larger samples. If on the other hand, your image has lots of nebulosity or lots of small stars, then the background will only truly be covered by small samples. Examine your image and set sample size accordingly.
Should you use few or many samples?
It seems that some people like to use a few large samples in an image, while others use many smaller samples.
There is a danger that if you use many samples, some will cover nebulosity. When the correction is applied, this will destroy parts of the target.
On the other hand, if you only place a few samples, these may not pick up the variation of the background properly.
As usual, the number of samples that you should use must depend on the image.
Theoretically, if you have a linear gradient in an image, creating just a handful of samples would be enough to model the background. But any mistake in one of those few samples will have a severe effect on the accuracy of the background model. If you use a larger number of samples, each individual sample will have less effect on the background model. This generally results in a better model than using just a few samples.
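The effect of sample count can be made concrete with a small least-squares sketch (DBE's actual interpolation is more sophisticated than a plane fit, so take this only as intuition). Model a linear background as a plane fitted to N samples s_i taken at positions (x_i, y_i):

```latex
B(x,y) = a + b\,x + c\,y, \qquad
(\hat a,\hat b,\hat c) \;=\; \arg\min_{a,b,c}\;
\sum_{i=1}^{N}\bigl(s_i - B(x_i,y_i)\bigr)^2
```

If each sample carries an independent error of standard deviation σ, then for well-spread samples the variance of the fitted coefficients scales as σ²/N, so the model error shrinks roughly as 1/√N: every extra sample dilutes the influence of any single bad one.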
I have had success using a large number of samples (20 to 25 per row, or some 400+ samples) in my images. It does, however, take quite a while to place all these samples. Even if I generate the samples automatically, I still have to make sure that they don't cover stars or part of my target.
One method that I have found helpful is to create a clone of the image and stretch it. This allows me to see where samples can be placed, and where they should be avoided. I then place the samples on this clone, but do not apply the correction.
After placing the samples, I create a process instance on the workspace and delete the open instance. I then apply the process to the unstretched original image.

What to look for after background extraction

As I already mentioned, I always keep the extracted background image. I examine this, and if I find that the background contains traces from nebulosity, I generally undo the extraction and change the samples in my image.
I also examine the corrected image for artefacts. If samples are too close to a target or a star, there is a chance that DBE creates a dark region around that target or star. In that case too, I undo the operation and move or remove samples.
I repeat this process until there are no dark patches left where they shouldn't be, and the background looks smooth while nebulosity has been preserved.
It can take quite a while to get the extraction right, but spending time on this step will make further processing easier.

First experiences with INDI on Raspberry Pi

Now that I have invested in a proper mount, I'm also looking into remote (15 meters) operation of it.
I don't want to drag my laptop out into the garden just to have it covered with dew, and I like the size of Raspberry Pi. This, and the fact that PixInsight is moving into the direction of hardware control through the INDI protocol, made me decide to look into the INDI solution, rather than EQMOD.
So, last weekend I erased my Pi memory card and installed Ubuntu Mate. This OS is recommended on the INDI website (indilib.org).
Now, I have very little experience with Linux, and for most of the things I do, I need to follow a tutorial or google my way around. The following is probably not the best way to do it, but these are my experiences.

Installing the OS wasn't much of a problem: download and extract the image, then use Win32DiskImager to write the OS image onto the memory card.
I started the OS and managed to connect to it with PuTTY, but in the beginning I mainly used the desktop and a terminal window there.
Installing the INDI library took some time. For some reason I couldn't register or connect to the INDI repository (the mutlaqja PPA), and the desktop on several occasions reported an internal error. Finally (don't ask me how) I managed to connect to the repository and install INDI. Getting this far took quite a while, so I read the OS image back to Windows. I figured that if I ever need to go back and reinstall the OS, at least I won't need to do it from scratch.
I managed to get the INDI server up and running, and decided to rename the USB port for permanent reference. Some googling gave the answer, along with some more tapping away on my keyboard (by now I wasn't using the Mate desktop anymore, but was connected through PuTTY over WiFi).
I then connected the mount through the SynScan's serial cable and a serial/USB interface.
I managed to connect from PixInsight's INDI client, but the program crashed a few times. Again, don't ask me why. I had never been able to crash PixInsight before, but during the past few days I managed it twice. (Mind you, I have managed to bring it to its knees by integrating some 200+ drizzled 14-megapixel images. But that's a different story.)
It seems that there isn't a "hello world" application that lets you test a partial setup. There isn't even a proper tutorial that covers a complete setup. It takes some googling and looking around the INDI website to get ideas and suggestions for solutions.
Anyway, I also tried connecting through Stellarium, which didn't protest and connected to the server.
Both the PixInsight and Stellarium connections worked fine, and the server kept responding to slew requests. However, the mount didn't budge an arcsecond.
After a long time installing, uninstalling and reinstalling various things and starting and stopping the server, rebooting the RPi, etc, etc, I finally called it a night, not having moved the mount remotely at all.
I dismantled the RPi, cables, and the mount (I'm doing this more or less in the family living room), and just as I was about to disconnect the serial cable, I noticed that neither of its LEDs was lighting up or blinking.
It appears that my serial/USB connector isn't working anymore. So now I'm waiting for the HITECH EQDIR SynScan/USB interface to arrive from First Light Optics.
Since everything else worked fine, just plugging in the connector should make the remote setup work. Something tells me though, that it will not work from the start, even with a new cable.

The setup so far:
RPi 2 with Ubuntu Mate, connected to PuTTY on Windows.
sudo apt-add-repository ppa:mutlaqja/ppa (works after a few tries and reboots)
sudo apt-get install indi-full
sudo adduser $USER dialout (so I don't have to be root user to use indi)
create a rules file with udevadm to rename the mount's USB port
indiserver -m 100 -v indi_eqmod_telescope
several reboots along the way.
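The rules file mentioned above can be sketched like this. The vendor/product IDs below are placeholders to be replaced with the values reported for your own adapter, and the file and symlink names are just examples:

```shell
# Find the vendor/product IDs of the serial adapter (assuming it is ttyUSB0)
udevadm info -a -n /dev/ttyUSB0 | grep -E 'idVendor|idProduct'

# Write a rule that creates a stable symlink, e.g. /dev/mount
sudo tee /etc/udev/rules.d/99-mount.rules <<'EOF'
SUBSYSTEM=="tty", ATTRS{idVendor}=="XXXX", ATTRS{idProduct}=="YYYY", SYMLINK+="mount"
EOF

# Reload the rules; the symlink appears the next time the adapter is plugged in
sudo udevadm control --reload-rules
```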

To do next:
Make sure that the new connector works (without the Synscan)
Make sure that the setup works (mount connected to the RPi without the SynScan in between; indiserver controlled by Stellarium on a Windows machine)
Make sure that indiserver starts up automatically after booting the RPi.
Find and install a client that lets me control the mount and will replace the Synscan.

To be continued, I guess.