When the skies are bright, plan for darkness

Gotten a bit quiet here, hasn’t it? Well, here in the UK, it’s wonderfully sunny and bright. We don’t get proper darkness, and the planets are in an awful position, so imaging deep-space objects is a bit of a non-starter, or at least challenging. We’ve also had a run of crap weather, just to drive the point home.

I’ve been using the time instead to plan out my next astro-related project (though I may well push the execution out to 2020, just to make sure I have the cash to get it done right) – a fully automated roll-off roof observatory. The logic behind this is simple – the next “improvements” I could make to my imaging system are:

  1. Upgrade the camera – already got a pretty good camera, so this means something quite high-end (>£3-4k), and would just get me more sensitivity/bigger pixels/larger field of view
  2. Upgrade the telescope – already got a decent Newtonian so a meaningful upgrade means either a high-end Newtonian/R-C astrograph (£3-4k) or a decent large-aperture apochromatic refractor (£4-5k)
  3. Add a second telescope – to do planetary imaging I could add a SCT or long-focal-length scope of some other sort, but the planets will be too low for the next couple of years for serious imaging, and it’d still be £2-3k of investment
  4. Update my telescope’s other parts (focuser, focus controller) and invest in tools (collimation, etc) – more reasonable investment (£1-2k) but just gets me slightly better images – this is my favourite option if I don’t do the observatory this year
  5. Build an automated observatory – easily doubles the number of images I can capture with my existing kit, thus acting as a massive force multiplier for my previous investments – but £4-5k at least!

So the biggest “bang for buck” is definitely the observatory, but only if it is fully automated. I’ve lost track of the number of nights where the sky was beautiful and clear, the clouds nowhere to be seen, ground and ambient temperatures low enough to make the seeing incredibly steady – and I’ve been packing away the telescope at midnight because I had work the next morning, despite the further 7 or 8 hours of imaging I could have had. And then there are all the “well, it might be good enough, but…” nights – nights where the forecast says it won’t be good enough, but you might get lucky; these usually involve going out repeatedly to stare at the sky, setting up if I feel optimistic, and more often than not being disappointed – but sometimes pleasantly surprised.

With a fully automated and remotely driven set-up the setup time is nil, as is the tear-down time. With the scope permanently mounted, camera and all, there’s much more scope (no pun intended) for tweaking and tuning in advance of an imaging night, and for fine tuning on cold-but-cloudy nights – something that just isn’t possible when you’re stripping the whole thing down each time. Being able to work in the dry and in daylight has a lot of appeal.

System-wise, full automation is pretty simple – you need a box with relays to drive motors and read sensors, a proper cloud/rain sensor (hard-wired to the relay box, so if any computers fail there’s a pretty dumb box responsible for shutting the roof when it rains), and a system capable of automating the selection of targets (what’s good tonight?), the acquisition of images (frame the target, autofocus, guide and image), and the observatory start-up/shut-down. I’m most of the way there – I still need the relay box and an auto-focuser. The rest is already in place – I’ve been using INDI/Ekos/KStars for a while, which can handle all of this. The main INDI instance for the observatory will run on a 1U server in the observatory, with an INDI server on a Raspberry Pi 4 strapped to the telescope doing the actual image acquisition and telescope equipment control. This keeps the pier-to-desk cables simple – 12V for power, USB for the mount, and an Ethernet cable for the rest, with just 12V and Ethernet onto the telescope itself.
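As a rough illustration of how that chaining can be set up – the driver and device names here are examples rather than my actual configuration, and the exact list will depend on the kit – INDI lets one server pull in another server’s devices as remote drivers:

```
# On the Raspberry Pi strapped to the scope: serve the imaging camera
# and filter wheel locally
indiserver -v indi_asi_ccd indi_asi_wheel

# On the 1U server in the warm room: run the mount driver locally and
# chain the Pi's camera in as a remote driver (device@host[:port] syntax)
indiserver -v indi_eqmod_telescope "ZWO CCD ASI183MM Pro"@scope-pi.local:7624
```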

Making a plan

So, the objectives of this build are:

  • Full automation – but at a minimum, a roll-off roof which can open and close under all circumstances for safety – so I can program the observatory to image opportunistically
  • Imaging-stable pier, with room to expand – just the one pier, but room to set up a second non-isolated pier for a small solar/planetary telescope (isolation is less critical for these applications)
  • “Warm” room with enough room for a server rack, desk, chair and a little storage – somewhere I can sit while setting up
  • Good visibility down to ~30 degrees everywhere
  • Strong enough to resist opportunistic forced entry and 100mph wind when closed

Beyond this – it’s basically a shed! So I’ve started by getting a bunch of books on shed design and construction and reading them. My day job at the moment is (mostly) telling people how to properly build a fibre optic network, so I know a reasonable amount about concrete, aggregates, rebar, admixtures and slab design. Making a good solid observatory is mostly about mass, just like in acoustic isolation design, and I’ll be using almost an entire ready-mix concrete truck worth of C40 low-moisture concrete to pour the base slab and the (isolated) pier. The framing and design of walls, floors and doors is all fairly simple, though benefits from careful planning to make sure all the services will work and the structure remains rot-and-rat free for a few decades.

Some basic renders of the general layout – working floor-up. Note the duct from pier to warm room to allow for cables to reach the telescope safely

The tricky bit is the roll-off roof – I need to keep this building rodent-proof and ideally near-airtight to help with humidity and temperature control. I will use forced, filtered airflow for cooling, with positive pressure maintained to minimise dust ingress. Active cooling with the roof shut will help cool-down times and avoid any kit getting too hot in summer. This means the roof needs to seal well onto the frame when shut. I also need to be able to shut the roof at any time – so any internal rafters need to be minimal or non-existent, so the telescope doesn’t have to be “pointed down” to let the roof pass. That way, if the mount fails or loses track of its position, the roof can still shut safely to keep the rain out. The roof needs to roll back far enough to give good visibility, so the whole thing has to roll onto rails that extend beyond the back of the warm room. To further improve visibility and keep rain off the rails, some of the side walls will be mounted on the roof so the walls “lower” as the roof rolls off. There’s a lot of complexity in this (and it has to be something I can build), so it’s taking some time to work out.

I’ve started designing in detail in Autodesk Fusion 360 – while I’ve used Sketchup for this sort of thing in the past, Fusion 360 in Direct Modelling (non-parametric) mode is about as user-friendly as Sketchup and can produce much prettier outputs, as well as decent engineering drawings.

An early rendering of the pier and shuttering for the initial concrete pour
An early drawing with some detail/section views to show the base layout and design – the deep, chunky base should help isolate the pier from surface vibrations/movement, and the deep, heavy pier root should do the rest by sheer mass

I’ve also reconstructed my current telescope and mount with photogrammetry so I can build a digital model and check the motion all works – I haven’t gotten around to tidying up the mesh into some simpler models, but it’s a great reference for getting the dimensions and motion right.

Location, location, location

The other question is where to put this – I dithered quite a bit and in the end took a lot of levelled photos around the garden at twilight with a Ricoh Theta S 360-degree camera, at roughly my telescope’s aperture height. With the moon visible in each shot, and knowing where and when each was taken, a fairly simple Python script could align them all to north and spit out a nice set of data for horizon plotting.
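As a rough sketch of how that alignment works – assuming astropy is available, that the panoramas are equirectangular with azimuth increasing left to right, and with the site, time and pixel values below entirely made up – the core of it is just “where was the moon, and which column is it in?”:

```python
from datetime import datetime, timezone

import astropy.units as u
import numpy as np
from astropy.coordinates import AltAz, EarthLocation, get_moon
from astropy.time import Time

# Hypothetical site and capture time for one Theta S panorama
site = EarthLocation(lat=51.5 * u.deg, lon=-0.1 * u.deg, height=50 * u.m)
shot_time = Time(datetime(2019, 5, 20, 21, 30, tzinfo=timezone.utc))

# Where was the moon at that moment, as seen from the site?
moon = get_moon(shot_time, site).transform_to(
    AltAz(obstime=shot_time, location=site))
moon_az_deg = moon.az.deg

# The Theta S produces a 5376-pixel-wide equirectangular image, so each
# column spans 360/5376 degrees of azimuth. If the moon sits at column
# `moon_col` (found by eye or simple blob detection), the column that
# corresponds to due north is:
width = 5376
moon_col = 3100   # placeholder measurement
north_col = int(round(moon_col - (moon_az_deg / 360.0) * width)) % width

# Rolling every panorama so north lands at column 0 puts them all in the
# same azimuth frame before the horizon line is traced out, e.g.
# panorama = np.roll(panorama, -north_col, axis=1)
```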

Plotting horizons straight out of images. Probably should release the code for this…

It turns out there are only a few spots that don’t give me visibility down to 30 degrees in pretty much every direction, so I decided to plug the panorama for my favoured location into Stellarium – this just involves a panorama with a transparent sky and a small .ini file to set north properly.
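The landscape definition itself is tiny – something along these lines (name, rotation and coordinates invented), dropped into a folder under Stellarium’s landscapes directory alongside the transparent-sky PNG:

```ini
[landscape]
name = Back Field Observatory
type = spherical
maptex = observatory_pano.png
; rotate the panorama so it lines up with true north
angle_rotatez = 37

[location]
planet = Earth
latitude = +51d30'00"
longitude = -0d10'00"
altitude = 55
```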

My observatory’s home, Stellarium-ready
… and loaded into Stellarium, so I can see how things will look – spending all the time with Photoshop’s background eraser to get the trees properly semitransparent makes a big impact on the visuals of this (though in summer they’re somewhat more opaque!)

The chosen location makes power and network connectivity simple enough – with 25 metres of mains cable and single-mode fibre I can connect to proper mains and Ethernet, only one switch hop away from my storage arrays.

Security is a concern – that field is adjacent to a footpath, though set back from the road, and there have been break-ins in the area. Other than making the building fairly secure against “opportunistic” crooks – a reinforced door, no windows, and a solid lock – there’s not a lot that can be done. External PIR sensors won’t work due to the abundant wildlife, so a combination of internal sensors and an alarm to make a racket if someone does force the door or climb in through the open roof will have to do. CCTV around the perimeter might serve as much as an attractant as a deterrent, and wildlife would again make reliable alerting impossible. I’m also planning on using a worm-geared or lead-screw-based roof mechanism, which should be very hard to force open.

Making plans

I took the view early on in this that I wanted to build this myself. I’m still not 100% sure about this, but I think it’s a reasonable project and something I should be able to do! I am budgeting for some help, though, and will have to hire kit in regardless – a mini digger for the groundwork, compactor to pack down aggregates, concrete vibrator to settle concrete in the forms, etc.

I also need planning permission. I started with a footprint that wouldn’t normally need it, so long as the building isn’t tall – but I’m in a conservation area, which means “permitted development” doesn’t really apply. I’m not concerned about getting permission – it’s a small building in an otherwise empty field (except for a shed we’re going to remove) and will blend in just fine. Having to go through planning anyway also means I can relax some of the size limits I’d otherwise have had to design around.

Working through the costs there’s easily £2k, maybe £3k of materials – labour would be another £1-2k on top of that, if not more. That’s quite an investment, and I’m really keen to make sure that everything about this is right – handing control of the build to a third party feels risky. It may be that once the design is done I sit down with some local builders I trust and see what they say.

The first step remains the plan and design, which is taking time – but I think time invested here is time well spent. I may not start until later in the year, or even early next year – one more winter without it wouldn’t be the end of the world. It’s going to be a fun project if I can get the plan right!

MORE DOMES

Fans of domes will be wondering why I haven’t just dropped £3k on a nice big Pulsar/insert-vendor-here dome. The answer is simple:

  • It’s not £3k, it’s £7k by the time you’ve automated it
  • It’s impossible to insulate the roof nicely – you end up slapping neoprene sheets up with glue just to stop condensation build-up raining on your scope
  • They’re relatively small and uncomfortable to work in unless you get big ones which are even more money
  • They only allow for a single telescope
  • They’re definitely harder to get through conservation area planning permission committees

I’ve looked at a few other dome designs and while there’s some good contenders they all have similar problems. I did consider making a “clever” geodesic dome – something I could build pretty cheaply but which would still have decent wind resistance – but automation remains the problem. Ground-level domes (where the whole structure rotates, rather than using a rotating section on a cylinder) make the construction simpler, but the bearing and rotation mechanism have to cope with increased gravity load and all of the wind loading. Cylinder-style observatories have similar problems.

The round/dodecahedral designs of these structures also make literally everything harder. Want to bolt a light to a wall? It’s not flat, so if you want it level/flat you now get to make a bracket… weatherproofing, insulation, and more all get more complicated. Having four flat walls which never move makes life simple – mounting insulation, cable entry glands, coolers, dehumidifiers, fans/filters, lights, shelves, etc is all so much simpler.

So – no dome here for now.

And another thing…

While we’re building a light-shielded box in a quiet location with power and networking, what else could we do? I’m also going to include infrastructure to support a small ground-level dish and motors for radioastronomy, as well as some mounts for meteor spotting cameras, an all-sky camera, and a weather station. I won’t have all this on day one, but putting a little extra concrete in now is way easier than doing it again later, and it means I can put in cable ducts to make wiring it up simpler. The cost of the pads, etc is tiny and turns those future projects from a pain into something much simpler.

Adventures in Differential Flexure

How’s that for a thrilling title? But this topic really does encapsulate a lot of what I love about astrophotography, despite the substantial annoyance it’s caused me lately…

Long exposure of M51 in Hydrogen Alpha – 900s

My quest for really nice photos of galaxies has, inevitably, driven me towards narrowband imaging, which can help bring out detail in galaxies and minimise light pollution. I bought a hydrogen alpha filter not long ago – a filter that blocks all light except a narrow, deep-red band around the hydrogen-alpha emission line. The unfortunate side effect is a big reduction in the total amount of light hitting the sensor, meaning long exposures are required to drive the signal far above the noise floor. In the single frame above, the huge glow from the right is amplifier glow – an issue with the camera that grows worse the longer the exposure. Typically this gets removed by taking dozens of dark frames with the cap on and subtracting the fixed amplifier glow from each frame, a process called calibration. The end result is fairly clean – but what about those unfortunate stars?
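The calibration step itself is conceptually simple – a minimal sketch of the idea (file names invented; in practice the stacking software handles this for me, along with flats and bias frames):

```python
import numpy as np
from astropy.io import fits

# Build a master dark from a stack of dark frames taken with the cap on,
# at the same exposure, gain and temperature as the light frames.
darks = np.stack([fits.getdata(f"dark_{i:02d}.fits").astype(np.float32)
                  for i in range(20)])
master_dark = np.median(darks, axis=0)

# Subtracting it removes the fixed amp glow and hot pixels from a light frame.
light = fits.getdata("m51_ha_900s.fits").astype(np.float32)
fits.writeto("m51_ha_900s_cal.fits", light - master_dark, overwrite=True)
```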

Oblong stars are a problem – they show that the telescope failed to track the target accurately for the entire exposure. Each pixel in this image (and you can see individual pixels here, in the hot pixels that appear as noise in the close-up) equates to 0.5″ of sky (0.5 arc-seconds). My seeing limit (the amount of wobble introduced by the atmosphere) is two to four times that even on a really good night, meaning I’m over-sampling nicely (Nyquist says we should sample at twice the frequency of the finest detail we want to resolve). My stars are oblong by a huge amount – 6-8″, if not more!
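That 0.5″ figure is just the standard pixel-scale formula – as a quick sanity check, using the ASI183MM-PRO’s 2.4µm pixel pitch (from the sensor spec sheet) and the 200PDS’s 1000mm focal length:

```python
# Pixel scale (arcsec/pixel) = 206.265 * pixel size (um) / focal length (mm)
pixel_size_um = 2.4       # ASI183MM-PRO (Sony IMX183) pixel pitch
focal_length_mm = 1000    # Skywatcher 200PDS
print(206.265 * pixel_size_um / focal_length_mm)   # ~0.50 arcsec/pixel
```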

My guide system – the PHD2 software package, an ASI120MC camera and a 60mm guidescope – reported no worse than 0.5″ tracking all night, meaning I should’ve seen perfectly round stars. So what went wrong?

The most likely culprit is a slightly loose screw on my guidescope’s guiding rings, which I found after being pointed at a thing called “differential flexure” by a fantastic chap on the Stargazers Lounge forums (more on that later). But this is merely a rather extreme example of a real problem, and a nice insight into the tolerances and precision required of astronomical telescopes for high-resolution imaging. As I’m sampling at 0.5″ per pixel, but practically won’t get better seeing than 1-2″, my guiding needs to be fairly good. The mount, with excellent guiding, is mechanically capable of 0.6-0.7″ accuracy; this is actually really good, especially for a fairly low-cost mount (<£1,200). You can easily pay upwards of £10,000 for a mount and not get much better performance.

Without guiding, though, it’s not terribly capable – mechanical tolerances aren’t perfect in a cheap mount, and periodic error from the rotation of the worm gears creeps in. You can program the mount to correct for this, but it won’t be perfect. So we have to guide the mount. While the imaging camera takes long, 5-10 minute exposures, the guide camera takes short 3-5 second exposures and feeds software (in my case, PHD2) which tracks a star’s centre over time, using changes in that centre to generate correction pulses which are sent to the mount’s control software (in my case, INDI and the EQMod driver). This gets us down to the required stability over time.
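As a toy sketch of that feedback loop – hugely simplified next to what PHD2 actually does (no calibration of the camera angle, no dead-banding, no per-axis algorithms), and with all names invented:

```python
import numpy as np

def centroid(star_roi):
    """Intensity-weighted centroid (x, y) of a small cut-out around the guide star."""
    ys, xs = np.indices(star_roi.shape)
    total = star_roi.sum()
    return (xs * star_roi).sum() / total, (ys * star_roi).sum() / total

def guide_step(star_roi, ref_xy, arcsec_per_px, aggressiveness=0.7):
    """One guiding iteration: measure the drift and return a correction in arcsec."""
    x, y = centroid(star_roi)
    err_x = (x - ref_xy[0]) * arcsec_per_px
    err_y = (y - ref_xy[1]) * arcsec_per_px
    # Scale the correction down so we don't chase the seeing; the real
    # software converts this into timed pulses on the RA/Dec axes.
    return -aggressiveness * err_x, -aggressiveness * err_y
```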

My Primaluce Lab 60mm guidescope and ASI120MC guide camera on the “bench”, in PLL 80mm guidescope rings on ADM dovetails

The reason my long exposures sucked, despite all this, is simple – my guide camera wasn’t always moving in the same way as the imaging camera. That is to say, when the mount moved a little, or failed to move, the imaging camera was affected but the guide camera was not. This is called differential flexure – a difference in movement between two optical systems. Fundamentally, this is because my guidescope is a completely separate optical system to my main telescope – if it doesn’t move when my main scope does, the guiding system doesn’t know to correct! The inverse applies, too – the guidescope may move on its own, triggering a correction for an imaging train that hasn’t moved at all.

With a refractor telescope, if you just secure your guidescope really well to the main telescope, all is (generally) well – that’s the only practical source of error, outside of focuser wobble. In a Newtonian such as the one I use, though, there are plenty of other sources. At the bottom of a Newtonian is a large primary mirror – 200mm across, in my case – supported by a mirror cell. Pinching the mirror distorts its figure (by dozens or hundreds of nanometres, which is unacceptable), so just clamping it up isn’t practical. That means that as the telescope moves, the mirror can shift a little – not much, but enough to move the image slightly on the sensor. Guiding the mount isn’t an ideal way to compensate for this – a better mirror cell is the real fix – but it’s better than doing nothing at all. The secondary mirror has similar problems. The tube itself, being quite large, can also expand or contract – carbon fibre tubes minimise this but are expensive. Refractors, by contrast, have all their lenses held securely in place and so broadly don’t suffer these problems.

And so the answer seems to be “off-axis guiding”. Rather than using a separate guidescope, a small prism inserted in the optical train (after the focuser but before the camera) “taps” a bit of the light off – usually the sensor is a rectangle in a circular light path, so this is easy to achieve without affecting the light the sensor receives. That light is bounced into a camera mounted at 90 degrees to the optical train, which performs the guiding. There are issues with this approach – you have a narrower (and hard to move) field of view, and you need a more sensitive guide camera to find stars – but the resolution is naturally far finer (0.7″ per pixel rather than 2.5″) thanks to the longer focal length, so the potential accuracy of guide corrections improves. More importantly, your guiding light shares its fate with the imaging light – it uses the same mirrors, tube, and so on. If your imaging light shifts, so does the guiding light, optically entwined.

The off-axis guiding route is appealing, but complex. I’ll undoubtedly explore it – I want to improve my guide camera regardless, and the OAG prism is “only” £110 or thereabouts. The guide camera bears the brunt of the cost – a quality high-sensitivity one weighs in at around £500-700.

But in the immediate future my budget doesn’t allow for either of these solutions, so I’ve done what I can to minimise the flexure of the guidescope relative to the main telescope. This has focused on the screws that hold the guidescope in place – both they and the threads in the guidescope rings are really poorly machined, and their plastic tips can flex under load.

Before and after – plastic-tipped screws

I’ve cut the tips back almost to the metal to minimise movement under compression, and used Loctite to secure two of the three screws in each ring. The coarse focus tube and helical focuser on the Primaluce guidescope also have some grub screws which I’ve adjusted – this has helped considerably in reducing how much the camera can move.

Hopefully that’ll help for now! I’m also going to ask a friend with access to CNC machines about machining some more solid tube rings for the guidescope; that would radically improve things, and shouldn’t cost much. Practically, though, off-axis guiding is the favourite for a Newtonian setup – so that’s where I’ll end up in the long run.

Despite all this I managed a pretty good stab at M51, the Whirlpool Galaxy. I wasn’t suffering from differential flexure so much on these exposures – probably because the scope was pointing somewhere different and so didn’t hit the same issue. I had two nights of really good seeing, and captured a few hours of light. The image highlights the benefits of the Newtonian setup – a 1000mm focal length at a fast focal ratio, paired with my high-resolution camera, captures great detail in a short period of time.

M51, imaged over two nights at the end of March
Detail, showing some slightly overzealous deconvolution of stars and some interesting features

Alongside my telescope debugging, I’m working on developing my observatory plans into a detailed, budgeted design – more on that later. I’ve also been tinkering with some CCDInspector-inspired Python scripts to analyse star sharpness across a large number of images and, in doing so, highlight any potential issues with the optical train or telescope in terms of flatness, tilt, and so on. So far this tinkering hasn’t led anywhere interesting, which either suggests my setup is near perfect (which I’m sure it isn’t) or that I’m missing something – more tinkering to be done!
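The heart of those scripts is nothing exotic – roughly this, assuming the sep library for star detection (paths and grid size are arbitrary):

```python
import glob

import numpy as np
import sep                   # SExtractor as a Python library
from astropy.io import fits

def sharpness_map(path, grid=4):
    """Median star size (semi-major axis, pixels) per grid cell of one frame."""
    data = fits.getdata(path).astype(np.float32)
    bkg = sep.Background(data)
    stars = sep.extract(data - bkg.back(), 5.0, err=bkg.globalrms)
    h, w = data.shape
    cells = np.full((grid, grid), np.nan)
    for gy in range(grid):
        for gx in range(grid):
            in_cell = ((stars["x"] // (w / grid) == gx) &
                       (stars["y"] // (h / grid) == gy))
            if in_cell.any():
                cells[gy, gx] = np.median(stars["a"][in_cell])
    return cells

# Averaging the per-frame maps: tilt or a sagging focuser shows up as one
# corner of the field being consistently softer than the rest.
maps = [sharpness_map(f) for f in sorted(glob.glob("luminance/*.fits"))]
print(np.round(np.nanmean(maps, axis=0), 2))
```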

Map of sharpness across 50 or so luminance frames, showing a broadly even distribution and no systematic sharpness deviation

How to fail at astrophotography

This is part 1 of what I hope will become a series of posts. In this post I’m going to focus on getting started, and some of the mistakes I made along the way.

So, back in 2017 I got a telescope. I fancied trying some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so I get “class 4” skies – meaning I can happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on moonless nights; the Milky Way is a clear band, and plenty of naked-eye targets look splendid.

So I did some research, and concluded that:

  • Astrophotography has the potential to be done cheaply but some bits do demand some investment
  • Wide-field is cheapest to do, since a telescope isn’t needed; planetary is way cheaper than deep-sky (depending on the planet) to kit out for, but to get really good planetary images is hard
  • Good telescopes are seriously expensive, but pretty good telescopes are accessibly cheap, and produce pretty good results
  • Newtonians (Dobsonians, for visual) give the absolute best aperture-to-cash return
  • Having a good mount that can track accurately is absolutely key
  • You can spend a hell of a lot of cash on this hobby if you’re not careful, and spending too little is the fastest path there…

So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.

Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem in the amateur astrophotography scene. Measuring precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror – measuring the accuracy of its surface figure and curvature requires an interferometer, and even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and a lot of optics know-how. Measuring a mount’s movement accurately requires high-resolution encoders or some other way of measuring motion very precisely – again, expensive. The net result is that it’s very rare for individual amateurs to do quantitative evaluation of equipment – usually you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.

As an engineer who tinkers with fibre optics on a regular basis, I find spec sheets great when everyone agrees on the test methodology behind the numbers. There’s a defined standard for how you measure the insertion loss of a bare fibre, another for mode field diameter, and so on. In astrophotography, a whole host of measurements are done in a very ad-hoc fashion and vary between products and vendors. Sometimes the best analysis and comparison is being done by enthusiasts who get kit sent to them by vendors to review! And so most purchasing decisions involve an awful lot of lurking on forums.

The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.

Jupiter; the great red spot is just about visible. If you really squint you can see a few pixels that are, I swear, moons.

This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.

Working out what questions to ask

The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of our nearest galaxy and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. A long focal length, a tiny sensor, and other factors conspired to give me a thin sliver of the sky on my computer screen.

M31 Andromeda; repaired a bit in PixInsight from my original, still kinda terrible

Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky. Bigger than the moon, by quite a bit. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency, longer than I thought I would get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…

Raw stack from an evening of longer-exposure imaging of NGC891; the noise is the sensor error. I hadn’t quite cracked image processing at this point.

It was fairly clear the camera would need an upgrade for deep space object imaging – the avenue of astrophotography that most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazers Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.

My next break came from a family friend; when my father was visiting to catch up, the topic of cameras came up. My dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use, despite knackered batteries. This friend happened to have an old D1x, and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and F-mount adapter, I suddenly had a high resolution camera I could attach to the scope, albeit with a nearly 20-year-old sensor.

M31/M110 Andromeda, wider field shot, Nikon D1x – first light, processed with DeepSkyStacker and StarTools

Suddenly, with a bigger sensor, a wider field of view and more pixels (nearly six megapixels), I felt I could see what I was doing – and promptly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gain to get anything, and fixed pattern noise made processing immensely challenging.

M33 Triangulum, D1x, processed with DeepSkyStacker and PixInsight

I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as being the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype. And the hype of its learning curve/cliff – it’s one of the few software packages for which I have purchased a reference book!

Stepping on up to mono

And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything else was working great. My Christmas present from family was a guidescope, in which I reused the ASI120MC camera, and really long exposures were starting to be feasible. And so I took a bit of money I’d saved up and bit the hefty bullet of buying a proper astrophotography camera for deep space imaging.

By this point I had a bit of a clue, and an idea of how to work out what I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.

The scope and guidescope, preparing for an evening of imaging on a rare weekend clear night
M33 Triangulum; first light with the ASI183MM-PRO. A weird light leak artefact can be seen clearly in the middle of the image, near the top of the frame

Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix, meaning each pixel sits behind a red, green or blue filter and interpolation fills in the missing colours for every pixel. For astrophotography you don’t always want to image in red, green or blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in capture and processing. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.

The current finished imaging train – Baader MPCC coma corrector, Baader VariLock T2 spacer, ZWO mini filter wheel, ASI183MM-PRO

But suddenly my images are clear enough to show up the problems in the telescope. There’s optical coma in my system – no surprise for a fast Newtonian – so a coma corrector is added to clean up the off-axis light reaching the filters and sensor.

I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.

I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.

It turns out accurate focus is really quite difficult to achieve with my new setup, and may need adjusting between filters, so I buy a cheap DC focus motor. The focuser comes to bits; I spend an evening improving the tolerances on all the contact surfaces and amending the bracket supplied with the motor, then put it back together.

To mitigate light bouncing around the focuser I dismantle the whole telescope tube, flock the interior with anti-reflective material, and add a dew shield. Amongst all this, new DC power cables and connectors get made up, an ever-growing pile of USB cables and hubs runs to and from the scope, a new (commercial) software package is added to control it all, and various other little expenses creep in along the way – bottles of high-purity distilled water to clean the mirrors, and so on.

Once you’ve got some better software in place for automating capture sessions, being able to automatically drive everything becomes more and more attractive. I had fortunately bought most of the bits to do this in dribs and drabs in the last year, so this was mostly a matter of setup and configuration.

It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).

… okay, maybe I’d still like to get a proper flat field generator, but the “t-shirt at dusk” method works pretty well and only cost £10 for a white t-shirt

Settling in to new digs

Now, of course, I have a set of parts that has brought my output quality up significantly. The images I’m capturing are good enough that I’m happy sharing them widely, and I even feel proud of some. I’ve also gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.

Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m taking an artistic decision underpinned by science, engineering and maths – it affects what pictures I can take, what they’ll look like, and so on.

M33 Triangulum, showing clearly now the various small nebulas and colourful objects around the main galaxy. The first image I was genuinely gleeful to produce and share as widely as I could.
The Heart Nebula, not quite centred up; the detail in the nebulosity, even with this wideband image, is helped tremendously by the pixel oversampling I achieve with my setup (0.5 arcseconds per pixel)

But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.

So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.