How to fail at astrophotography

This is part 1 of what I hope will become a series of posts. In this post I’m going to focus on getting started and some of the mistakes I made along the way.

So, back in 2017 I got a telescope. I fancied trying some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so I get Bortle “class 4” skies – meaning that I could happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on moonless nights; the Milky Way is a clear band, and plenty of naked-eye targets look splendid.

So I did some research, and concluded that:

  • Astrophotography has the potential to be done cheaply, but some bits do demand investment
  • Wide-field is the cheapest to do, since a telescope isn’t needed; planetary is far cheaper than deep-sky to kit out for (depending on the planet), but getting really good planetary images is hard
  • Good telescopes are seriously expensive, but pretty good telescopes are accessibly cheap, and produce pretty good results
  • Newtonians (Dobsonians, for visual) give the absolute best aperture-to-cash return
  • Having a good mount that can track accurately is absolutely key
  • You can spend a hell of a lot of cash on this hobby if you’re not careful, and spending too little is the fastest path there…

So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.

Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem in the amateur astrophotography scene across all sorts of equipment. Measuring precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror – measuring the accuracy of its surface figure requires an interferometer, and even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and a lot of optics know-how. Measuring a mount’s movement accurately requires very precise encoders or some other way to measure motion – again, expensive bits. The net result is that it’s very rare for individual amateurs to do quantitative evaluation of equipment – usually, you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.

As an engineer who tinkers with fibre optics on a regular basis, I find spec sheets are great when everyone agrees on the test methodology behind the numbers. There’s a defined standard for how you measure the insertion loss of a bare fibre, another for the mode field diameter, and so on. In astrophotography, a whole host of measurements are done in an ad-hoc fashion that varies between products and vendors. Sometimes the best analysis and comparison is done by enthusiasts who get kit sent to them by vendors to compare! And so most purchasing decisions involve an awful lot of lurking on forums.

The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.

Jupiter; the great red spot is just about visible. If you really squint you can see a few pixels that are, I swear, moons.

This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.

Working out what questions to ask

The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of our nearest major galaxy and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. A long focal length, a small sensor, and other factors conspired to give me a tiny sliver of the sky on my computer screen.

M31 Andromeda; repaired a bit in PixInsight from my original, still kinda terrible

Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky: around three degrees across, several times the apparent width of the full moon. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency; longer than I thought I could get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…
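To put numbers on it: the field of view is set by sensor size and focal length, and the pixel scale by pixel pitch and focal length. Here’s a quick back-of-envelope in Python – the ASI120MC sensor dimensions and pixel pitch are quoted from memory of the spec sheet, so treat them as illustrative rather than gospel:

```python
import math

FOCAL_LENGTH_MM = 1000.0                 # Skywatcher 200PDS
SENSOR_W_MM, SENSOR_H_MM = 4.8, 3.6      # ~1/3" sensor (assumed from spec sheet)
PIXEL_UM = 3.75                          # pixel pitch in microns (assumed)

def fov_degrees(sensor_mm: float, focal_mm: float) -> float:
    """Angular field of view along one sensor axis."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Sky coverage per pixel: 206.265 * pixel pitch (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

print(f"FOV: {fov_degrees(SENSOR_W_MM, FOCAL_LENGTH_MM):.2f} x "
      f"{fov_degrees(SENSOR_H_MM, FOCAL_LENGTH_MM):.2f} degrees")
print(f"Scale: {pixel_scale_arcsec(PIXEL_UM, FOCAL_LENGTH_MM):.2f} arcsec/px")
# => roughly 0.28 x 0.21 degrees -- and Andromeda alone spans about 3 degrees.
```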

Raw stack from an evening of longer-exposure imaging of NGC 891; the heavy noise is sensor error. I hadn’t quite cracked image processing at this point.

It was fairly clear the camera would need an upgrade for deep space object imaging, and that particular avenue of astrophotography most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazer’s Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.

My next break came from a family friend; when my father was visiting to catch up, the topic of cameras came up. My dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use, despite knackered batteries. This friend happened to have an old D1x, and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and F-mount adapter, I suddenly had a high resolution camera I could attach to the scope, albeit with a nearly 20-year-old sensor.

M31/M110 Andromeda, wider field shot, Nikon D1x – first light, processed with DeepSkyStacker and StarTools

Suddenly, with a bigger sensor, a wider field of view, and more pixels (nearly six megapixels), I felt I could see what I was doing – and promptly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gain to get anything, and fixed-pattern noise made processing immensely challenging.

M33 Triangulum, D1x, processed with DeepSkyStacker and PixInsight

I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as being the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype. And the hype of its learning curve/cliff – it’s one of the few software packages for which I have purchased a reference book!
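For the curious, the core idea behind stacking is simple, even if the real tools are not. Here’s a minimal sketch in Python – this is not what DeepSkyStacker or PixInsight actually implement (they also register and weight frames, and use far cleverer rejection), just the essence: calibrate each frame with darks and flats, then average with outlier rejection:

```python
import numpy as np

def calibrate(light: np.ndarray, master_dark: np.ndarray,
              master_flat: np.ndarray) -> np.ndarray:
    """Remove fixed sensor offsets (dark) and uneven illumination (flat)."""
    flat_norm = master_flat / master_flat.mean()
    return (light - master_dark) / flat_norm

def sigma_clipped_stack(frames: list, sigma: float = 3.0) -> np.ndarray:
    """Per-pixel mean across frames, ignoring outliers (satellites, cosmic rays)."""
    cube = np.stack(frames).astype(np.float64)   # shape (n_frames, h, w)
    mean, std = cube.mean(axis=0), cube.std(axis=0)
    keep = np.abs(cube - mean) <= sigma * std
    return np.nanmean(np.where(keep, cube, np.nan), axis=0)

# Usage: calibrate each sub-exposure, then stack the lot.
# lights, master_dark and master_flat come from your capture session:
# stacked = sigma_clipped_stack([calibrate(f, master_dark, master_flat)
#                                for f in lights])
```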

Stepping on up to mono

And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything else was working great. My Christmas present from the family was a guidescope, in which I reused the ASI120MC as a guide camera, and really long exposures were starting to be feasible. And so I took a bit of money I’d saved up and bit the hefty bullet of buying a proper astrophotography camera for deep-space imaging.

By this point I had a bit of a clue, and an idea of how to figure out what I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.

The scope and guidescope, preparing for an evening of imaging on a rare weekend clear night
M33 Triangulum; first light with the ASI183MM-PRO. A weird light leak artefact can be seen clearly in the middle of the image, near the top of the frame

Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix: each pixel sits behind a red, green, or blue filter, and the missing colours at each pixel are filled in by interpolating from its neighbours. For astrophotography you don’t always want to image in red, green, and blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in capture and processing. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.
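A toy illustration of the difference in numpy – assuming an RGGB Bayer layout (patterns vary by sensor): on a one-shot-colour sensor only a fraction of the pixels genuinely sample each colour, whereas a mono sensor samples every pixel through whatever filter happens to be in front of it.

```python
import numpy as np

def bayer_planes_rggb(raw: np.ndarray):
    """Pull out the genuinely-sampled colour values from an RGGB mosaic."""
    r = raw[0::2, 0::2]                         # 1 in 4 pixels is red
    g1, g2 = raw[0::2, 1::2], raw[1::2, 0::2]   # 2 in 4 are green
    b = raw[1::2, 1::2]                         # 1 in 4 is blue
    return r, (g1, g2), b

raw = np.random.randint(0, 4096, size=(8, 8))   # stand-in for a raw sensor frame
r, g, b = bayer_planes_rggb(raw)
print(r.shape, b.shape)   # (4, 4) each -- every other value must be interpolated
```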

The current finished imaging train – Baader MPCC coma corrector, Baader VariLock T2 spacer, ZWO mini filter wheel, ASI183MM-PRO

But suddenly my images are clear enough to show up problems in the telescope itself. There’s optical coma in my system – no surprise for an f/5 Newtonian; a coma corrector is added to flatten the field of light reaching the filters and sensor.

I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.
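The arithmetic itself is trivial once you know the numbers; getting the numbers is the hard part. A sketch of the sum – the 55mm working distance is the figure commonly quoted for the MPCC, and the other distances below are purely hypothetical stand-ins for measuring your own imaging train:

```python
# Back-of-envelope backfocus sum. Only the structure here is general;
# every number below is an assumption -- measure your own train!

CORRECTOR_BACKFOCUS_MM = 55.0   # corrector flange -> sensor (commonly quoted spec)
FILTER_WHEEL_MM        = 20.0   # path length through the wheel (hypothetical)
CAMERA_FLANGE_MM       = 17.5   # sensor depth behind camera flange (hypothetical)
FILTER_THICKNESS_MM    = 2.0    # glass pushes focus back by ~1/3 of its thickness

spacer_mm = (CORRECTOR_BACKFOCUS_MM
             + FILTER_THICKNESS_MM / 3   # compensate for the filter glass
             - FILTER_WHEEL_MM
             - CAMERA_FLANGE_MM)
print(f"Set the variable spacer to ~{spacer_mm:.1f}mm")   # ~18.2mm with these numbers
```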

I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.

It turns out focusing is really quite difficult to achieve accurately with my new setup, and may need adjusting between filters, so I buy a cheap DC focus motor. The focuser comes to bits; I spend an evening improving the tolerances on all the contact surfaces and amending the bracket supplied with the motor, then put it back together.

To mitigate light bouncing around the focuser I dismantle the whole telescope tube, flock the interior with anti-reflective material, and add a dew shield. Amongst all this, new DC power cables and connectors are made up, an increasing pile of USB cables and hubs to and from the scope accumulates, a new (commercial) software package is added to control it all, and various other little expenses creep in along the way – bottles of high-purity distilled water to clean mirrors, and so on.

Once you’ve got some better software in place for automating capture sessions, being able to automatically drive everything becomes more and more attractive. I had fortunately bought most of the bits to do this in dribs and drabs in the last year, so this was mostly a matter of setup and configuration.

It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).

… okay, maybe I’d still like to get a proper flat field generator, but the “t-shirt at dusk” method works pretty well and only cost £10 for a white t-shirt

Settling in to new digs

Now, of course, I have a set of parts that has brought my output quality up significantly. The images I’m capturing are good enough that I’m happy sharing them widely, and I even feel proud of some. I’ve even gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.

Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m making an artistic decision underpinned by science, engineering, and maths – it affects what pictures I can take, what they’ll look like, and so on.

M33 Triangulum, showing clearly now the various small nebulas and colourful objects around the main galaxy. The first image I was genuinely gleeful to produce and share as widely as I could.
The Heart Nebula, not quite centred up; the detail in the nebulosity, even in this wideband image, is helped tremendously by the oversampling my setup achieves – 2.4µm pixels at 1000mm focal length work out to roughly 0.5 arcseconds per pixel

But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.

So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.

A New Chapter

It’s been almost three years since I last wrote a real long-form blog post (past documentation of LiDAR data aside). Given that, particularly for the last two years, long-form writing has been the bulk of my day job, it’s with a wry smile I wander back to this forlorn medium. How dated it feels, in the age of Twitter and instant 140/280-character gratification! And yet such a reflection of my own mental state, in many ways.

I’ve been working at Gigaclear for about as long – three years – as my absence from blogging; this is no coincidence. My work at BBC R&D was conducted in a sufficiently calm atmosphere to permit me the occasional hobby, and the mental energy to engage with it on fair terms. I spent large chunks of that time writing imageboard software; that particular project I consider a success – not only has it been taken on by others, technically and organisationally, it now hosts almost 2 million images and 10 million comments, and has around a quarter of a million users. Not too bad for something I hacked together on long coach journeys and in my evenings. I tinkered with drones on the side, building a few and writing software to control them.

At Gigaclear – still a startup, at heart – success and survival have demanded my full attention; this is in part a function of working for an organisation that has, in the span of three years, grown its staff by over 150%, its live customer base by 400%, and its built network by 600%. We’ve cycled senior leadership teams almost annually and recently gone through an investor buyout. It is not a calm organisation, and I am lucky (or unlucky, depending on your view) enough to have been close enough to the pointy end of things to feel some of the brunt of it. It has been an incredible few years, but not an easy few years.

I am a workaholic, and presented with an endless stream of work, I find it difficult to step away. The drones have sat idle and gathered dust; my electronics workbench sits in constant disarray, PCBs scattered. Even for my personal projects, I’ve written barely any code; the largest project I’ve managed lately has been a system to manage a greenhouse heater and temperature sensors (named Boothby), amounting to a few hundred lines of C and Python. My evenings have involved scrawled design diagrams and organisational charts, endless PowerPoint drafts and revisions, hundreds of pages of documentation, too much alcohol, curry, and stress. Given that part of my motivation for moving from R&D to Gigaclear was health (six hours a day commuting into London was fairly brutal on my mental and physical health), it’s ironic that I’ve barely moved the needle on that front. Clearly, I needed something to let me refocus my energy at home away from work, lest work simply consume me.

A friend having a look at the moon in daylight – first light with the new telescope and mount, May 2017

As a kid – back in the late 90s – my father bought a telescope. It was what we could afford: a cheap Celestron-branded Newtonian reflector tube on a manual tripod. But it was enough to see Jupiter, Saturn’s rings, and the moon. The tube still sits in the garage – it was left outside overnight once, wet, in freezing temperatures, and the focuser was damaged in another incident – so it lies idle now, practically unusable. But it is probably part of why I am so obsessed with space today, beyond the incredible engineering and beautiful science that goes into the domain. My current bedside reading is a detailed history of the Deep Space Network; a recent book on liquid propellant development is a definite recommendation for those interested in the area. Similar books litter my bookshelves, alongside space operas and books on software and companies.

M33, the Triangulum galaxy

I always felt a bit bad about ruining that telescope (it was, of course, me who left it out in the rain), and proposed that for our birthday (my father and I share a birthday, which makes things much more convenient) we should remedy the lack of a proper telescope in the family. I had been reading various astrophotography subreddits and forums for a while, astounded by the images terrestrial astrophotographers managed to acquire, so I pitched in the bulk of the cash to get an astrophotography-quality mount – the most important bit to spend money on, I had discovered. And so we had a new telescope in the family. Nothing spectacular – a Skywatcher 200mm Newtonian reflector – but on a solid mount, a Skywatcher EQ6-R Pro. Enough to start on a little astrophotography (and get some fabulous visual views along the way).

M81, Bode’s Galaxy

Of course, once one has a telescope, the natural inclination in this day and age is to share; and as I shared, I was encouraged to try more. And of course, I then discovered just how expensive astrophotography is as a hobby…

An early shot of Jupiter; I later opted to focus on deep-sky objects

But here it is – a new hobby, and one that I have engaged with with aplomb. The images in this post are all mine; they’re not perfect, but I’m proud of them. That I have discovered a love for something that taps directly into my passion for space is perhaps no surprise. Gigaclear is calming down a little as the organisation matures, and making proper time for my hobby has helped settle my own nerves a little.

The scope we bought back in April of 2017; now, in Feb 2019, I think I have what I would consider a “competent” astrophotography rig for deep space objects, albeit only small ones. That particular rabbit hole is worth a few more posts, I think – and therein lies the reason why I have penned this prose.

The Heart Nebula, slightly off-piste due to a mount aiming error

Twitter is a poor medium for detailed discussion of why. Look, here’s this fabulous new filter wheel! Here’s a cool picture of a nebula! But explaining how such things are accomplished – why I have decided to buy specific things or do particular things, and the thought processes around them – is not something Twitter can accommodate. And so, the blog re-emerges.

An early shot of the core of Andromeda, before I had really realised how big Andromeda is and how narrow my field of view was… and before I got a real camera!

I’ve got a fair bit to write about (as my partner will attest – that I can talk about her publicly is another welcome milestone since my last blog posts) and a blog feels like the right forum for it. And so I will rekindle this strange, isolated world – an entire website for one person, an absurd indulgence – to share my renewed passion for astrophotography. Hopefully I’ll add to the corpus the parts I feel are missing – rich documentation of the mistakes and errors, as well as celebrations of the successes.

And who knows – maybe that’ll help get my brain back on track, too. Because at the end of the day, working all day long isn’t good for your employer or for your own brain; but if you’re a workaholic, not working takes work!

Mapping Electromagnetic Field

This is part blog post, part prelude and part documentation.

At Electromagnetic Field (EMFCamp, being held later this month) I will be giving a talk on mobile mapping technologies, what the current state of the art looks like, precise location and some open source tools. We use mobile mapping and some of the tools I’ll discuss at my work, Gigaclear, to survey large areas of the rural UK for our fibre-to-the-home network build, which is how I’ve been able to wrangle a quick drive around the EMFCamp site at Eastnor from the survey vehicle.

That vehicle is equipped with fairly standard mobile mapping hardware: a Ladybug5 camera for panoramic 30MP images (which I can’t distribute for privacy reasons) and a Riegl VUX-1HA scanner for LiDAR scanning. The Riegl captures 1 million points each second and rotates its scan head 250 times every second – about 4,000 points per rotation.


Words of caution and apology

LiDAR data is sometimes a pain to work with. Even with the best kit in the world, and a bunch of time spent processing, there’s noise and variation in the output without control points and lots of manual marrying-up of points across overlapping passes of the scanner. This isn’t a project Gigaclear have done in our usual manner – preparing this in my evenings, I’ve had no such time – and so this dataset is presented as “best effort”, likely riddled with all sorts of errors and inaccuracies that we wouldn’t usually accept and which professional users will, rightly, sneer at!

In absolute terms the x/y accuracy of this dataset is pretty good: an upper bound of 5cm RMS error relative to OSGB36 (the British National Grid) can be expected throughout most of the scan. Within a single pass of the scanner, relative accuracy is around 3mm between points. This dataset, however, contains multiple overlapping, automatically aligned passes (visible as the point source ID in the LAS file), so there are some errors and anomalies between them. On top of this, the colour comes from overlaying images onto the points using a calibration file and alignment – and I know the alignment I used wasn’t great. The drivers also didn’t go down the middle of the campsite, so there’s a bit of a void there. So, expectations set!


Sensible scale

Very dense point clouds can often be counterproductive. Our initial dataset contained over 1 billion points. Most of the subsequent processing was done on a version thinned to a 5mm grid (still about a billion points), which weighs in at about 32 gigabytes and is a real pain to work with.

Intensity view – the infrared brightness of the reflection from the laser

What I’m publishing here is therefore a reduced dataset: the same data, thinned using simple decimation (keeping 1 in every 10 points), making it about 3.2 gigabytes and 92 million points – something that will fit in RAM on most modern PCs. In terms of detail it’s still pretty fantastic for many uses. It’s a LAS 1.4 file, georeferenced to the UK National Grid (OSTN15 flavour, for those who care), with some fairly imprecise classifications plus raw intensity and RGB data per point.
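If you want to reproduce (or further reduce) that thinning yourself, PDAL’s decimation filter does exactly this. A sketch using the PDAL Python bindings – the filenames here are hypothetical:

```python
import json
import pdal

pipeline_json = json.dumps([
    "eastnor_full.las",                          # hypothetical input file
    {"type": "filters.decimation", "step": 10},  # keep every 10th point
    {"type": "writers.las",
     "filename": "eastnor_1in10.las",
     "minor_version": 4},                        # stay LAS 1.4
])

pipeline = pdal.Pipeline(pipeline_json)
print(f"{pipeline.execute():,} points written")
```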

RGB colours – taking photo data and laying it onto the point cloud

This data can be post-processed to suit your needs, desires, and interests. If you’ve never worked with LiDAR data before, CloudCompare is a great tool to start with – you’ll need the alpha version for liblas LAS 1.4 support. If you fancy generating rasters or filtered versions of the data (or writing your own Python code to work with it), then PDAL is a great tool.

Hillshade maps are easily produced by asking PDAL to write a GeoTIFF with the Z dimension
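That hillshade workflow looks something like the following – a sketch with illustrative filenames and resolution, rasterising the Z dimension to a GeoTIFF with PDAL’s writers.gdal and then shading the result with GDAL’s gdaldem tool:

```python
import json
import subprocess
import pdal

pipeline = pdal.Pipeline(json.dumps([
    "eastnor_1in10.las",           # hypothetical input file
    {"type": "writers.gdal",
     "filename": "eastnor_dem.tif",
     "dimension": "Z",             # rasterise height
     "output_type": "max",         # highest point per cell
     "resolution": 0.5},           # 0.5m grid cells (illustrative)
]))
pipeline.execute()

# gdaldem turns the elevation raster into a shaded relief image
subprocess.run(["gdaldem", "hillshade", "eastnor_dem.tif",
                "eastnor_hillshade.tif"], check=True)
```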


… interesting stuff, right?

If you do think this sort of stuff is downright fascinating from a technology standpoint, I’ll be giving a talk on the underlying technology at EMFCamp, whenever the schedule computer deems it so. Come along and find out more!

I’m personally really excited to see what comes of giving a gathering like EMFCamp this sort of data, and I’ve already heard some great ideas – let me know what you make with it!

And if you fancy a job working on software that works with this sort of stuff, and solving similar interesting problems in the geospatial world, drop me a line or check our website.

The Data!

Eastnor Deer Park – LAS 1.4 – Version 1, 1:10 Decimated – 3.2GB – Download here

This dataset is also available for online consumption here, but if you’re going to do anything interesting or serve it to many people please don’t do it off this server. The online version was produced with PotreeConverter and uses the excellent Potree web based renderer.

As the creator of this dataset, I license it under a Creative Commons BY-SA licence. It may be used for any purpose, so long as it is attributed in some way and any derivative works are shared alike.

Eastnor Park LiDAR Survey is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.