Adventures in Differential Flexure

How’s that for a thrilling title? But this topic really does encapsulate a lot of what I love about astrophotography, despite the substantial annoyance it’s caused me lately…

Long exposure of M51 in Hydrogen Alpha – 900s

My quest for really nice photos of galaxies has, inevitably, driven me towards narrowband imaging, which can help bring out detail in galaxies and minimise light pollution. I bought a hydrogen alpha filter not long ago – a filter that blocks all light except a narrow, deep-red band around the hydrogen emission line. This filter has the unfortunate side effect of reducing the total amount of light hitting the sensor, meaning long exposures are required to drive the signal far above the noise floor. In the single frame above, the huge glow on the right is amplifier glow – an issue with the camera that grows worse as exposures get longer. Typically, this gets removed by taking dozens of dark frames with the lens cap on and subtracting the fixed amplifier glow from the light frames, a process called calibration. The end result is fairly clean – but what about these unfortunate stars?
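
In code, that calibration step is pleasantly dull. Here’s a minimal sketch using numpy and astropy (the file names are illustrative, and real stacking tools add sigma clipping and temperature matching on top):

```python
import glob

import numpy as np
from astropy.io import fits

# Build a master dark: the per-pixel median across many dark frames keeps
# the fixed amplifier glow and hot-pixel pattern while rejecting outliers.
darks = [fits.getdata(path).astype(np.float32)
         for path in sorted(glob.glob("darks/*.fits"))]
master_dark = np.median(np.stack(darks), axis=0)

# Subtract it from a light frame shot at the same exposure length, gain
# and sensor temperature, then write the calibrated frame back out.
light = fits.getdata("lights/m51_ha_900s.fits").astype(np.float32)
fits.writeto("m51_ha_900s_cal.fits", light - master_dark, overwrite=True)
```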

Oblong stars are a problem – they show that the telescope failed to accurately track the target for the entire exposure. Each pixel in this image (and you can see individual pixels here, in the hot pixels that appear as noise in the close-up) equates to 0.5″ of sky (0.5 arc-seconds). That’s two to four times finer than my seeing limit (the 1-2″ of wobble introduced by the atmosphere on a really good night), meaning I’m over-sampling nicely (Nyquist says we should over-sample by at least 2x to resolve all the detail the seeing allows). My stars are oblong by a huge amount – 6-8″, if not more!
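
For the curious, that 0.5″ figure falls out of the standard image-scale formula – 206.265 × pixel size (µm) ÷ focal length (mm). A quick sketch, where the 2.4 µm pixel size is my assumption for illustration:

```python
def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds of sky per pixel: 206.265 * pixel (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Assumed values: ~2.4 um pixels on the 1000 mm Newtonian.
scale = image_scale(2.4, 1000)
print(f"{scale:.2f} arcsec/px")           # ~0.49 arcsec/px
print(f"{1.5 / scale:.1f}x oversampled")  # ~3x in 1.5 arcsec seeing
```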

My guide system – the PHD2 software package, an ASI120MC camera and a 60mm guidescope – reported no worse than 0.5″ tracking error all night, meaning I should have seen essentially round stars. So what went wrong?

The most likely culprit is a slightly loose screw on my guidescope’s guiding rings, which I found after being pointed at a thing called “differential flexure” by a fantastic chap on the Stargazer’s Lounge forums (more on that later). But this is merely a quite extreme example of a real problem that can occur, and a nice insight into the tolerances and required precision of astronomical telescopes for high-resolution imaging. As I’m aiming for 0.5″ pixel accuracy, but practically won’t get better seeing than 1-2″, my guiding needs to be fairly good. The mount, with excellent guiding, is mechanically capable of 0.6-0.7″ accuracy; this is actually really great, especially for a fairly low-cost mount (<£1200). You can easily pay upwards of £10,000 for a mount, and not get much better performance.

Without guiding, though, it’s not terribly capable – mechanical tolerances aren’t perfect in a cheap mount, and periodic error from the rotation of the worm gears creeps in. You can program the mount to correct for this, but it won’t be perfect. So we have to guide the mount. While the imaging camera takes long, 5-10 minute exposures, the guiding camera takes short 3-5 second exposures and feeds software (in my case, PHD2) which tracks a star’s centre over time, using changes in that centre to generate correction impulses that are sent to the mount’s control software (in my case, INDI and the EQmod driver). This gets us to the required stability over time.
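
At its core, guiding is a simple feedback loop. The sketch below is a heavily simplified illustration of the idea – not PHD2’s actual algorithm, which adds calibration, backlash handling and noise filtering – and `expose` and `pulse` are hypothetical stand-ins for the real camera and INDI/EQmod interfaces:

```python
import numpy as np

def centroid(frame: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted star centre, good to a fraction of a pixel."""
    img = np.clip(frame - np.median(frame), 0, None)  # crude background cut
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return float((xs * img).sum() / total), float((ys * img).sum() / total)

def guide(expose, pulse, scale_arcsec_px: float = 2.5, gain: float = 0.7):
    """expose() -> guide frame; pulse(ra_arcsec, dec_arcsec) nudges the mount.

    Both callables are hypothetical hardware hooks. Each cycle measures the
    star's drift from its starting position and sends a damped, proportional
    correction back to the mount.
    """
    ref_x, ref_y = centroid(expose())
    while True:
        x, y = centroid(expose())
        pulse((ref_x - x) * scale_arcsec_px * gain,
              (ref_y - y) * scale_arcsec_px * gain)
```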

My Primaluce Lab 60mm guidescope and ASI120MC guide camera on the “bench”, in PLL 80mm guidescope rings on ADM dovetails

The reason my long exposures sucked, despite all this, is simple – my guide camera was not always moving in lockstep with my imaging camera. That is to say, when the mount moved a little, or failed to move, the imaging camera was affected but the guiding camera was not. This is called differential flexure – a difference in movement between two optical systems. Fundamentally, it arises because my guidescope is a completely separate optical system from my main telescope – if it doesn’t move when my main scope does, the guiding system doesn’t know to correct! The inverse applies, too – the guidescope can move and trigger a correction for an imaging system that hasn’t moved at all.

With a refractor telescope, if you just secure your guidescope really well to the main telescope, all is (generally) well – the guidescope mounting is the only practical source of error, outside of focuser wobble. In a Newtonian such as the one I use, though, there are plenty of other sources. At the end of a Newtonian telescope is a large mirror – 200mm across, in my case – supported by a mirror cell. Pinching the mirror distorts it badly (by dozens or hundreds of nanometres, which is unacceptable), so simply clamping it in place isn’t practical. This means that as the telescope moves, the mirror can shift a little – not much, but enough to move the image slightly on the sensor. Correcting for this by moving the mount isn’t ideal – better mirror cells reduce the movement at source – but it’s better than doing nothing at all. The secondary mirror has similar problems. The tube itself, being quite large, can also expand or contract – carbon fibre tubes minimise this, but they’re expensive. Refractors, broadly, hold all their lenses securely in place and so don’t suffer these problems.

And so the answer seems to be a solution called “off-axis guiding” (OAG). In this system, rather than using a separate guidescope, a small prism inserted in the optical train (after the focuser but before the camera) “taps” off a little of the light – the sensor is usually a rectangle in a circular light path, so this is easy to achieve without affecting the light the sensor receives. The tapped light is bounced into a camera mounted at 90 degrees to the optical train, which performs the guiding. There are issues with this approach – a narrower (and hard to move) field of view, and the need for a more sensitive guide camera to find stars – but the longer focal length gives a naturally far finer image scale (0.7″ per pixel rather than 2.5″), so the potential accuracy of guide corrections improves. More importantly, your guiding light shares fate with the imaging light – it uses the same mirrors, tube, and so on. If the imaging light shifts, so does the guiding light, optically entwined.
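
Those two figures are just the image-scale formula from earlier evaluated at two focal lengths. The ASI120’s 3.75 µm pixel size is real; the guidescope focal length below is an assumption chosen to match the numbers above:

```python
# Same formula as before: 206.265 * pixel size (um) / focal length (mm).
# ASI120MC pixels are 3.75 um; ~310 mm is an assumed guidescope focal length.
oag_scale = 206.265 * 3.75 / 1000       # prism at the Newtonian's 1000 mm: ~0.77"/px
guidescope_scale = 206.265 * 3.75 / 310  # short separate guidescope:       ~2.5"/px
```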

The off-axis guiding route is appealing, but complex. I’ll undoubtedly explore it – I want to improve my guide camera regardless, and the OAG prism is “only” £110 or thereabouts. The guide camera bears the brunt of the cost, weighing in at around £500-700 for a quality high-sensitivity model.

But in the immediate future my budget doesn’t allow for either of these solutions, so I’ve done what I can to minimise flexure of the guidescope relative to the main telescope. The focus has been the screws used to hold the guidescope in place – both they and the threads in the guidescope rings are really poorly machined, and the screws’ plastic tips can flex under load.

Before and after – plastic-tipped screws

I’ve cut the tips back almost to the metal to minimise movement under compression, and used Loctite to secure two of the three screws in each ring. The coarse focus tube and helical focuser on the Primaluce guidescope also have some grub screws, which I’ve adjusted – this has helped considerably in reducing the camera’s ability to move.

Hopefully that’ll help for now! I’m also going to ask a friend with access to CNC machines about machining some more solid tube rings for the guidescope; that would radically improve things and shouldn’t cost much. Practically, though, OAG is the favourite for a Newtonian setup – so that’s the route I expect to take in the long run.

Despite all this, I managed a pretty good stab at M51, the Whirlpool Galaxy. These exposures didn’t suffer as badly from differential flexure – probably because the scope was pointed somewhere different and so didn’t hit the same issue. I had two nights of really good seeing and captured a few hours of light. The image highlights the benefits of the Newtonian setup – a 1000mm focal length with a fast focal ratio, paired with my high-resolution camera, lets me capture great detail in a short period of time.

M51, imaged over two nights at the end of March
Detail, showing some slightly overzealous deconvolution of stars and some interesting features

Alongside my telescope debugging, I’m working on developing my observatory plans into a detailed, budgeted design – more on that later. I’ve also been tinkering with some CCDInspector-inspired Python scripts to analyse star sharpness across a large number of images, and in doing so highlight any potential issues with the optical train or telescope in terms of flatness, tilt, and so on. So far this tinkering hasn’t led anywhere interesting, which either suggests my setup is near perfect (which I’m sure it isn’t) or that I’m missing something – more tinkering to be done!

Map of sharpness across 50 or so luminance frames, showing a broadly even distribution and no systemic sharpness deviance
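
The gist of those scripts is easy to sketch. The version below assumes astropy and photutils, and the grid-binning approach is my illustration rather than CCDInspector’s actual method: it averages DAOStarFinder’s per-star sharpness statistic over a coarse grid of the sensor, so tilt or curvature shows up as a gradient across the grid:

```python
import glob

import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

GRID = 8  # split the sensor into an 8x8 grid of cells
sums = np.zeros((GRID, GRID))
counts = np.zeros((GRID, GRID))

for path in glob.glob("lights/*.fits"):
    data = fits.getdata(path).astype(np.float32)
    _, median, std = sigma_clipped_stats(data, sigma=3.0)
    stars = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)(data - median)
    if stars is None:
        continue  # no stars found in this frame
    h, w = data.shape
    for row in stars:
        gx = min(int(row["xcentroid"] / w * GRID), GRID - 1)
        gy = min(int(row["ycentroid"] / h * GRID), GRID - 1)
        sums[gy, gx] += row["sharpness"]
        counts[gy, gx] += 1

# Mean sharpness per cell; a systematic gradient suggests tilt or curvature.
print(np.round(sums / np.maximum(counts, 1), 3))
```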

A New Chapter

It’s been almost three years since I last wrote a real long-form blog post (past documentation of LiDAR data aside). Given that, particularly for the last two years, long-form writing has been the bulk of my day job, it’s with a wry smile I wander back to this forlorn medium. How dated it feels, in the age of Twitter and instant 140/280-character gratification! And yet such a reflection of my own mental state, in many ways.

I’ve been working at Gigaclear for about as long – three years – as my absence from blogging; this is no coincidence. My work at BBC R&D was conducted in a sufficiently calm atmosphere to permit me the occasional hobby, and the mental energy to engage with it on fair terms. I spent large chunks of that time writing imageboard software; that particular project I consider a success – not only has it been taken on by others technically and organisationally, it’s now hosting almost 2 million images and 10 million comments, and has around a quarter of a million users. Not too bad for something I hacked together on long coach journeys and in my evenings. I tinkered with drones on the side, building a few and writing software for controlling them.

At Gigaclear – still a startup, at heart – success and survival have demanded my full attention; that is in part a function of working for an organisation that has, in the span of three years, grown in staff by over 150%, in live customers by 400%, and in built network by 600%. We’ve cycled senior leadership teams almost annually and recently gone through an investor buyout. It is not a calm organisation, and I am lucky (or unlucky, depending on your view) enough to have been close enough to the pointy end of things to feel some of the brunt of it. It has been an incredible few years, but not an easy few years.

I am a workaholic, and presented with an endless stream of work, I find it difficult to move on. The drones have sat idle and gathered dust; my electronics workbench is in constant disarray, PCBs scattered. Even for my personal projects, I’ve written barely any code; the largest project I’ve managed lately has been a system to manage a greenhouse heater and temperature sensors (named Boothby), amounting to a few hundred lines of C and Python. My evenings have involved scrawling design diagrams and organisational charts, endless PowerPoint drafts and revisions, hundreds of pages of documentation, too much alcohol, curry, and stress. Given that part of my motivation for moving from R&D to Gigaclear was health (six hours a day commuting into London was fairly brutal, mentally and physically), it’s ironic that I’ve barely moved the needle on that front. Clearly, I needed something to let me refocus my energy at home away from work, lest work simply consume me.

A friend having a look at the moon in daylight – first light with the new telescope and mount, May 2017

As a kid – back in the late 90s – my father bought a telescope. It was what we could afford – a cheap Celestron-branded Newtonian reflector tube on a manual tripod. But it was enough to see Jupiter, Saturn’s rings, and the Moon. The tube still sits in the garage – it was once left outside overnight, wet, in freezing temperatures, and the focuser was damaged in another incident; it’s practically unusable now. But it is probably part of why I am so obsessed with space today, beyond the incredible engineering and beautiful science that goes into the domain. My current bedside reading is a detailed history of the Deep Space Network; a recent book on liquid propellant development is a definite recommendation for those interested in the area. Similar books litter my bookshelves, alongside space operas and books on software and companies.

M33, the Triangulum Galaxy

I always felt a bit bad about ruining that telescope (it was, of course, me who left it out in the rain), and proposed that for our birthday (my father and I share a birthday, which makes things much more convenient) we should remedy the lack of a proper telescope in the family. I had been reading various astrophotography subreddits and forums for a while, astounded by the images terrestrial astrophotographers managed to acquire, so I pitched in the bulk of the cash to get an astrophotography-quality mount – the most important thing to spend money on, I had discovered. And so we had a new telescope in the family. Nothing spectacular – a Skywatcher 200mm Newtonian reflector – but on a solid mount, a Skywatcher EQ6-R Pro. Enough to start with a little astrophotography (and get some fabulous visual views along the way).

M81, Bode’s Galaxy

Of course, once one has a telescope, the natural inclination in today’s day and age is to share; and as I shared, I was encouraged to try more. And of course, I then discovered just how expensive astrophotography is as a hobby…

An early shot of Jupiter; I later opted to focus on deep-sky objects

But here it is – a new hobby, and one that I’ve managed to throw myself into with aplomb. The images in this post are all mine; they’re not perfect, but I’m proud of them. That I have discovered a love for something that taps directly into my passion for space is perhaps no surprise. Gigaclear is calming down a little as the organisation matures, but making proper time for my hobby has helped settle my own nerves a little.

We bought the scope back in April 2017; now, in February 2019, I think I have what I’d consider a “competent” astrophotography rig for deep-space objects, albeit only small ones. That particular rabbit hole is worth a few more posts, I think – and therein lies the reason I have penned this prose.

The Heart Nebula, slightly off-piste due to a mount aiming error

Twitter is a poor medium for detailed discussion of the why. Look, here’s this fabulous new filter wheel! Here’s a cool picture of a nebula! But explaining how such things are accomplished, why I’ve decided to buy specific things or do particular things, and the thought processes behind them – those are not things Twitter can accommodate. And so, the blog re-emerges.

An early shot of the core of Andromeda, before I had really realised how big Andromeda is and how narrow my field of view was… and before I got a real camera!

I’ve got a fair bit to write about (as my partner will attest – that I can talk about her publicly is another welcome milestone since my last blog posts), and a blog feels like the right forum for it. And so I will rekindle this strange, isolated world – an entire website for one person, an absurd indulgence – to share my renewed passion for astrophotography. Hopefully I can add to the corpus the parts I feel are missing – rich documentation of mistakes and errors, as well as celebrations of the successes.

And who knows – maybe that’ll help get my brain back on track, too. Because at the end of the day, working all day long isn’t good for your employer or for your own brain; but if you’re a workaholic, not working takes work!

Mapping Electromagnetic Field

This is part blog post, part prelude and part documentation.

At Electromagnetic Field (EMFCamp, being held later this month) I will be giving a talk on mobile mapping technologies, what the current state of the art looks like, precise location and some open source tools. We use mobile mapping and some of the tools I’ll discuss at my work, Gigaclear, to survey large areas of the rural UK for our fibre-to-the-home network build, which is how I’ve been able to wrangle a quick drive around the EMFCamp site at Eastnor from the survey vehicle.

That vehicle is equipped with fairly standard mobile mapping hardware, using a Ladybug5 camera for panoramic 30MP images (which I can’t distribute for privacy reasons) and a Riegl VUX-1HA scanner for LiDAR scanning. The Riegl captures 1 million points each second and rotates its scan head 250 times every second.


Words of caution and apology

LiDAR data is sometimes a pain to work with. Even with the best kit in the world, and a bunch of time spent processing, without control points and lots of manual marrying up of points in overlapping passes of the scanner, there’s noise and variation in the output. This isn’t a project Gigaclear has done in our usual manner – preparing this in my evenings, I’ve had no such time – and so this dataset is presented as a “best effort” dataset, likely riddled with all sorts of errors and inaccuracies that we wouldn’t usually accept and which professional users will, rightly, sneer at!

In absolute terms the x/y accuracy of this dataset is pretty good, and an upper bound of 5cm RMS error from OSGB36 (the British National Grid) can be expected throughout most of the scan. Within the scanner output the accuracy is around 3mm between points – but only within the same pass. This dataset contains multiple overlapping, automatically aligned passes (visible as the point source ID in the LAS file), and so there are some errors and anomalies. On top of this, the colour in this dataset comes from overlaying images on the points using a calibration file and alignment – and I know the alignment I used wasn’t great. And the drivers didn’t go down the middle of the campsite, so there’s a bit of a void there. So, expectations set!


Sensible scale

Often, very dense point clouds can be counterproductive. Our initial dataset contained over 1 billion points. Most of the subsequent processing was done on a version thinned to a 5mm grid (still about a billion points) – around 32 gigabytes, and a real pain to work with.

Intensity view – the infrared brightness of the reflection from the laser

What I’m publishing here is therefore a reduced dataset; it is the same dataset, thinned using simple decimation (taking 1 in every 10 points), making it about 3.2 gigabytes in size and containing 92 million points – something that will fit in RAM on most modern PCs. In terms of detail, it’s still pretty fantastic for many uses. It’s a LAS 1.4 file, georeferenced to the UK National Grid (OSTN15 flavour, for those who care) with some fairly imprecise classifications, raw intensity and RGB data per point.
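
For reference, the thinning itself is a one-filter PDAL pipeline. A sketch using PDAL’s Python bindings, with illustrative file names:

```python
import pdal

# Read the full-density LAS, keep every 10th point, write the result.
pipeline = pdal.Pipeline("""
[
    "eastnor_full.las",
    {"type": "filters.decimation", "step": 10},
    "eastnor_1in10.las"
]
""")
count = pipeline.execute()  # returns the number of points processed
print(count, "points written")
```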

RGB colours – taking photo data and laying it onto the point cloud

This data can be post-processed for your needs, desires and interest. If you’ve never worked with LiDAR data before, CloudCompare is a great tool to start with – you’ll need the alpha version for its liblas LAS 1.4 support. If you fancy generating rasters or filtered versions of the data (or writing your own Python code to work with it) then PDAL is a great tool.

Hillshade maps are easily produced by asking PDAL to write a GeoTIFF with the Z dimension
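
The raster behind a hillshade like that is another short pipeline – writers.gdal grids a chosen dimension (Z, here) into a GeoTIFF, and GDAL’s gdaldem renders the shaded relief. File names and the 0.25m cell size are illustrative:

```python
import pdal

# Grid Z onto a 0.25 m GeoTIFF using inverse-distance weighting, then run:
#   gdaldem hillshade eastnor_dem.tif eastnor_hillshade.tif
pipeline = pdal.Pipeline("""
[
    "eastnor_1in10.las",
    {
        "type": "writers.gdal",
        "filename": "eastnor_dem.tif",
        "dimension": "Z",
        "output_type": "idw",
        "resolution": 0.25
    }
]
""")
pipeline.execute()
```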


… interesting stuff, right?

If you do think this sort of stuff is downright fascinating from a technology standpoint, I’ll be doing a talk on the underlying technology at EMFcamp, whenever the schedule computer deems it so. Come along and find out more!

I’m personally really excited to see what comes of giving a gathering like EMFcamp this sort of data, and I’ve already heard some great ideas – let me know what you make with it!

And if you fancy a job working on software that works with this sort of stuff, and solving similar interesting problems in the geospatial world, drop me a line or check our website.

The Data!

Eastnor Deer Park – LAS 1.4 – Version 1, 1:10 Decimated – 3.2GB – Download here

This dataset is also available for online consumption here, but if you’re going to do anything interesting or serve it to many people please don’t do it off this server. The online version was produced with PotreeConverter and uses the excellent Potree web based renderer.

As its creator, I license this dataset under a Creative Commons BY-SA license. It may be used for any purpose, so long as it is attributed in some way and any derivative works are shared alike.

Eastnor Park LiDAR Survey is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.