Coma correction and off-axis guiding

It’s November, which means we’re well into the winter season for astrophotography, with early starts and long nights. So far, I think I’ve had two nights – the weather has been utterly abysmal and I lost a few nights to not having everything ready to go.

This season I thought I’d treat my Newtonian (in an upgradeable fashion) to a new coma corrector and some guiding tweaks. All Newtonian telescopes suffer from coma – an off-axis aberration that smears stars towards the edge of the field, resulting in “comet-like” blurry stars in the corners of images. A coma corrector does what it says on the tin and fixes this, producing a flat, well-corrected field for your eyepiece or sensor.

As with any bit of glass between you and the sky, though, there’s potential to make things worse. I bought a coma corrector when I was starting out with imaging, since it was clear I’d need something – but I bought a cheap one, a Baader MPCC Mk3. And while it did indeed largely “fix” the coma, it also introduced some other distortions to the field, which limited the sharpness I could achieve.

So this year, impressed by a second-hand eyepiece I picked up (a 17mm DeLite), I coughed up for a TeleVue Paracorr 2, which is widely regarded as the best coma corrector mortals can lay their hands on for a reasonable sum (for some definitions of reasonable).

I also wanted to have a crack at replacing my guidescope with an off-axis guiding solution – as I’ve written about in prior blogs, this should produce more accurate guide corrections without any differential flexure or mirror slop. To this end I picked up a second hand ASI174MM Mini guide camera, and a ZWO off-axis guider. The off-axis guider uses a prism to bounce otherwise-unused light from the edge of the visual field into the guide camera.

Adapters, adapters, adapters

Of course, having a new shiny thing means integrating it with your other shiny things. In my optical train I had my MPCC, an adjustable spacer, the filter wheel, and then the camera. Between the back of the coma corrector and the sensor there has to be exactly 55mm of space (the glass filter adds 1mm of optical path, which has to be allowed for) to get the focal plane to land in the right place.
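The budget itself is just addition: the required distance, plus the filter’s contribution, compared against whatever metal is in the stack. Here’s a minimal sketch of that sum – only the 55mm requirement and the ~1mm filter allowance come from the real setup; every spacer thickness below is a made-up placeholder, not my actual parts list.

```rust
// Back-focus budget for the imaging train. All stack values here are
// placeholders for illustration; only the 55mm requirement and the ~1mm
// added by the filter come from the setup described above.
fn main() {
    let required_optical = 55.0_f64; // mm, corrector to focal plane
    let filter_allowance = 1.0_f64;  // mm, extra path through the filter glass
    let target_mechanical = required_optical + filter_allowance;

    // Hypothetical stack: camera flange depth, filter wheel, OAG, spacer, shim
    let stack = [6.5, 20.0, 16.5, 12.0, 1.0];
    let total: f64 = stack.iter().sum();

    println!(
        "target {:.1} mm, stack {:.1} mm, error {:+.2} mm",
        target_mechanical,
        total,
        total - target_mechanical
    );
}
```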

And of course then there are the threads. T2 is the main one I use for the imaging system – it being a small sensor – but M48 is also used. What isn’t used is what TeleVue supply by default, which is a bizarre 2.4″ imperial thread, because the Americans have yet to be civilised in this regard. So, after a quick stop at Teleskop Express to pick up a special TeleVue-to-T2 adapter, we were sorted.

But then it gets complicated – we have to get the right back-focus from the Paracorr’s top lens surface to the sensor, now adding in the off-axis guider. I ended up with a smorgasbord of adapters, spacers and padding shims, and a bunch of diagrams. As it turns out, I was hideously overthinking this: ZWO have standardised all their depths, so you can do it all with the kit they supply between the camera, filter wheel and OAG.

I’ve actually ended up shimming things a little to get closer to a mechanically correct fit for the Paracorr lens, but still might do some other adjustments to get the Paracorr-to-sensor distance spot on and reduce vignetting for the guide camera.

So this is what it all looks like, assembled…

Thoughts on the Paracorr

The overall initial impression is very good. Mechanically it’s a very well-made thing, and my only complaint (though I understand why they haven’t done it) is the lack of a safety stop on the outside of the tube. Optically, it looks superb in comparison to the MPCC. I’ve only shot a few frames so far, but even unguided they’re looking great, with sharp stars to the edge (some polar-alignment drift aside) and good detail in the middle. This is an unguided 120s exposure of M42.

However, the lack of a good manual hurt me. The documentation for the Paracorr is incredibly sparse for imaging use. There are no mechanical drawings relating the adapter to the top lens surface, so you’re left guessing at what the offsets end up being. Getting the focuser positioning set right took me longer than I’ll admit, too.

So now it’s all set up, it’s looking great – though the weather’s back to abysmal – but TeleVue could do with writing another two sides of A4 on how to make best use of it all.

The OAG I’ve got parfocal, but I haven’t yet had much experience with it – and after taking off the guidescope I forgot to rebalance the mount, so a quick guiding test was really struggling mechanically. I’ll do a further post on that in due course.

As with last year I’m still running everything on a Raspberry Pi 4 mounted onto the telescope directly. This basically works very well with Ethernet to the scope – I’ve got a couple of Reolink IP cameras for monitoring mount motion remotely so I can run it all night from inside the house.

About the only thing I still need to nail down is dew heaters. I no longer have a guidescope – the main thing I needed a heater for in the first place – and the Paracorr was a bit tricky to get a heater strip around. In its final position I can now refit one and avoid dew on the front optic, which has been a common issue for me. Secondary and primary heaters I can hopefully avoid!

Rusty all-sky cameras

All-sky cameras are a lovely idea, especially if you’re someone like me who enjoys hiding in the warm most of the time and letting the computers look after the telescope in the depths of winter. I’ve been enjoying some time (when the clouds permit) this summer looking at things directly, but deep-space-object imaging in summer is brutal – all the setup and teardown for only an hour or two of adequate darkness.

An all-sky camera is just a camera with a fisheye lens looking up 24/7/365, taking exposures of typically 30 seconds or more in darkness to pick up stars. You can analyse these exposures to find out about sky conditions, use them as a quick visual guide, and spot meteors. And since they live outside all the time, there’s no setup or teardown!

So I figured I’d have a go at doing an all-sky camera, from scratch. There were a few reasons I wanted to do this. My mirror project is still going (slowly) when I get the headspace, but it takes a lot of messy setup and teardown, so is hard to dip into in the evenings when I have time. But mostly I was curious to dig a bit more into INDI, the protocol I use to control my telescope.

INDI is the Instrument Neutral Distributed Interface, and it is XML over TCP. It’s quite an old design, using fairly old technologies. However, it’s well liked by manufacturers, as it has a simple C API, a very flexible abstraction model, and works well over networks as well as for local use. indiserver is the reference server implementation and has dozens of drivers for cameras, mounts, observatory dome motors, and everything in between. I can write C, but I avoid it whenever I can – so rather than use the libindi library or the pyindi libraries (which depend on the C library), I thought I might have a go at writing a new implementation from scratch.
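To give a flavour of what “XML over TCP” means in practice, the sketch below just connects to an indiserver and asks it to describe everything it knows about. The hostname is a placeholder, and a real client would parse the reply incrementally rather than dumping it, but the handshake itself (connect to port 7624, send getProperties) is how every INDI client starts.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // indiserver listens on TCP port 7624 by default. "raspberrypi.local"
    // is a placeholder for wherever your indiserver is running.
    let mut stream = TcpStream::connect("raspberrypi.local:7624")?;

    // Ask every connected driver to describe the properties it exposes.
    stream.write_all(b"<getProperties version=\"1.7\"/>\n")?;

    // The server replies with a stream of XML definitions (defNumberVector,
    // defSwitchVector, defTextVector, ...). Here we just print the first
    // chunk; a real client keeps reading and parsing for as long as it runs.
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?;
    println!("{}", String::from_utf8_lossy(&buf[..n]));
    Ok(())
}
```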

I’ve been tinkering with Rust for a while now, and wrote my first serious project in it last month – a parser for Telcordia’s slightly obscure optical time domain reflectometry (OTDR) format. I found it quite jarring coming from Python, even having written C/C++ in the past, but after a while I realised I was making better structural decisions about my code than I was in Python et al. The Rust compiler is aggressively studious and forces you to think things through – but once you get used to the style, it becomes something quite conversational rather than adversarial.

Aside from writing some Rust I needed some hardware, so through a half-dozen eBay orders and bits I had lying around, I assembled a “v1” camera. This contained a Pi 4, a WiFi radio with a high-gain antenna, an ASI120MC camera I was using for guiding, and a 12V to 5V step-down board. The power was supplied by a 12V PSU in a waterproof box sat next to this.

To look out, the enclosure got a dome thrown on top – an old CCTV camera dome, with silicone sealant around the rim to make a good watertight fit. I didn’t get the hole position spot on, but it was close enough and seals up fine.

Armed with some hardware, I was ready to test. My first test images revealed some horrendous hot pixels – a known issue with un-cooled cameras – and some clouds. An excellent first step!

One frame from the camera
One frame, gently post-processed

Taking frames over the course of an evening and assembling them into a video yielded a fairly nice result. To drive the camera I used KStars, with indiserver running on the Pi 4 to control the hardware.

I assembled the above video in PixInsight, having done hot pixel removal and a few other bits of post-processing. Not really a viable approach for 24/7 operation!

v2 – Brown Noctuas – not just for PCs! They also do tiny ones.
Inlet and outlet, with grilles to keep the smaller bits of wildlife out (hopefully)

Hot pixels are more or less guaranteed on uncooled cameras, but the box was getting quite hot. So I’ve now wrapped it in shiny aluminium foil to increase its solar reflectance index, and added a 40mm fan to circulate air through the enclosure (with a 40mm opening on the far side, camera in the middle – some quick CFD analysis suggested this as a sensible approach).

This definitely helps matters somewhat, though in winter a dew heater will be required, and rain removal is something that bears further study – my initial approach involves some G Techniq car windscreen repellent.

I’ve now started on an INDI client implementation in Rust. It has been a challenge so far. For starters, Rust doesn’t have many XML parsers/generators, and those that exist aren’t well documented. Even with the basics working, the way INDI operates presents some challenges. The protocol essentially shoves XML hierarchies around and then makes updates to elements within those hierarchies, expecting clients to trigger events or change their own state in response. There’s very little protocol-defined convention and a lot of unwritten expectation.

This makes for a very flexible protocol, but a very uncertain client. This doesn’t map into Rust terribly well, or at least requires a more complex level of Rust than I’ve used to date! It does also explain a great deal about some of the challenges I’ve had with stable operation using INDI clients of various forms. There’s such a thing as too much rigidity in abstractions, but there’s definitely such a thing as too little.
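To make that concrete, this is roughly the shape I’m modelling it as at the moment: a tree of devices, property vectors and members that gets mutated whenever a set message arrives. The type and field names are my own working sketch (nothing from libindi), and the device and property names in main are just the usual CCD Simulator examples.

```rust
use std::collections::HashMap;

// INDI property values come in a handful of flavours: text, number,
// switch, light (a state lamp) and BLOB (e.g. a FITS frame).
#[allow(dead_code)] // only Number is exercised in this sketch
#[derive(Debug, Clone)]
enum PropertyValue {
    Text(String),
    Number(f64),
    Switch(bool),
    Light(String), // Idle / Ok / Busy / Alert
    Blob(Vec<u8>),
}

// A property vector groups named members and carries its own state.
#[derive(Debug, Default)]
struct PropertyVector {
    state: String, // Idle / Ok / Busy / Alert
    members: HashMap<String, PropertyValue>,
}

// A device owns a set of property vectors; the client owns a set of devices.
#[derive(Debug, Default)]
struct Device {
    vectors: HashMap<String, PropertyVector>,
}

#[derive(Debug, Default)]
struct World {
    devices: HashMap<String, Device>,
}

fn main() {
    let mut world = World::default();

    // Roughly what handling a setNumberVector message looks like: walk the
    // tree, creating nodes as needed, and update the member in place.
    let device = world.devices.entry("CCD Simulator".to_string()).or_default();
    let vector = device.vectors.entry("CCD_EXPOSURE".to_string()).or_default();
    vector.state = "Busy".to_string();
    vector
        .members
        .insert("CCD_EXPOSURE_VALUE".to_string(), PropertyValue::Number(30.0));

    println!("{:#?}", world);
}
```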

So, the next step is getting the basic client working and stable, with good, verifiable error handling. I’m intending to fuzz my client in the same way I’ve fuzzed otdrs, but also to have extensive test cases throughout, so that I can replicate well-behaved servers as well as the full gamut of odd network errors and conditions that can arise in INDI ecosystems. Hopefully I’ll end up with a client library for INDI which is fairly bulletproof – and then I can start writing an application!
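For the fuzzing side, the plan is the same cargo-fuzz setup I used for otdrs: throw arbitrary bytes at the message parser and require that it returns errors rather than panicking. A minimal sketch of what such a target might look like, assuming a hypothetical parse_message entry point in the client crate:

```rust
// fuzz/fuzz_targets/parse_message.rs – a cargo-fuzz target sketch.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Only feed the parser valid UTF-8; the transport layer would have
    // rejected anything else before it reached this point.
    if let Ok(text) = std::str::from_utf8(data) {
        // indi_client::parse_message is a placeholder name for the real
        // entry point; the property we care about is "never panics".
        let _ = indi_client::parse_message(text);
    }
});
```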

My initial plan is to write something nice and basic that just does a fixed exposure and stores the raw FITS files on disk in a sensible layout. But once that’s done, I want to tackle image analysis for auto-exposure and dark frame processing. This will involve parsing FITS frames, doing some processing of them, and writing either FITS or PNG files out for storage.
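The “sensible layout” part is the easy bit to sketch now: one directory per night, one FITS file per exposure, named by UTC timestamp. Nothing here is settled – the root path and naming scheme below are placeholders – but it’s the sort of thing the first version will do.

```rust
use chrono::{DateTime, Utc};
use std::path::PathBuf;

// Build a path like /srv/allsky/2021-11-14/223000_30s.fits for a frame
// taken at the given instant. The layout and root are placeholders.
fn frame_path(data_root: &str, taken: DateTime<Utc>, exposure_s: u32) -> PathBuf {
    PathBuf::from(data_root)
        .join(taken.format("%Y-%m-%d").to_string())
        .join(format!("{}_{}s.fits", taken.format("%H%M%S"), exposure_s))
}

fn main() {
    let path = frame_path("/srv/allsky", Utc::now(), 30);
    println!("{}", path.display());
}
```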

It’s definitely an interesting challenge, but it feels like a tractable way to extend my Rust knowledge. I’ve been really taken with the language as a tool for systems programming, which overlaps quite well with astronomy software: we generally want high levels of reliability, do plenty of maths and processing that benefits from low-level performance, and increasingly work over networks or with remote hardware. It feels like a good fit – it just needs some more time invested in it.

Nationalise Openreach?

Disclaimer: I am Chief Engineer for Gigaclear Ltd, a rural-focused fibre-to-the-home operator with a footprint in excess of 100,000 homes in the south of the UK. So I have a slight interest in this, but I also know a bit about the UK market. These are my own thoughts, though, and don’t in the least represent company policy or direction.

Labour has recently proposed, as an election pledge, to nationalise Openreach and make them the national monopoly operator for broadband, and to give everyone in the UK free internet by 2030.

The UK telecoms market today is quite fragmented and complex, and so this is not the obvious win that it might otherwise appear to be.

In lots of European markets there’s a franchising model, and we do this in other utility markets – power being an excellent example. National Grid is a private company that runs the transmission networks, and Distribution Network Operators (DNOs) like SSE, Western Power, etc. run the distribution networks in their regions. All are private companies with no shares held by government – but the market is heavily regulated, and things like 100% coverage at reasonable cost are built in.

The ideal outcome for the UK telecoms market would clearly have been for BT (as it was then) never to have been privatised, and for the government to simply decide on a 100% fibre-to-the-home coverage model. This nearly happened, and that it didn’t is one of the great tragedies in the story of modern Britain; if it had, we’d be up at the top of the leaderboard on European FTTH coverage. As it is, we only just made it onto the leaderboard this year.

But that didn’t happen – Thatcher privatised it, and regulation was quite light-touch. The government didn’t retain majority control, and BT’s shareholders decided to sweat the asset they had, with R&D investment aimed strategically at doing just that, alongside some national network build-out. FTTC/VDSL2 was the last sticking plaster that made economic sense for copper after ADSL2+; LR-VDSL and friends might have given them some more time if the end of copper were still tied to performance.

As it is, enough people have been demonstrating the value of FTTH for long enough now that the focus has successfully shifted from “fast enough” to “long-term enough”. New copper technologies won’t last the next decade, and have huge reliability issues. Fibre to the home is the only long-term option to meaningfully improve performance, coverage, etc, especially in rural areas.

So how do we go about fixing the last 5%?

First, just so we’re clear, there are layers to the UK telecoms market – you have infrastructure owners who build and operate the fibre or copper. You have wholesale operators who provide managed services like Ethernet across infrastructure – people like BT Wholesale. Then you have retail operators who provide an internet connection – these are companies like BT Retail, Plusnet, TalkTalk, Zen, Andrews & Arnold, Sky, and so on. To take one example, Zen buy wholesale services from BT Wholesale to get bits from an Openreach-provided line back to their internet edge site. Sometimes Zen might go build their own network to an Openreach exchange so they effectively do the wholesale bit themselves, too, but it’s the same basic layers. We’re largely talking about the infrastructure owners below.

The issue is always that, commercially, the last 5-10% of the network – the hardest-to-reach places – will never make sense to build, because it’s really expensive to do. Gigaclear’s model and approach are entirely designed around that last 5%, so we can make it work, but it takes a long-term view. The hard-to-reach is, after all, hard to reach.

But let’s say we just nationalise Openreach. Now Openreach, in order to reach the hardest-to-reach, will need to overbuild everyone else. That includes live state-aid funded projects. While it’s nonsense to suggest that state aid is a reason why you couldn’t buy Openreach, it is a reason why you couldn’t get Openreach to go overbuild altnets in receipt of state aid. It’d also be a huge waste of money – billions already spent would simply be spent again to achieve the same outcome. Not good for anyone.

So let’s also say you nationalise everyone else, too – buy Virgin Media, Gigaclear, KCOM, Jersey Telecom, CityFibre, B4RN, TalkTalk’s fibre bits, Hyperoptic, and every startup telecom operator that’s built fibre to the home in new build housing estates, done their own wireless ISP, or in any other way provides an access technology to end users.

Now you get to try and make a network out of that mess. That is, frankly, a recipe for catastrophe. BT and Virgin alone have incredibly different networks in topology, design, and overall approach. Throw in a dozen altnets, each of whom is innovating by doing things differently to how BT do it, and you’ve got a dozen different networks that are diametrically opposed in approach, at both a physical and a logical level. You’re going to have no network, just a bunch of islands that will likely fall into internal process black holes and be expensive to operate, because they won’t look like the 90% of the new operator’s infrastructure that is Openreach’s network, and so will require special consideration or major work to make them consistent.

A more sensible approach is that done in some European countries – introduce a heavily regulated franchising market. Carve the market up to enable effective competition in services. Don’t encourage competition on territory so much – take that out of the equation by protecting altnets from the national operator where they’re best placed to provide services, and making it clear where the national operator will go. Mandate 100% coverage within those franchise areas, and provide government support to achieve that goal (the current Universal Service Obligation model goes some way towards this). Heavier regulation of franchise operators would be required but this is already largely accounted for under Significant Market Power regulations.

Nationalising Openreach within that framework would make some sense. It’d enable some competition in the markets, which would be a good thing, and it’d ensure that there is a national operator who would go and build the networks nobody could do on even a subsidised commercial basis. That framework would also make state aid easier to provide to all operators, which would further help. Arguably, though, you don’t need to nationalise Openreach – just tighten up regulation and consider more subsidies.

This sort of approach was costed in the same report that Labour appear to be using, which Frontier Economics did for Ofcom as part of the Future Telecoms Infrastructure Review. It came out broadly equivalent in cost and outcomes.

But I do want free broadband…

So that brings us to the actual pledge, which was free broadband for everyone. The “for everyone” bit is what we’ve just talked about.

If you’ve got that franchise model then that’s quite a nice approach to enable this sort of thing, because the government can run its own ISP – with its own internet edge, peering, etc – and simply hook up to all the franchise operators and altnets. Those operators would still charge for the service, with government footing the bill (in the case of the state operator, the government just pays itself – no money actually changes hands). The government just doesn’t pass the bill on to end-users. You’d probably put that service in as a “basic superfast access” service around 30Mbps (symmetrical if the infrastructure supports it).

This is a really good model for retail ISPs because it means that infrastructure owners can compete on price and quality (of service and delivery) but are otherwise equivalent to use and would use a unified technical layer to deliver services. The connection between ISPs and operators would still have to be managed and maintained – that backhaul link wouldn’t come from nowhere – but this can be solved. Most large ISPs already do this or buy services from Openreach et al, and this could continue.

There’d still be a place for altnets amidst franchise operators, but they’d be specialised and narrow, not targeting 100% coverage; a model where there is equal competition for network operators would be beneficial to this and help to encourage further innovation in services and delivery. You’d still get people like Hyperoptic doing tower blocks, business-focused unbundlers going after business parks with ultrafast services, and so on. By having a central clearing house for ISPs, those infrastructure projects would suddenly be able to provide services to BT Retail, Zen, TalkTalk, and so on – widening the customer base and driving all the marketing that BT Retail and others do into commercial use of the best infrastructure for the end-user and retailer. This would be a drastic shake-up of the wholesale market.

Whether ISPs could effectively compete with a free 30Mbps service is, I think, a valid concern. It might be better to drop that free service down to 10Mbps – still enough for everyone to access digital services and enable digital inclusion, but slow enough to give heavier users a reason to pay for a service and so support the infrastructure. That, or the government would have to pay the equivalent of a higher service tier (or more subsidy) to keep the market viable for ISPs.

I think that – or some variant thereof – is the only practical way to have a good outcome from nationalising or part-nationalising the current telecoms market. Buying Openreach and every other network and smashing them together in the hopes of making a coherent network that would deliver good services would be mad.

What about free WiFi?

Sure, because that’s a sensible large-scale infrastructure solution. WiFi is just another bearer at some level, and you can make the argument that free internet while you’re out and about should be no different to free internet at home.

The way most “WiFi as a service” is delivered is through a “guest WiFi” type arrangement on home routers, with priority given to the customer’s traffic, so you can’t sit outside on a BTWiFi-with-FON access point and stream Netflix to the detriment of the customer whose line you’re using. Unless you nationalised the ISPs too, it’s hard to see this happening.

Free WiFi in town centres, village halls, and that sort of thing is obviously a good thing, but it still works in the franchise model.

How about Singapore-on-Thames?

Well, Singapore opted to do full fibre back in 2007 and were done by about 2012 – but they are a much smaller nation with no “hard to reach” parts. Even the most difficult, remote areas of Singapore are areas any network operator would pounce on.

But they do follow a very similar model, except for the “free access” bit. The state operator (NetLink Trust) runs the physical network, but there are lots of ISPs who compete freely (Starhub, M1, Singtel, etc). They run all the active equipment in areas they want to operate in, and use NetLink’s fibre to reach the home. Competition shifts from the ability to deploy the last mile up to the service layer. This does mean you end up with much more in the way of triple/quad-play competition, though, since you need to compete on something when services are broadly equivalent.

It’s a good example of how the market can work, but it isn’t very relevant to the UK market as it stands today.

Privacy and security concerns

One other thing I’ve heard people talk about today is the concerns around having a government-run ISP, given the UK government’s record (Labour and Tory) of quite aggressively nasty interference with telecoms, indiscriminate data collection, and other things that China and others have cribbed off us and used to help justify human rights abuses.

Realistically, any ISP in the UK is subject to this already. Having the government run an ISP does mean that – depending on how it actually gets set up – it might be easier for them to do some of this without necessarily needing legislation to compel compliance. But the message has been clear for the last 5-10 years: if you care about privacy or security, your ISP must not be a trusted party in your threat model.

So this doesn’t really change a thing – keep encrypting everything end-to-end and promote technologies that feature privacy by design.

Is it needed? Is it desirable?

Everyone should have internet access. That’s why I keep turning up to work. It’s an absolute no-brainer for productivity (which we need to fix, as a country), and estimates from BT put the value of universal broadband somewhere in the order of £80bn.

Do we need to shake up the market right now? BT are doing about 350k homes a quarter and speeding up, so if you left them to their own devices they’d be done in, at worst, about 16-20 years. Clearly they’re aiming for 2030 or sooner anyway and are trying to scale up to that. However, that is almost all in urban areas.

Altnets and others are also making good progress and that tends to be focused on the harder-to-reach or semi-rural areas like market towns.

I think nationalising Openreach or others and radically changing how the market works is not something you’d want to do in a hurry. Moving to a better model for inter-operator competition and increasing regulation to mandate open access across all operators would clearly help the market, but it has to be done smartly.

There are other things that would help radically in deploying new networks – fixing wayleave rules is one. Major changes to help on this front have been waiting in the “when Parliament is done with Brexit” queue for over a year now.

There is still a question about how you force Openreach, or enable the market, to reach the really hard-to-reach last mile, and that’s where that £20bn number starts looking a bit thin. While the FTIR report from Frontier Economics isn’t mad, it does make the point that reaching the really hard to reach would probably blow their £20bn estimate. I think you’d easily add another £10-20bn on top to come to a sensible number for 100% coverage in practice, given the UK market as it is.

Openreach spend £2.1bn/yr on investment in their network, and have operating costs of £2.5bn/yr. At that run-rate – call it a decade of £4.6bn/yr in capex and opex to hit 2030, on top of the purchase and build costs – you’d be looking at ~£70bn, not £20bn, to buy, operate and build that network using Openreach in its current form. Labour have said £230m/yr – that looks a bit short, too.

(Since I wrote this, various industry people have chimed in with numbers between £50bn and £100bn, so that seems consistent – and the £230m/yr appears to include capital discounting, so £700m+/yr looks closer.)

The real challenge in doing at-scale fibre rollout, though, is in people. Education (particularly adult education and skills development) is lacking, and for the civil engineering side of things there has historically been a reliance on workforces drawn from across the continent as well as local workforces. Brexit isn’t going to make that easier, however soft it is.

We also don’t make fibre in the UK any more. I’ve stood at the base of dusty, long-abandoned fibre draw towers in England, now replaced by more modern systems in Europe to meet the growing demand there as it dwindled here. Almost every single piece of network infrastructure being built in the UK has come from Europe, and for at least a decade now, every single hair-thick strand of glass at the heart of modern networks of the UK has been drawn like honey from a preform in a factory in continental Europe. We may weave it into a British-made cable and blow that through British-made plastic piping, but fibre networking is an industry that relies heavily on close ties with Europe for both labour and goods (and services, but that’s another post).

Labour’s best move for the telecoms market, in my view, would be to increase regulation, increase subsidy to enable operators to go after the hardest-to-reach, and altogether ditch Brexit. Providing a free ISP on top of a working and functional telecoms market is pretty straightforward once you enable the current telecoms market to go after everyone.