Rusty all-sky cameras

All-sky cameras are a lovely idea, especially if you’re someone like me who enjoys hiding in the warm most of the time and letting the computers look after the telescope in the depths of winter. I’ve been enjoying some time (when the clouds permit) this summer looking at things directly, but deep-space-object imaging in summer is brutal – short nights mean all the setup and teardown buys you only an hour or two of adequate darkness.

An all-sky camera is just a camera with a fisheye lens looking up 24/7/365, taking long exposures in darkness (typically 30 seconds or more) to see stars. You can analyse these exposures to find out about sky conditions, use them as a quick visual guide, and spot meteors. And since they live outside all the time, there’s no setup or teardown!

So I figured I’d have a go at building an all-sky camera from scratch. There were a few reasons I wanted to do this. My mirror project is still going (slowly) when I get the headspace, but it takes a lot of messy setup and teardown, so it’s hard to dip into in the evenings when I have time. But mostly I was curious to dig a bit more into INDI, the protocol I use to control my telescope.

INDI is the instrument-neutral device interface, and it is XML over TCP. It’s quite an old design, using fairly old technologies. However, it’s well-liked by manufacturers as it has a quite simple C API, a very flexible abstraction model, and works well over networks as well as for local use. indiserver is the reference server implementation and has dozens of drivers for cameras, mounts, observatory dome motors, and everything in between. I can write C, but if I can avoid it, I always try to – so rather than use the libindi library or the pyindi libraries (which depend on the C library) I thought I might have a go at writing a new implementation from scratch.
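To give a flavour of just how bare-bones the protocol is, here’s a minimal sketch of a client saying hello to a local indiserver – it opens a TCP connection on the default port (7624) and sends a getProperties request asking the server to enumerate its devices’ properties. This is purely illustrative, not code from my actual client; proper XML parsing and error handling are conspicuously absent.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // indiserver listens on TCP port 7624 by default; adjust the host as needed.
    let mut stream = TcpStream::connect("127.0.0.1:7624")?;

    // A client kicks things off by asking the server to describe its devices;
    // the server replies with a stream of def*Vector elements.
    stream.write_all(b"<getProperties version=\"1.7\" />")?;

    // Read whatever XML comes back. A real client would feed this into a
    // streaming parser rather than a single fixed-size buffer.
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?;
    println!("{}", String::from_utf8_lossy(&buf[..n]));
    Ok(())
}
```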

I’ve been tinkering with Rust for a while now, and wrote my first serious project in it last month – a parser for Telcordia’s slightly-obscure optical time domain reflectometry (OTDR) format. I found it quite jarring coming from Python, even having written C/C++ in the past, but after a while I realised I was making better structural decisions about my code than I was in Python et al. The Rust compiler is aggressively studious, and forces you to think things through – but after a while getting used to the style, this becomes something quite conversational, rather than adversarial.

Aside from writing some Rust I needed some hardware, so through a half-dozen eBay orders and bits I had lying around, I assembled a “v1” camera. This contained a Pi 4, a WiFi radio with a high-gain antenna, an ASI120MC camera I was using for guiding, and a 12V to 5V step-down board. The power was supplied by a 12V PSU in a waterproof box sat next to this.

To look out, the enclosure got a dome thrown on top – an old CCTV camera dome, with silicone sealant around the rim to make a good watertight fit. I didn’t get the hole position spot on, but it was close enough and seals up fine.

Armed with some hardware, I was ready to test. My first test images revealed some horrendous hot pixels – a known issue with un-cooled cameras – and some clouds. An excellent first step!

One frame from the camera
One frame, gently post-processed

Taking frames over the course of an evening and assembling them into a video yielded a fairly nice result. To drive the camera I used KStars, with indiserver running on the Pi 4 to control the hardware.

I assembled the above video in PixInsight, having done hot pixel removal and a few other bits of post-processing. Not really a viable approach for 24/7 operation!

v2 – Brown Noctuas – not just for PCs! They also do tiny ones.
Inlet and outlet, with grilles to keep the smaller bits of wildlife out (hopefully)

Hot pixels are more or less guaranteed on uncooled cameras, but the box was getting quite hot. So I’ve now wrapped it in shiny aluminium foil to increase its solar reflectance index, and added a 40mm fan to circulate air through the enclosure (with a 40mm opening on the far side, camera in the middle – some quick CFD analysis suggested this as a sensible approach).

This definitely helps matters somewhat, though in winter a dew heater will be required, and rain removal is something that bears further study – my initial approach involves some G Techniq car windscreen repellent.

I’ve now started on an INDI client implementation in Rust. This has been a challenge so far. For starters, Rust doesn’t have many XML parsers/generators, and those that exist aren’t well documented. However, having gotten the basics working, the way that INDI works presents some challenges. The protocol essentially shoves XML hierarchies around and then makes updates to elements within that hierarchy, and expects clients to trigger events or update their own state in response to changes within that hierarchy. There’s very little protocol-defined convention and a lot of unwritten expectations.
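To make that concrete, here’s roughly how I’m sketching the property hierarchy at the moment – a tree of devices, each holding named property vectors of the protocol’s handful of types (text, number, switch, light, BLOB). The shapes here are illustrative rather than a final design, and the device and property names in the example are just stand-ins for whatever a real driver defines.

```rust
use std::collections::HashMap;

// The INDI property value types, roughly. Each vector carries named elements
// plus metadata such as a state (Idle/Ok/Busy/Alert) and permissions, most of
// which is omitted here for brevity.
#[derive(Debug, Clone)]
enum PropertyValue {
    Text(String),
    Number(f64),
    Switch(bool),
    Light(String),
    Blob(Vec<u8>),
}

#[derive(Debug, Clone)]
struct PropertyVector {
    state: String,
    elements: HashMap<String, PropertyValue>,
}

// Devices are just named bags of property vectors: a def*Vector creates an
// entry, and a later set*Vector mutates the matching elements.
#[derive(Debug, Default)]
struct Device {
    properties: HashMap<String, PropertyVector>,
}

#[derive(Debug, Default)]
struct IndiWorld {
    devices: HashMap<String, Device>,
}

impl IndiWorld {
    // Apply an update to an existing element, creating nothing. What to do
    // when an update arrives for a property that was never defined is exactly
    // the sort of unwritten expectation the protocol leaves open.
    fn update(&mut self, device: &str, vector: &str, element: &str, value: PropertyValue) -> bool {
        self.devices
            .get_mut(device)
            .and_then(|d| d.properties.get_mut(vector))
            .and_then(|v| v.elements.get_mut(element))
            .map(|slot| *slot = value)
            .is_some()
    }
}

fn main() {
    let mut world = IndiWorld::default();

    // Pretend we've just parsed a defNumberVector describing a camera's exposure.
    let mut exposure = PropertyVector {
        state: "Idle".into(),
        elements: HashMap::new(),
    };
    exposure
        .elements
        .insert("CCD_EXPOSURE_VALUE".into(), PropertyValue::Number(30.0));
    world
        .devices
        .entry("ASI120MC".into())
        .or_default()
        .properties
        .insert("CCD_EXPOSURE".into(), exposure);

    // ...and now a setNumberVector arrives updating it.
    let applied = world.update(
        "ASI120MC",
        "CCD_EXPOSURE",
        "CCD_EXPOSURE_VALUE",
        PropertyValue::Number(60.0),
    );
    println!("update applied: {applied}");
}
```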

This makes for a very flexible protocol, but a very uncertain client. This doesn’t map into Rust terribly well, or at least requires a more complex level of Rust than I’ve used to date! It does also explain a great deal about some of the challenges I’ve had with stable operation using INDI clients of various forms. There’s such a thing as too much rigidity in abstractions, but there’s definitely such a thing as too little.

So, next step is getting the basic client working and stable with good and verifiable handling of errors. I’m intending to fuzz my client in the same way I’ve fuzzed otdrs, but also to have extensive test cases throughout to ensure that I can replicate well-behaved servers as well as the full gamut of odd network errors and conditions that can arise in INDI ecosystems. Hopefully I’ll end up with a client library for INDI which is fairly bulletproof – and then I can start writing an application!
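As a sketch of the shape those tests will take – with a stub parse function standing in for whatever the real top-level entry point ends up being, so the snippet is self-contained – the important property is that malformed or truncated input comes back as an error rather than a panic:

```rust
// Illustrative only: parse_message is a stand-in stub so the example compiles
// on its own (drop it into lib.rs and run `cargo test`). In the real client it
// would hand the input to an XML parser and map its failures into ParseError.
#[derive(Debug)]
pub struct IndiMessage; // placeholder for a fully parsed protocol message

#[derive(Debug)]
pub struct ParseError(pub String);

pub fn parse_message(input: &str) -> Result<IndiMessage, ParseError> {
    // Stub logic for the sketch: accept anything that looks like a complete
    // XML element, reject everything else.
    if input.trim_start().starts_with('<') && input.trim_end().ends_with('>') {
        Ok(IndiMessage)
    } else {
        Err(ParseError("not a complete XML element".into()))
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn truncated_element_is_an_error_not_a_panic() {
        // Half a setNumberVector, as might arrive over a dropped connection.
        let junk = r#"<setNumberVector device="CCD Simulator" name="CCD_EXPOSURE"#;
        assert!(parse_message(junk).is_err());
    }
}
```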

My initial plan is to write something nice and basic that just does a fixed exposure and stores the raw FITS files on disk in a sensible layout. But once that’s done, I want to tackle image analysis for auto-exposure and dark frame processing. This will involve parsing FITS frames, doing some processing of them, and writing either FITS or PNG files out for storage.
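The auto-exposure piece is conceptually simple, even if the tuning won’t be. Something along these lines – scale the next exposure so the mean pixel value heads towards a target, with damping and clamping so one bright frame (headlights, the Moon) can’t swing things to extremes. The target value and bounds here are placeholders for illustration, not measured numbers:

```rust
// Hedged sketch of the auto-exposure idea: nudge the next exposure time so the
// frame's mean pixel value lands near a target. target_mean and the clamp
// bounds are arbitrary placeholders, not settled design decisions.
fn next_exposure(current_secs: f64, frame: &[u16]) -> f64 {
    let target_mean = 8_000.0; // illustrative target for a 16-bit sensor
    let mean: f64 =
        frame.iter().map(|&p| p as f64).sum::<f64>() / frame.len().max(1) as f64;

    // Scale proportionally, but damp the step and clamp the result so a single
    // unusual frame can't drive the exposure to extremes.
    let ratio = (target_mean / mean.max(1.0)).clamp(0.5, 2.0);
    (current_secs * ratio).clamp(0.001, 60.0)
}

fn main() {
    // A fake, mostly-dark frame: the suggestion is to lengthen the exposure.
    let dark_frame = vec![500u16; 1_000];
    println!("next exposure: {:.2}s", next_exposure(30.0, &dark_frame));
}
```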

It’s definitely an interesting challenge – but it feels like a tractable way to extend my Rust knowledge. I’ve been really taken with the language as a tool for systems programming, which overlaps quite well with astronomy software. We generally want high levels of reliability, do plenty of maths and processing which benefits from low-level language performance, and increasingly do networking or work with remote hardware. It’s a good fit, it feels, and just needs some more time invested in it.

Nationalise Openreach?

Disclaimer: I am Chief Engineer for Gigaclear Ltd, a rural-focused fibre-to-the-home operator with a footprint in excess of 100,000 homes in the south of the UK. So I have a slight interest in this, but also know a bit about the UK market. What I’m writing here is my own thoughts, though, and doesn’t in the least bit represent company policy or direction.

Labour has recently proposed, as an election pledge, to nationalise Openreach and make them the national monopoly operator for broadband, and to give everyone in the UK free internet by 2030.

The UK telecoms market today is quite fragmented and complex, and so this is not the obvious win that it might otherwise appear to be.

In lots of European markets there’s a franchising model, and we do this in other utility markets – power being an excellent example. National Grid is a private company that runs the transmission networks, and Distribution Network Operators (DNOs) like SSE, Western Power, etc run the distribution networks in their regions. All are private companies with no shares held by government – but the market is heavily regulated, and things like 100% coverage at reasonable cost are built in.

The ideal outcome for the UK telecoms market would clearly have been for BT (as it was then) never to have been privatised, and for the government to simply decide on a 100% fibre-to-the-home coverage model. This nearly happened, and that it didn’t is one of the great tragedies in the story of modern Britain; if it had, we’d be up at the top of the leaderboard on European FTTH coverage. As it is, we only just made it onto the leaderboard this year.

But that didn’t happen – Thatcher privatised it, and regulation was quite light-touch. The government didn’t retain majority control, and BT’s shareholders decided to sweat the asset they had, investing strategically in R&D to extend its life, along with some national network build-out. FTTC/VDSL2 was the last sticking plaster that made economic sense for copper after ADSL2+; LR-VDSL and friends might have bought some more time if the case for retiring copper still rested on performance alone.

As it is, enough people have been demonstrating the value of FTTH for long enough now that the focus has successfully shifted from “fast enough” to “long-term enough”. New copper technologies won’t last the next decade, and have huge reliability issues. Fibre to the home is the only long-term option to meaningfully improve performance, coverage, etc, especially in rural areas.

So how do we go about fixing the last 5%?

First, just so we’re clear, there are layers to the UK telecoms market – you have infrastructure owners who build and operate the fibre or copper. You have wholesale operators who provide managed services like Ethernet across infrastructure – people like BT Wholesale. Then you have retail operators who provide an internet connection – these are companies like BT Retail, Plusnet, TalkTalk, Zen, Andrews & Arnold, Sky, and so on. To take one example, Zen buy wholesale services from BT Wholesale to get bits from an Openreach-provided line back to their internet edge site. Sometimes Zen might go build their own network to an Openreach exchange so they effectively do the wholesale bit themselves, too, but it’s the same basic layers. We’re largely talking about the infrastructure owners below.

The issue is always that the last 5-10% of the network – the hardest-to-reach places – will never make commercial sense to build, because it’s really expensive to do. Gigaclear’s model and approach is entirely designed around that last 5%, so we can make it work, but it takes a long-term view to do it. The hard-to-reach is, after all, hard-to-reach.

But let’s say we just nationalise Openreach. Now Openreach, in order to reach the hardest-to-reach, will need to overbuild everyone else. That includes live state-aid funded projects. While it’s nonsense to suggest that state aid is a reason why you couldn’t buy Openreach, it is a reason why you couldn’t get Openreach to go overbuild altnets in receipt of state aid. It’d also be a huge waste of money – billions already spent would simply be spent again to achieve the same outcome. Not good for anyone.

So let’s also say you nationalise everyone else, too – buy Virgin Media, Gigaclear, KCOM, Jersey Telecom, CityFibre, B4RN, TalkTalk’s fibre bits, Hyperoptic, and every startup telecom operator that’s built fibre to the home in new build housing estates, done their own wireless ISP, or in any other way provides an access technology to end users.

Now you get to try and make a network out of that mess. That is, frankly, a recipe for catastrophe. BT and Virgin alone have incredibly different networks in topology, design, and overall approach. Throw in a dozen altnets, each of whom is innovating by doing things differently to how BT do it, and you’ve got a dozen different networks that are diametrically opposed in approach, both at a physical and logical level. You’re going to have no network, just a bunch of islands that will likely fall into internal process black holes and be expensive to operate, because they won’t look like the 90% of the new operator’s infrastructure that is Openreach’s network, and so will need special consideration or major work to make them consistent.

A more sensible approach is that done in some European countries – introduce a heavily regulated franchising market. Carve the market up to enable effective competition in services. Don’t encourage competition on territory so much – take that out of the equation by protecting altnets from the national operator where they’re best placed to provide services, and making it clear where the national operator will go. Mandate 100% coverage within those franchise areas, and provide government support to achieve that goal (the current Universal Service Obligation model goes some way towards this). Heavier regulation of franchise operators would be required but this is already largely accounted for under Significant Market Power regulations.

Nationalising Openreach within that framework would make some sense. It’d enable some competition in the markets, which would be a good thing, and it’d ensure that there is a national operator who would go and build the networks nobody could do on even a subsidised commercial basis. That framework would also make state aid easier to provide to all operators, which would further help. Arguably, though, you don’t need to nationalise Openreach – just tighten up regulation and consider more subsidies.

This sort of approach was costed in the same report that Labour appear to be using, which Frontier Economics did for Ofcom as part of the Future Telecoms Infrastructure Review. It came out broadly equivalent in cost and outcomes.

But I do want free broadband…

So that brings us to the actual pledge, which was free broadband for everyone. The “for everyone” bit is what we’ve just talked about.

If you’ve got that franchise model then that’s quite a nice approach to enable this sort of thing, because the government can run its own ISP – with its own internet edge, peering, etc – and simply hook up to all the franchise operators and altnets. Those operators would still charge for the service, with government footing the bill (in the case of the state operator, the government just pays itself – no money actually changes hands). The government just doesn’t pass the bill on to end-users. You’d probably put that service in as a “basic superfast access” service around 30Mbps (symmetrical if the infrastructure supports it).

This is a really good model for retail ISPs because it means that infrastructure owners can compete on price and quality (of service and delivery) but are otherwise equivalent to use and would use a unified technical layer to deliver services. The connection between ISPs and operators would still have to be managed and maintained – that backhaul link wouldn’t come from nowhere – but this can be solved. Most large ISPs already do this or buy services from Openreach et al, and this could continue.

There’d still be a place for altnets amidst franchise operators, but they’d be specialised and narrow, not targeting 100% coverage; a model where there is equal competition for network operators would be beneficial to this and help to encourage further innovation in services and delivery. You’d still get people like Hyperoptic doing tower blocks, business-focused unbundlers going after business parks with ultrafast services, and so on. By having a central clearing house for ISPs, those infrastructure projects would suddenly be able to provide services to BT Retail, Zen, TalkTalk, and so on – widening the customer base and driving all the marketing that BT Retail and others do into commercial use of the best infrastructure for the end-user and retailer. This would be a drastic shake-up of the wholesale market.

Whether or not ISPs could effectively compete with a 30Mbps free service is, I think, a valid concern. It might be better to drop that free service down to 10Mbps – still enough for everyone to access digital services and to enable digital inclusion, but slow enough to give heavier users a reason to pay for a service and so support the infrastructure. That, or the government would have to pay the equivalent of a higher service tier (or more subsidy) to ensure viability in the market for ISPs.

I think that – or some variant thereof – is the only practical way to have a good outcome from nationalising or part-nationalising the current telecoms market. Buying Openreach and every other network and smashing them together in the hopes of making a coherent network that would deliver good services would be mad.

What about free WiFi?

Sure, because that’s a sensible large-scale infrastructure solution. WiFi is just another bearer at some level, and you can make the argument that free internet while you’re out and about should be no different to free internet at home.

The way most “WiFi as a service” is delivered is through a “guest WiFi” type arrangement on home routers, with priority given to the customer’s traffic, so you can’t sit outside on a BTWiFi-with-FON access point and stream Netflix to the detriment of the customer whose line you’re using. Unless you nationalised the ISPs too, it’s hard to see this happening effectively.

Free WiFi in town centres, village halls, and that sort of thing is obviously a good thing, but it still works in the franchise model.

How about Singapore-on-Thames?

Well, Singapore opted to do full fibre back in 2007 and were done by about 2012 – but they are a much smaller nation with no “hard to reach” parts. Even the most difficult, remote areas of Singapore are areas any network operator would pounce on.

But they do follow a very similar model, except for the “free access” bit. The state operator (NetLink Trust) runs the physical network, but there are lots of ISPs who compete freely (Starhub, M1, Singtel, etc). They run all the active equipment in areas they want to operate in, and use NetLink’s fibre to reach the home. Competition shifts from the ability to deploy the last mile up to the service layer. This does mean you end up with much more in the way of triple/quad-play competition, though, since you need to compete on something when services are broadly equivalent.

It’s a good example of how the market can work, but it isn’t very relevant to the UK market as it stands today.

Privacy and security concerns

One other thing I’ve heard people talk about today is the concerns around having a government-run ISP, given the UK government’s record (Labour and Tory) of quite aggressively nasty interference with telecoms, indiscriminate data collection, and other things that China and others have cribbed off us and used to help justify human rights abuses.

Realistically – any ISP in the UK is subject to this already. Having the government run an ISP does mean that – depending on how it actually gets set up – it might be easier for them to do some of this stuff without necessarily needing the legislation to compel compliance. But the message has been clear for the last 5-10 years: if you care about privacy or security, your ISP must not be a trusted party in your threat model.

So this doesn’t really change a thing – keep encrypting everything end-to-end and promote technologies that feature privacy by design.

Is it needed? Is it desirable?

Everyone should have internet access. That’s why I keep turning up to work. It’s an absolute no-brainer for productivity (which we need to fix, as a country), and some estimates from BT put the value of universal broadband in the order of £80bn.

Do we need to shake up the market right now? BT are doing about 350k homes a quarter and speeding up, so left to their own devices they’d be done in at worst about 16-20 years. Clearly they’re aiming for 2030 or sooner anyway and are trying to scale up to that. However, that is almost all in urban areas.

Altnets and others are also making good progress and that tends to be focused on the harder-to-reach or semi-rural areas like market towns.

I think that it’s unlikely that nationalising Openreach or others and radically changing how the market works is something you’d want to do in a hurry. Moving to a better model for inter-operator competition and increasing regulation to mandate open access across all operators would clearly help the market, but it has to be done smartly.

There are other things that would help radically in deploying new networks – fixing wayleave rules is one. Major changes to help on this front have been waiting in the “when Parliament is done with Brexit” queue for over a year now.

There is still a question about how you force Openreach, or enable the market, to reach the really hard-to-reach last mile, and that’s where that £20bn number starts looking a bit thin. While the FTIR report from Frontier Economics isn’t mad, it does make the point that reaching the really hard to reach would probably blow their £20bn estimate. I think you’d easily add another £10-20bn on to come to a sensible number for 100% coverage in practice given the UK market as it is.

Openreach spend £2.1bn/yr on investment in their network, and have operating costs of £2.5bn/yr. At current run-rate that means you’d be looking at ~£70bn, not £20bn, to buy, operate and build that network using Openreach in its current form. Labour have said £230m/yr – that looks a bit short, too.
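For what it’s worth, my back-of-envelope for that ~£70bn runs roughly as follows – a decade of running and building at the current rate, plus a purchase price somewhere in the £20-25bn range (my rough assumption, not an official valuation):

$$
(\pounds 2.1\text{bn} + \pounds 2.5\text{bn})\ \text{per year} \times 10\ \text{years} \approx \pounds 46\text{bn};
\qquad
\pounds 46\text{bn} + \pounds 20\text{–}25\text{bn (purchase)} \approx \pounds 70\text{bn}
$$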

(Since I wrote this, various industry people have chimed in with numbers between £50bn and £100bn, so this seems a consistent number – the £230m/yr appears to include capital discounting, so £700m+/yr looks closer)

The real challenge in doing at-scale fibre rollout, though, is in people. Education (particularly adult education and skills development) is lacking, and for the civil engineering side of things there has historically been a reliance on workforces drawn from across the continent as well as local workforces. Brexit isn’t going to make that easier, however soft it is.

We also don’t make fibre in the UK any more. I’ve stood at the base of dusty, long-abandoned fibre draw towers in England, now replaced by more modern systems in Europe to meet the growing demand there as it dwindled here. Almost every single piece of network infrastructure being built in the UK has come from Europe, and for at least a decade now, every single hair-thick strand of glass at the heart of modern networks of the UK has been drawn like honey from a preform in a factory in continental Europe. We may weave it into a British-made cable and blow that through British-made plastic piping, but fibre networking is an industry that relies heavily on close ties with Europe for both labour and goods (and services, but that’s another post).

Labour’s best move for the telecoms market, in my view, would be to increase regulation, increase subsidy to enable operators to go after the hardest-to-reach, and altogether ditch Brexit. Providing a free ISP on top of a working and functional telecoms market is pretty straightforward once you enable the current telecoms market to go after everyone.

An evening in the hobby

I’ve gotten into quite a good routine, sequence, whatever you might call it, for my hobby. While it’s an excellent hobby when it comes to complex things to fiddle around with, once you actually get some dark, clear skies, you don’t want to waste a minute, particularly in the UK.

Not having an observatory means a lot of my focus is on a quick setup, but it also means I’ve gotten completely remote operation (on a budget) down pretty well.

I took a decision to leave my setup outdoors some time ago, and bought a good quality cover rated for 365-days-of-the-year protection from Telegizmos. So far it’s survived, despite abuse from cats and birds. The telescope, with all its imaging gear (most of the time), sits underneath on its stock tripod, on some anti-vibration pads from Celestron. I also got some specialist insurance and set up a camera nearby – it’s pretty well out of the way and past a bit of security anyway, but it doesn’t hurt to be careful. Setting up outside has been the best thing I’ve done so far, and is further evidence in support of building an observatory!

The telescope, illuminated by an oversize flat frame generator, after a night of imaging.

Keeping the camera mounted means I can re-use flat frames between nights, though occasionally I will take it out to re-collimate if it’s been a while. The computer that connects to all the hardware remains, too – a Raspberry Pi 4 mounted in a Maplin project case on the telescope tube.

This means everything stays connected and all I have to do is walk out, plug a mains extension cable in, bring out a 12V power supply, and plug in two cables – one for the mount, and one for the rest. Some simple snap-fit connector blocks distribute the 12V and 5V supplies around the various bits of equipment on the telescope.

That makes for quite a calm setup, which I can do hours in advance of darkness on these early-season nights. The telescope’s already cooled down to ambient, so there’s no delay there, either. I’ve already taken steps to better seal up my telescope tube to protect against stray light, which also helps keep any would-be house guests out.

My latest addition to the setup is an old IP camera so I can remotely look at the telescope position. This eliminates the need for me to take my laptop outside whenever the telescope is moving – I can confirm the position of the telescope and hit the “oh no please stop” button if anything looks amiss, like the telescope swinging towards a tripod leg.

I use the KStars/Ekos ecosystem for telescope control and imaging, so this all runs on a Linux laptop which I usually VNC into from my desktop. This means I can pull data off the laptop as I go and work on e.g. calibration of data on the desktop.

A normal evening – PixInsight, in this case looking at some integration approaches for dark frames, and VNC into KStars/Ekos, with PHD2 guiding, and a webcam view of the telescope

So other than 10 minutes at the start and 10 minutes in the early hours of the following morning my observing nights are mostly spent indoors sat in front of a computer. That makes for a fairly poor hobby in terms of getting out of my seat and moving around, but a really good hobby in terms of staying warm!

I do often wander out for half an hour or so and try to get some visual observation in, using a handheld Opticron monocular. Honestly, the monocular isn’t much use – it’s hard to hold steady enough, and the magnification is low. Just standing out under the stars and trying to spot some constellations and major stars is satisfying, but I’d quite like to get a visual telescope I can leave set up and use while the imaging rig is doing its thing. That’s a fair bit of time+money away though, and I’d prefer to get the observatory built first. On a dark night, lying down and staring up at the Milky Way is quite enough to be getting on with.

A typical night, though, involves sitting indoors with the telescope under its cover, and yelling at clouds or the moon (which throws out enough light to ruin contrast on deep space objects).

On that basis I’ve been thinking about other ways to enjoy the hobby that don’t involve dark, clear nights. Some good narrowband filters would let me image on moonlit nights, but they run into the many hundreds of pounds, putting a set of Ha/OIII/SII filters at around £1k.

Narrowband image, shot in the hydrogen alpha emission line using a Baader 7nm filter – cheap but cheerful – of some of the Elephant’s Trunk Nebula; ~7.5 hours of capture

Making my own telescope, though, struck me as a fun project. It’s done quite frequently, but the bit that most interested me is mirror making. That’s quite a cheap project (£100 or so) to get started on and should take a few months of evenings, so it ought to keep me busy for a while – so that’s the next thing to do. I’ve decided to start with an 8″ f/5 mirror – not only is it quite a small and simple mirror, but I could also place it directly into my existing telescope without having to spend any more money. I’ve been doing lots of research, reading books on the topic and watching videos from other mirror-makers.

And that is definitely one of the recurring themes throughout the hobby – there’s always something to improve on, and nothing is trivially grasped. Everything takes a bit of commitment and thought. I think that’s one of the reasons I enjoy it so much.