Engineering FM – Part 4

Time for the exciting fourth instalment of my ongoing series on engineering at a small community radio station launching an FM service for the first time. In this post I’m going to be looking (somewhat briefly) at processing.

There’s been a place for processors for as long as radio has existed: to make sure the audio fed to a modulator doesn’t exceed the modulator’s input limits, without making the audio itself sound bad. This typically involved an automatic gain control circuit, or AGC. As time went by, these evolved into multiband AGCs, typically processing the audio in four bands: LF, LMF, HMF, and HF. Clippers and limiters were also used to protect equipment, but as stations aimed for a competitive edge, processors came to be used to make stations sound louder and punchier by maximising the amount of modulation used at any time.
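To illustrate the idea, here’s a minimal single-band gated AGC sketch in Python. The structure, thresholds, and time constants are all my own illustrative assumptions – real processors use far more sophisticated detectors – but it shows the basic mechanism: the gain drifts towards a target level while the signal is above the gate threshold, and simply holds when the signal falls below it.

```python
import numpy as np

def gated_agc(x, fs, target=0.5, gate=0.01, attack=0.010, release=0.500):
    """Toy single-band gated AGC: nudges the gain towards a target level,
    but freezes it when the input falls below the gate threshold."""
    a_att = np.exp(-1.0 / (attack * fs))   # fast envelope attack coefficient
    a_rel = np.exp(-1.0 / (release * fs))  # slow envelope release coefficient
    env, gain = 0.0, 1.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        # Peak-style envelope follower: fast up, slow down
        if mag > env:
            env = a_att * env + (1 - a_att) * mag
        else:
            env = a_rel * env + (1 - a_rel) * mag
        if env > gate:
            # Gate open: drift the gain slowly towards the target level
            gain += (target / max(env, 1e-9) - gain) * (1 - a_rel)
        # Gate closed: hold the gain, avoiding noise 'suck-up'
        out[i] = s * gain
    return out
```

During a quiet passage the envelope drops below the gate, the gain holds rather than winding up, and the noise floor isn’t dragged up to full level.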

Nowadays processors serve a dual purpose – limiting the feed to the transmitter (and in some instances ensuring compliance with broadcast modulation rules), and processing the audio to maximise loudness, often with emphasis on the bass frequencies that tend to suffer in FM broadcast.

Our processor of choice is the cheapest processor currently available (as far as I know), the BW Broadcast DSPXmini-FM SE. At £1,000 it is considerably cheaper than some processors, but still has the basic set of components found in nearly all of them – some ADCs, some DSP chips to handle the processing, a microcontroller control system, and a composite (MPX) output for the transmitter. There are a few more bits to it than that, but under the hood, that’s what’s at play.

What these parts actually do in terms of audio processing is fairly simple. Here’s a step-by-step look at the audio processing workflow:

  • Audio comes into the unit, and is either sampled (ADC) for analogue inputs, or run through a sample rate converter for digital inputs
  • A pre-emphasis filter is applied
  • A simple single-frequency high-Q bass parametric EQ is applied for a bass boost
  • The audio enters a 4-channel crossover – an LPF (Low Pass Filter) for LF, an HPF (High Pass Filter) for HF, and BPFs (Band Pass Filters) for the two MF bands
  • Each portion of the signal goes through a gated AGC (the gate stops volume suck-up during quiet periods and limits ‘pumping’ artefacts)
  • Each portion of the AGC’d signal is then limited, and a gain applied to each signal
  • The LF and LMF signals are summed, and the bass is clipped independently of the rest of the signal, before being run through an LPF
  • The HF and HMF signals are then summed with the clipped LF/LMF signal, and everything is clipped again
  • An LPF then takes off the top end, and an MPX signal is generated from the audio
  • The MPX signal is then clipped to avoid overshoots, and summed with a pilot signal at 19 kHz
  • The MPX signal is run through a DAC, summed with the SCA input, and output to the transmitter, along with a pilot output (just containing the 19 kHz reference)
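As a rough illustration of the MPX generation step above, here’s a textbook stereo multiplex sketch – not what the DSPXmini actually does internally, and the sample rate and injection levels are just conventional example values. The L+R sum sits at baseband, the L−R difference rides on a 38 kHz DSB suppressed-carrier subcarrier, and the 19 kHz pilot (at exactly half the subcarrier frequency) lets the receiver regenerate the subcarrier to decode stereo.

```python
import numpy as np

FS = 192_000          # sample rate comfortably above the ~53 kHz MPX bandwidth
PILOT_HZ = 19_000
PILOT_LEVEL = 0.09    # pilot injected at roughly 9% of peak level

def stereo_mpx(left, right):
    """Toy stereo multiplex generator: L+R at baseband, L-R on a 38 kHz
    DSB suppressed-carrier subcarrier phase-locked to the 19 kHz pilot."""
    t = np.arange(len(left)) / FS
    pilot = np.sin(2 * np.pi * PILOT_HZ * t)
    subcarrier = np.sin(2 * np.pi * 2 * PILOT_HZ * t)  # 38 kHz, 2x the pilot
    mono = 0.5 * (left + right)   # the sum signal a mono receiver hears
    side = 0.5 * (left - right)   # the difference signal for stereo decode
    return 0.9 * (mono + side * subcarrier) + PILOT_LEVEL * pilot
```

Note that with identical left and right inputs the side term vanishes, leaving just the mono signal and the pilot – which is why mono receivers are unaffected by the stereo subcarrier.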

So, quite a complex set of operations. What’s the end result? Depends entirely on the profile. These are the building blocks from which you can construct complex processes.

No one profile will fit all programme types or material types, and so we need some flexibility. In the DSPXmini-FM SE we can store 8 user profiles and select them from a trigger port on the unit via contact closures. When we’ve not selected anything it’ll revert to a generic preset we define. This isn’t perfect (and BW Broadcast do seem to be interested in letting people use their UDP API for more comprehensive processor control), but is a good starting point which gives us some flexibility.
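The selection logic amounts to something like the following sketch – the preset names and pin mapping here are purely hypothetical examples, not our actual configuration:

```python
# Hypothetical mapping of trigger-port inputs to stored presets.
# With no closure active, we fall back to the generic default.
TRIGGER_TO_PRESET = {1: "Drivetime", 2: "Specialist Music", 3: "Speech"}
DEFAULT_PRESET = "Generic"

def active_preset(closed_inputs):
    """closed_inputs: set of trigger-port inputs currently closed.
    Lowest-numbered closed input wins; no closures means the default."""
    for pin in sorted(closed_inputs):
        if pin in TRIGGER_TO_PRESET:
            return TRIGGER_TO_PRESET[pin]
    return DEFAULT_PRESET
```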

The trick, of course, as with any audio processing workflow, is not to end up processing too much. Ideally most of the work is done by the gentler AGCs, the limiters are then driven quite gently, and there’s minimal clipping afterwards. This results in a consistent station sound without overly compromising the quality or dynamic range of the audio input. So say the engineers, in any case. Then we have station managers and producers, who typically follow the mantra of louder == better. This is, of course, not always the case, but as with AM processing, there’s truth in it up to a point. The harder a transmitter is driven, the better the reception of the modulated audio will be. The trick is finding a balance between cranking everything to eleven and standing well back, and having everything set so far backed off that the transmitter rarely modulates at full power.

You can happily spend weeks or even months fine-tuning preset parameters, but it’s mostly a creative process that producers can be involved in; from an engineering standpoint, we’re only really interested in the clipper and the limiters. So long as those aren’t driven too hard and the final composite output is in compliance with our licensing, all is well.

It’s well worth having a broadcast analyzer so you can tell what you’re actually doing – I’m currently flying blind, but hopefully will get one in the near future; they’re very affordable little boxes that provide some information on what the MPX is doing. The plan is to tie the analyzer into our Nagios install so we can monitor our regulatory compliance and modulation parameters, and have any violations trigger alarms.
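Nagios checks are just small programs that print a status line and exit 0, 1, or 2 for OK, WARNING, or CRITICAL, so the compliance check could look something like the sketch below. The warning threshold is a made-up figure and how the reading actually gets polled from the analyzer is still to be worked out; the 75 kHz value is the usual FM peak deviation limit.

```python
import sys

DEVIATION_LIMIT_KHZ = 75.0  # usual FM peak deviation limit
WARN_AT_KHZ = 72.0          # hypothetical early-warning threshold

def check_deviation(peak_khz):
    """Return an (exit_code, status_line) pair in Nagios plugin style,
    including perfdata after the pipe for graphing."""
    perf = f"deviation={peak_khz}khz;{WARN_AT_KHZ};{DEVIATION_LIMIT_KHZ}"
    if peak_khz > DEVIATION_LIMIT_KHZ:
        return 2, f"CRITICAL: peak deviation {peak_khz} kHz | {perf}"
    if peak_khz > WARN_AT_KHZ:
        return 1, f"WARNING: peak deviation {peak_khz} kHz | {perf}"
    return 0, f"OK: peak deviation {peak_khz} kHz | {perf}"

if __name__ == "__main__" and len(sys.argv) > 1:
    # In practice the reading would be polled from the analyzer;
    # here it's just taken from the command line for illustration.
    code, message = check_deviation(float(sys.argv[1]))
    print(message)
    sys.exit(code)
```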

So, that’s the processing side of things, at least as an overview. There’s a huge amount more to say on this topic, so I’ll end here. Part five will look at powering equipment and monitoring uninterruptible power supply systems.