Congratulations! You’ve just been hired to record the crowd at the Super Bowl!
The producer wants you to figure out how loud each section – and ideally each fan – is cheering, so that they can better choose where to station the T-shirt cannons. They don’t want to know the actual dialogue – they’re not the NSA – only each fan’s loudness, or amplitude*. You don’t even have to record the fans’ cheering over time – just how loud each one is, on average, over the full game.
*technically, loudness is a psychoacoustic property of the amplitude, the average intensity, and the frequency spectrum of the signal, but in this article we’re assuming (for the purposes of keeping it relatively short) that they’re essentially the same.
The only problem?
You’re on the other side of the field, 160 feet away from the nearest spectator. And you have one microphone with which to record all twenty thousand fans.
Hi! This is an introduction to astronomical interferometry and radio telescopy! This article might assume that you have some familiarity with Fourier transforms, but that’s about it. This article is currently in beta; if you find any issues, errors, or omissions, please let me know through the comments below. Thanks!
The basic technique’s simple: point your microphone at a fan, record the average loudness over some length of time, then repeat for every other person there. If you record each direction for half a second, you’ll finish before the end of the game. (The resulting loudness image will be very noisy – the block of really enthusiastic Patriots fans in the center might not be quite as clear – because the rare occurrence of touchdowns violates our assumption that each fan is shouting at the same volume throughout the entire game. Perhaps pinpointing the location of vuvuzelas at the World Cup would be a better application of our technique.)
Unfortunately, this only works if our microphone only records sounds coming from within one degree of an object’s source. (In other words, if our microphone is so directionally sensitive that rotating it 1/360th of a full turn gives us a totally different sound.)
Unsurprisingly, most microphones are specifically designed to avoid this! (Most of the time, you don’t want the audio on a take to be ruined because an actor was two inches away from their mark.)
Omnidirectional microphones have no directional variation (hence their name), and so you have no choice but to record the entire crowd. Cardioid mics have some directionality (to a first approximation, they record things within a 120-degree angle), but they’re still not good enough.
We can do slightly better, and use a parabolic reflector to bounce all sound waves coming from one direction to a focal point – where we’ll place the microphone. All sounds not coming from that direction will, when reflected, miss the microphone and essentially be rejected.
In practice, though, the ‘beam’ of sound we’ll be able to hear won’t be a perfect cylinder, but will instead spread out into a sort of cone – not necessarily because of construction, but because sound is (for all practical purposes within the scope of this article) a wave instead of a constant flow of information-carrying particles (the corpuscular theory).
Because of this, the best parabolic microphones we could conceivably bring to the game would have to record at least a 48.4-degree range of sound, blurring out our map of loudness so much it would be nearly unusable.
And that’s why the NFL just gives players microphones.
Astronomers have the same problem: Although most telescopes use parabolic mirrors (or complicated lens designs), and should hypothetically have the ability to provide perfect focus, diffraction blurs out the image, preventing us from seeing the intricate details of far-away galaxies.
We’ve been talking about diffraction a lot already, and we’ll be talking about it a lot more, so at this point we should probably actually describe diffraction.
If you’re already familiar with the mechanics of lens and aperture diffraction, you can probably skip this section. Diffraction was actually relatively difficult for me to understand the first time around, so I’m including this here so that others may have a (hopefully better) explanation of what diffraction is.
Diffraction is a property of all electromagnetic waves, the properties of which (when averaged over time) are described by the Helmholtz equation,

∇²A + k²A = 0

where ∇² is the Laplacian, k is the wavenumber (2π divided by the wavelength), and A is the amplitude.
Realistically, though, while the Helmholtz equation does technically perfectly describe diffraction (and can actually be directly used to optimally place WiFi antennas), it’s not particularly intuitive. Here’s a slightly better explanation:
Essentially, diffraction is the bending of waves around objects. It’s the reason why you can hear someone speaking when echoes alone wouldn’t carry the sound to you – such as on the other side of a pillar, or in an anechoic room:
But why do waves bend in the first place?
Intuitively, waves bend because they have to – because (at least in the case of pressure waves) this scenario depicting a wave passing through an aperture doesn’t make sense:
Generally, the wider the aperture is, the less diffraction exists. When your aperture’s less than one wavelength across, the wave on the other side is essentially identical to a single point source, while if your aperture’s infinitely many wavelengths across, the resulting waveform’s identical on both sides of the aperture (well, insofar as an infinitely wide aperture is distinguishable from open space). In between, the mechanics of waves passing through apertures are dictated by the Huygens-Fresnel principle: wave propagation through an aperture (in the ‘forward’ direction) can be expressed (but not necessarily described) as a sum of spherical waves emitted from every point across the aperture.
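We can sketch the Huygens-Fresnel principle numerically (the aperture widths, wavelength, and source count below are arbitrary values of mine, not anything from a real instrument): fill an aperture with point sources, sum their far-field contributions at each angle, and watch the sub-wavelength aperture radiate like a single point source while the wide one forms a tight forward beam.

```python
import numpy as np

def aperture_pattern(width, wavelength, angles, n_sources=200):
    # Model the aperture as a row of Huygens point sources; in the far
    # field, a source at position x contributes a phase of k*x*sin(theta).
    k = 2 * np.pi / wavelength
    xs = np.linspace(-width / 2, width / 2, n_sources)
    phases = k * np.outer(np.sin(angles), xs)
    field = np.exp(1j * phases).sum(axis=1) / n_sources
    return np.abs(field) ** 2  # intensity at each angle

angles = np.linspace(-np.pi / 2, np.pi / 2, 1001)
narrow = aperture_pattern(0.2, 1.0, angles)   # a fifth of a wavelength wide
wide = aperture_pattern(20.0, 1.0, angles)    # twenty wavelengths wide

# The sub-wavelength aperture radiates almost uniformly in every direction,
# while the wide aperture concentrates its energy into a narrow central lobe.
print(narrow.min() / narrow.max())   # close to 1 (nearly isotropic)
print(np.argmax(wide))               # peak dead ahead, at theta = 0 (index 500)
```

The same sum, run with in-between widths, smoothly interpolates between ‘point source’ and ‘unchanged wavefront’.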
Because light waves have very short wavelengths (green light, for instance, has a wavelength somewhere around 530 nanometers), diffraction generally isn’t a problem even for very small lenses (for instance, the iPhone’s lens is many red-light wavelengths across, based on this stackexchange post). It’s possible to see visible-light diffraction, though, if your aperture’s small enough:
This is also why simple ray- (not wave-) tracing works so well for CGI renderers (well, most of the time).
However, both parabolic microphones and radio telescopes are diffraction-limited – that is, the extremely long wavelengths of the types of waves these devices observe blur out point sources into Airy disks. As we’ll soon see, it’s partially possible to remove these artifacts, although we’ll need spiffier techniques for finer details.
Why Not Use an Optical Telescope?
Optical (visible-light-based) telescopes have their advantages: the technology’s been around for a long time, they see in the same frequencies as their operators, and they aren’t as harshly limited by physical factors as radio telescopes. However, optical astronomy is generally limited far more by more mundane things, like…
- Light pollution! (Light, mostly from streetlights, enters the atmosphere and scatters, creating a diffuse haze which is also why you can’t see the stars at night in large cities)
- Astronomical seeing! (The air in the atmosphere’s often turbulent, especially on hot nights, causing images of stars to twinkle.)
- Mechanical wobble! (Motors and their assemblies aren’t perfect, and even minuscule imperfections can result in shaky images when you’re using an incredibly long lens.)
- Atmospheric absorption! (Water vapor and other atmospheric gases absorb many frequencies of light, including much of the infrared and ultraviolet. This makes it nearly impossible to observe those frequencies emitted by stars from Earth.)
- Weather! (Rain, clouds, and even snow – a common occurrence, since many telescopes are located on the tops of mountains to avoid atmospheric scattering and absorption – will obviously block out any chance of seeing the stars.)
- and finally, interstellar dust clouds! (Yes, you read that right – clouds of microscopic particles formed from supernovae absorb and scatter visible light, wreaking havoc with long-distance observations, but occasionally producing very nice images.)
Much of the modern science of astronomy consists of ways of getting around these limitations – and often, in surprising and really impressive ways.
For instance, astronomers recently figured out that atmospheric distortion can be un-distorted by warping a computer-controlled secondary mirror faster than the atmosphere can change. How do they know how to warp the mirror? Easy; they measure the distortion of a known star.
But what if that star isn’t bright enough? No problem; shoot lasers into the sky, exciting sodium atoms in the upper atmosphere, creating an artificial star.
But I digress.
or, can you sharpen a microphone?
The telescopes of the Very Large Array, located on a wide plain about fifty miles away from both Socorro and Pie Town*, New Mexico, manage to get around many of these limitations by using radio waves instead of visual light for astronomical observation.
*which really does have fantastic pies!
Nearly all major astronomical objects, from stars on up (although some planets’ atmospheres do as well), emit radio waves, and certain radio wavelengths (such as the hydrogen line, with a wavelength of 21 cm) contain incredibly useful information about the compositions of astronomical objects. (Unfortunately, some terrestrial objects – such as cell phones and spark plugs – also emit radio waves near the VLA’s current frequency bands.)
Radio waves also pass through interstellar dust and Earth’s atmosphere (consider, for instance, the fact that you can listen to a radio signal transmitted from many miles away from inside a building, but can’t see what’s on the other side of the wall without a window), eliminating astronomical seeing issues. Plus, radio’s long wavelengths (each VLA telescope can handle ten different wavelengths between 4 meters and 6 millimeters) enable the construction of very light and very large telescopes using frames instead of full reflectors, such as in the Giant Metrewave Radio Telescope:
Radio isn’t a magical cure-all, though: the longer wavelength of radio also corresponds to a wider beam-width (diffraction again), blurring the resulting images. We can do two things to compensate for this loss of sharpness:
- We can build a larger telescope, which isn’t too difficult (since we’re on Earth and not in space), lowering the amount of diffraction and also collecting signals over a larger area, which reduces noise. There is an upper limit, though – it’s very difficult to build a larger telescope than the 1000-foot dish of the Arecibo Observatory, which was literally built into a sinkhole at the top of a mountain range.
- Or, if we know what the diffraction pattern looks like, we can actually attempt to reverse the blurring and recover some amount of detail!
Here’s the idea: blurring is just the convolution of an image with a kernel.
Cyclic convolution (which is the same as convolution, except the blur wraps around the edges – so you need to zero-pad the sides of the image) can be quickly computed by taking the Fourier transform of the image and the kernel, multiplying them, and returning the inverse Fourier transform.
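As a sanity check, here’s the convolution theorem in action (a toy example of mine, with a small box blur – nothing VLA-specific): the FFT route gives exactly the same answer as summing over shifts directly.

```python
import numpy as np

def cyclic_convolve(image, kernel):
    # Convolution theorem: FFT both, multiply pointwise, inverse-FFT.
    # Both arrays must be the same shape (pad the kernel with zeros).
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0   # a 3x3 box blur, zero-padded to image size

blurred = cyclic_convolve(image, kernel)
```

Note that the blur wraps around the image edges – that’s the ‘cyclic’ part, and why you’d zero-pad a real image first.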
So if we know the blurring kernel (g), and our blurred image (f*g), we can recover f!
Alright, let’s try this out.
This is a radio image, captured in 1989 by the VLA, of the Whirlpool Galaxy, about 23 million light-years away from Earth. It was discovered in 1773, so this particular image isn’t really all that impressive, but the VLA’s gotten far better since then. The full image covers about three-tenths of a degree of the night sky.
So what does a single telescope see?
A single VLA telescope has a beamwidth of 8.6 arcminutes, or about 0.14 degrees. (You can actually calculate this directly from the equation for diffraction, but here we’re using the statistics from a 1980 book on the subject.) That’s nearly the diameter of the entire Whirlpool Galaxy, so the entire thing’s reduced to a faintly visible smudge:
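You can sketch that calculation yourself with the Rayleigh criterion, θ ≈ 1.22 λ/D. A VLA dish really is 25 meters across; the ~5 cm wavelength (around 6 GHz) below is my guess at the band behind the 8.6-arcminute figure, not something stated here.

```python
import math

def beamwidth_arcmin(wavelength_m, dish_diameter_m):
    # Rayleigh criterion for a diffraction-limited circular aperture.
    theta_rad = 1.22 * wavelength_m / dish_diameter_m
    return math.degrees(theta_rad) * 60  # radians -> arcminutes

print(round(beamwidth_arcmin(0.05, 25.0), 1))  # ~8.4' at a 5 cm wavelength
print(round(beamwidth_arcmin(0.21, 25.0), 1))  # ~35.2' at the hydrogen line
```

The second line shows why the problem gets worse at longer wavelengths: at the 21 cm hydrogen line, a single dish’s beam is four times wider still.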
Here, we’re blurring with a Gaussian kernel with a radius of 40 pixels (over a 1779-pixel image), which allows us to see some details, but not the fine structure. In reality, we’d probably be using an Airy kernel instead of a Gaussian one, but they’re close enough for demonstration.
So, let’s just divide the Fourier transform of that image by the Fourier transform of a Gaussian kernel, invert the Fourier transform, and we should be done!
Except instead of a nice, clean image we get
Because the Fourier transform of a Gaussian curve is another Gaussian, a few of the higher-order frequencies are, for all practical purposes, 0. And since the original image’s spectrum also consists almost entirely of low-frequency terms, we end up dividing one near-zero number by another – things go haywire and you basically get static.
In order to handle this noise (because that’s essentially what it is), we can use one of the many regularization approaches available; the simplest is to add a constant, very small number to the spectrum of the kernel before dividing.
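Here’s that fix as a sketch (a toy example of mine – a cyclically wrapped Gaussian kernel and a random ‘image’, with eps standing in for whatever small constant you choose): dividing the spectrum by K + eps instead of K keeps the near-zero terms from exploding.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # A normalized Gaussian centered at pixel (0, 0), wrapped cyclically.
    x = np.arange(size)
    x = np.minimum(x, size - x)   # cyclic distance from the origin
    g = (np.exp(-x[:, None] ** 2 / (2 * sigma ** 2))
         * np.exp(-x[None, :] ** 2 / (2 * sigma ** 2)))
    return g / g.sum()

def deconvolve(blurred, kernel, eps=1e-3):
    # Divide spectra, with a small constant taming the near-zero terms.
    K = np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / (K + eps)))

rng = np.random.default_rng(1)
image = rng.random((128, 128))
kernel = gaussian_kernel(128, sigma=3.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
restored = deconvolve(blurred, kernel)
# The restoration isn't perfect (the highest frequencies stay suppressed),
# but it's measurably closer to the original than the blurred version is.
```

Larger eps values trade sharpness for noise suppression – exactly the tension described above.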
But, practically, there are also quantization artifacts (from reducing the floating-point image down to eight bits)
which basically null out whatever we were doing. Granted, we have made some progress – here’s what we started out with compared to what we managed to reconstruct –
but it’s not terrific, and certainly no substitute for an optical telescope. And, unfortunately for us, building a mega-Arecibo isn’t plausible. Not only that, the generally productive xkcd strategy
doesn’t even work – we might get a brighter image, but it’ll still be blurry.
So what else can we try?
or, Using Diffraction for Fun and Profit (and the occasional scientific discovery)
45 GHz is a ridiculously high frequency. So is 74 MHz, the lowest band the VLA receives. We might be able to use this to our advantage.
If we can measure the phase difference between two signals (the relative offset in the signals two antennae receive, caused by the difference in the time the wave takes to reach each antenna), we might be able to amplify a minuscule difference in the position of a point source into a huge difference in phase, which we can also detect.
Or, putting it another way, we can use the difference in the time it takes a signal from a radio source to reach two telescopes to narrow the main lobe of the beam (reducing how blurry the image is) in exchange for more ringing around stars. Then, we might be able to remove the ringing, leaving only the main lobe, either through the same Fourier transform trick as before, or by some other method we haven’t determined yet.
And that, essentially, is the idea behind interferometry. Let’s get down to the actual details.
Let’s suppose, for now, that there is exactly one star in the sky. We can generalize to arbitrary numbers of stars later on, but it’s easiest to take things, well, one thing at a time.
If we have two radio telescopes pointing at an infinitely-far-away source, separated by a distance b, the signals they receive will be exactly the same, except for a time delay τ = (b/c) sin θ on the one further away (where θ is the angle between the source and the perpendicular to the baseline).
Here, ω’s two pi times the frequency, so antenna 2 receives the signal V cos(ωt), and antenna 1 receives the delayed copy V cos(ω(t − τ)), where τ is the time delay.
Individually, we can get an approximation of the strength of the thing we’re looking at (its amplitude, V) by averaging the square of the cos wave over some amount of time – that’s what we’ve done before, and the problem was that the beamwidth (in this case, the square root of the envelope of the graph on the lower-right) was usually too wide.
We can then perform a neat trick, and multiply the two cos waves together, getting cosines of their arguments’ sum and difference:

V cos(ωt) · V cos(ω(t − τ)) = (V²/2) [cos(ωτ) + cos(2ωt − ωτ)]

which happen to be the cosine of the phase delay ωτ (and only the phase), and a really high-frequency term! So if we average this out over time, the high-frequency term cancels and we’ll get

R = (V²/2) cos(ωτ)

and substituting in our value for τ, the value for the pixel returned by the correlator (the machine that correlates signals between all 27 different telescopes and also handles a whole bunch of signal processing routines we’re glossing over right now) would be

R = (V²/2) cos((ωb/c) sin θ).
This (when multiplied by your beam’s strength per angle) is the pattern a single star will produce on your image. So long as the frequency (ω/2π) and the distance between our telescopes (b) are large, this interference pattern will have a far higher frequency than the original beam, but will have many rings around the center, making it look like a target. That’s actually a good thing, because then it’s easier to pinpoint the object’s precise location!
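To convince ourselves that time-averaging the product really does leave just a fringe term, here’s a quick numerical check (the frequency, baseline length, offset angle, and amplitude are arbitrary values I picked for illustration):

```python
import numpy as np

# Two antennas see the same cosine, one delayed by the geometric delay
# tau = (b / c) * sin(theta); multiply the two signals and time-average.
c = 3.0e8        # speed of light, m/s
freq = 1.4e9     # observing frequency, Hz (near the hydrogen line)
omega = 2 * np.pi * freq
b = 1000.0       # baseline length, m
theta = 1e-5     # source offset, radians
V = 2.0          # signal amplitude

tau = (b / c) * np.sin(theta)
t = np.linspace(0.0, 1e-6, 2_000_000)   # a microsecond of samples

measured = np.mean(V * np.cos(omega * t) * V * np.cos(omega * (t - tau)))
predicted = (V ** 2 / 2) * np.cos(omega * b * np.sin(theta) / c)
print(measured, predicted)   # nearly identical: the fringe term survives
```

Notice the leverage here: a 10-microradian offset on the sky already swings the fringe phase ωτ by a sizable fraction of a radian.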
According to the NRAO’s course on the subject, while a single telescope may only be able to locate an object within eight arcminutes, the 27 telescopes of the VLA, when combined, can pinpoint locations to within a thousandth of an arcsecond.
And, once we’ve got our sharp, but ring-ridden image, we can apply the same sorts of Fourier tricks as above to remove the rings and produce a clean image.
Alternatively, in the spirit of presenting proofs without words, here’s interferometry as a series of GIFs:
Finally, here’s one more way of looking at interferometry. Normally, waves are emitted by a source, propagate through space, and reach the telescopes at two different times. Because electromagnetism is time-reversible, we can look at this process the other way around: the two telescopes emit waves at two different times, which propagate through space and reach the source at the same time.
Conveniently, the sensitivity of the telescope to this point source depends on whether the two ‘telescope waves’ are constructively or destructively interfering with each other at that point in space – that is, the interferometer’s graph of reception is an interference pattern!
As an added bonus, this interference pattern corresponds exactly to the diffraction pattern of a wave passing through two slits separated by the baseline, which also provides another explanation of the VLA’s finer resolution at higher frequencies.
It seems like we can do almost anything with just two telescopes –
So What are All Those Other Telescopes For?
The very first radio telescope looked like this.
This was Karl G. Jansky‘s original antenna, built for Bell Labs to track down the sources of static interfering with transatlantic radiotelephone service, but which wound up detecting a large radio source at the center of the Milky Way. It consisted of one large antenna, an analog information storage system (a pen and notepad), and a track which allowed it to be rotated. While Jansky’s telescope wasn’t an interferometer, the reason there even was a track is relevant – if you use only two telescopes in your interferometer, or (equivalently) one long antenna, your signals will be sharp in one direction, and blurred in another.
The reason for this is (somewhat) simple: up until now, we’ve been working in two dimensions, with two telescopes. In three dimensions, there’s interference along the direction the telescopes are laid out, but no interference perpendicular to that direction. You need at least three non-collinear telescopes to get a sharp image, which is why the VLA has three arms set at 120-degree angles to each other.
However, the interferometry procedure described above doesn’t directly extend to more than two telescopes. Instead, we treat the three telescopes as (3 choose 2) = three baselines, one between each pair of telescopes. The VLA computes the results for each baseline, then adds up appropriate amounts of each point response, minimizing the rings while amplifying the center lobe.
With 27 antennas, we have (27 choose 2) = 351 independent baselines, all at once, making for an incredible point response.
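A sketch of how that baseline count falls out (the Y-shaped layout and the uniform 1 km spacing here are simplifications of mine – the real VLA’s antennas are spaced unevenly along each arm):

```python
import math
from itertools import combinations

# Nine antennas on each of three arms, 120 degrees apart.
antennas = []
for arm in range(3):
    angle = math.radians(120 * arm)
    for i in range(1, 10):
        r = i * 1000.0   # hypothetical uniform 1 km spacing
        antennas.append((r * math.cos(angle), r * math.sin(angle)))

# Every unordered pair of antennas contributes one independent baseline.
baselines = [(ax - bx, ay - by)
             for (ax, ay), (bx, by) in combinations(antennas, 2)]
print(len(antennas), len(baselines))   # 27 antennas, 351 baselines
```

Each baseline vector samples one spacing and orientation, which is why 351 of them, all different, give such a clean point response.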
Eventually, we’d get a sort of Gaussian point response – which is actually a bit of a problem! As we’ve seen before, when we blur an image with a Gaussian kernel, we find that we can’t reconstruct some of the high frequencies, lest we run into a division-by-zero error. To put it another way, the blurring, when followed by quantization, destroys information, and we can only reconstruct the large structures in the image. We can err in the other direction, too: if our telescopes are too far apart, we’ll only be able to reconstruct the fine details of the original image, and we won’t be able to see the large structures.
That’s why the telescopes of the VLA are shuttled around on their rails every few months, cycling through four configurations – from the D configuration, with a maximum baseline of less than one mile, to the 22-mile A configuration. By compositing all four images, we can achieve a complete radio image of just about any celestial object.
But we can do even better.
The Earth rotates, so by taking measurements every so often throughout the day, we can pretend we have multiple copies of the VLA around the planet, and infer even more baselines from that.
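Here’s that idea in miniature (a deliberately simplified model of mine: one east-west baseline rotating in the plane of the sky, whereas real baseline tracks are ellipses that depend on the source’s declination):

```python
import math

baseline = (1000.0, 0.0)   # one fixed east-west baseline, in meters

samples = []
for minutes in range(0, 12 * 60, 30):          # snapshots over half a day
    angle = 2 * math.pi * minutes / (24 * 60)  # Earth's rotation so far
    u = baseline[0] * math.cos(angle) - baseline[1] * math.sin(angle)
    v = baseline[0] * math.sin(angle) + baseline[1] * math.cos(angle)
    samples.append((u, v))

# One physical baseline yields 24 distinct effective orientations.
orientations = {round(math.atan2(v, u), 6) for u, v in samples}
print(len(samples), len(orientations))   # 24 snapshots, 24 orientations
```

Multiply that by 351 physical baselines and the coverage gets very dense, very fast.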
But we can do even better.
The VLA is so sensitive that the movement of the Earth’s crust beneath it blurs out its images. By making use of the Very Long Baseline Array, ten additional antennas spaced throughout the United States, the VLA can correct for these errors using what essentially amounts to a five-thousand-mile-wide, sparsely-sampled antenna.
And of course, there’s even more we can do on the signal processing side of things, which are, unfortunately, way out of the scope of this article. Techniques have been developed for correcting chromatic aberration (just as some lenses do), dealing with non-flat baselines using maximum entropy techniques, implementing spherical harmonic Fast Fourier transforms, running web servers, and simply handling the gigabytes of information the antennas return every second quickly, efficiently, and using hardware custom-designed not to interfere with its own antennas.
And yes, sometimes errors occur, and sometimes antennas go down. Neither the VLA, nor its operators, are perfect, and there are some things we probably won’t be able to glimpse for a long time.
All the same, the VLA is an incredibly impressive work of engineering, and it’s spotted things we wouldn’t be able to see any other way, from early protostars, to synchrotron radiation from black holes.
But let’s go back a bit: you’re actually at the Super Bowl, and you’re actually trying to record individual audience members. Or, let’s say you’re trying to mic a conference room without having to give everyone lapel microphones. Then, believe it or not, you may actually be familiar with the idea of microphone arrays, which essentially use techniques from interferometry to create virtual microphones which are more directed than any of their components – or acoustic cameras, which are now used for detecting audio emissions from products. Some sensor arrays detect waves propagating through the Earth’s crust, and use the results to detect the presence of oil. (But of course, it’s very difficult to create an audio source within the Earth – which is why they use the noise from nearby highways, rebounding off objects deep beneath the Earth’s surface.)
Interferometers have been used in meteorology, in wind tunnels, in chemistry, quantum mechanics, particle physics, microscopy, and undoubtedly a few more fields by the time you read this. Now, even some optical telescopes use rotating apertures to create a virtual array of smaller telescopes – just like the VLA.
Signal processing is still a developing field, and new techniques are being discovered every day, not just in astronomy, but in just about every scientific field there is. And that, to say the very least, is fantastic.