This texture shrinks on one axis while stretching on the other – and yet somehow loops every three seconds.

Here’s the full code of a (very) heavily modified version, which fits in one and a half tweets (you can also view it on Shadertoy):

```glsl
#define p exp2(fract(iDate.w/3.)+float(i))
void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    for(int i=-9;i<9;i++){
        fragColor += min(p,1./p)*texture2D(iChannel0,fragCoord*.004*vec2(1./p,p))/3.;
    }
}
```


The producer wants you to figure out how loud each section, and ideally how loud each *fan*, is cheering so that they can better choose where to station the T-shirt cannons. They don’t want to know the actual dialogue – they’re not the NSA – only each fan’s loudness, or amplitude*. You don’t even have to record the fans’ cheering over time, but just how loud each one is, on average, over the full game.

*technically, loudness is a psychoacoustic property of the amplitude, the average intensity, and the frequency spectrum of the signal, but in this article we’re assuming (for the purposes of keeping it *relatively* short) that they’re essentially the same.

The only problem?

You’re on the other side of the field, 160 feet away from the nearest spectator. And you have one microphone with which to record all twenty thousand fans.

*Hi! This is an introduction to astronomical interferometry and radio telescopy! This article might assume that you have some familiarity with Fourier transforms, but that’s about it. This article is currently in beta; if you find any issues, errors, or omissions, please let me know through the comments below. Thanks!*

The basic technique’s simple: point your microphone at a fan, record the average loudness over some length of time, then repeat for every other person there. If you record each direction for half a second, you’ll finish before the end of the game (although the resulting loudness *image* will be very noisy – the block of really enthusiastic Patriots fans in the center might not be quite as clear – because the rare occurrence of touchdowns violates our assumption that each fan is shouting at the same volume throughout the entire game. Perhaps pinpointing the location of vuvuzelas at the World Cup would be a better application of our technique.)

Unfortunately, this only works if our microphone records only the sounds coming from within one degree of its target. (In other words, if our microphone is so directionally sensitive that rotating it 1/360th of a full turn gives us a totally different sound.)

Unsurprisingly, most microphones are *specifically designed* to avoid this! (Most of the time, you don’t want the audio on a take to be ruined because an actor was two inches away from their mark.)

Omnidirectional microphones have no variation (hence their name), and so you have no choice but to record the entire crowd. Cardioid mics have *some* directionality (to a first approximation, they record things within a 120-degree angle), but they’re still not good enough.

We can do slightly better, and use a parabolic reflector to bounce all sound waves coming from one direction to a focal point – where we’ll place the microphone. All sounds *not* coming from that direction will, when reflected, miss the microphone and essentially be rejected.

In practice, though, the ‘beam’ of sound we’ll be able to hear won’t be a perfect cylinder, but will instead spread out into a sort of cone – not because of imperfect construction, but because sound is (for all practical purposes within the scope of this article) a wave instead of a constant flow of information-carrying particles (the corpuscular theory).

Because of this, the best parabolic microphones we could conceivably bring to the game would have to record at *least* a 48.4-degree range of sound, blurring out our map of loudness so much it would be nearly unusable.

And that’s why the NFL just gives players microphones.

Astronomers have the same problem: Although most telescopes use parabolic mirrors (or complicated lens designs), and should *hypothetically* have the ability to provide perfect focus, diffraction blurs out the image, preventing us from seeing the intricate details of far-away galaxies.

We’ve been talking about diffraction a lot already, and we’ll be talking about it a lot more, so at this point we should probably actually describe diffraction.

If you’re already familiar with the mechanics of lens and aperture diffraction, you can probably skip this section. Diffraction was actually relatively difficult for me to understand the first time around, so I’m including this here so that others may have a (hopefully better) explanation of what diffraction is.

Diffraction is a property of all electromagnetic waves, the behavior of which (when averaged over time) is described by the Helmholtz equation,

∇²A + k²A = 0

where ∇² is the Laplacian, k is the wavenumber (2π divided by the wavelength), and A is the amplitude.

Realistically, though, while the Helmholtz equation *does* technically perfectly describe diffraction (and can actually be directly used to optimally place WiFi antennas), it’s not particularly intuitive. Here’s a slightly better explanation:

Essentially, diffraction is the bending of waves around objects. It’s the reason why you can hear someone speaking when echoes alone wouldn’t carry the sound to you – such as on the other side of a pillar, or in an anechoic room:

But why do waves bend in the first place?

Intuitively, waves bend because they *have *to – because (at least in the case of pressure waves) this scenario depicting a wave passing through an aperture doesn’t make sense:

Generally, the wider the aperture, the less diffraction there is. When your aperture’s less than one wavelength across, the wave on the other side is essentially identical to that of a single point source, while if your aperture’s infinitely many wavelengths across, the resulting waveform’s identical on both sides of the aperture (well, if there even *is* an infinitely wide aperture that isn’t just open space). In between, the mechanics of waves passing through apertures are dictated by the Huygens-Fresnel principle: wave propagation through an aperture (in the ‘forward’ direction) can be expressed (but not necessarily explained) as a sum of spherical wavelets emitted from every point across the aperture.
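The Huygens-Fresnel picture is easy to check numerically. Here’s a minimal sketch (plain NumPy, with made-up aperture sizes in units of the wavelength) that sums wavelets from point sources spread across an aperture and confirms the two limits above: a sub-wavelength aperture radiates like a point source, while a many-wavelength aperture produces a tight forward beam.

```python
import numpy as np

def farfield_intensity(aperture_width, wavelength, angles, n_sources=200):
    """Far-field intensity from summing spherical wavelets emitted by
    point sources spread across the aperture (Huygens-Fresnel)."""
    k = 2 * np.pi / wavelength
    xs = np.linspace(-aperture_width / 2, aperture_width / 2, n_sources)
    # Path difference for a wavelet at position x, seen at angle theta: x*sin(theta)
    phasors = np.exp(1j * k * np.outer(np.sin(angles), xs))
    field = phasors.sum(axis=1) / n_sources
    return np.abs(field) ** 2

angles = np.linspace(-np.pi / 2, np.pi / 2, 1001)

narrow = farfield_intensity(0.1, 1.0, angles)   # sub-wavelength aperture
wide = farfield_intensity(50.0, 1.0, angles)    # many wavelengths across

print(narrow.max() / narrow.min())   # near 1: radiates almost uniformly
print(wide.max() / wide.mean())      # large: a sharp forward beam
```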

Because *light* waves have very short wavelengths (green light, for instance, has a wavelength somewhere around 530 nanometers), diffraction generally isn’t a problem even for very small lenses (the iPhone’s lens, for instance, is many red-light wavelengths across, based on this Stack Exchange post). It’s possible to see visible-light diffraction, though, if your aperture’s small enough:

This is also why simple ray- (not wave-) tracing works so well for CGI renderers (well, most of the time).

However, both parabolic microphones and radio telescopes are *diffraction-limited* – that is, the extremely long wavelengths of the types of waves these devices observe blur out point sources into Airy disks. As we’ll soon see, it’s partially possible to remove these artifacts, although we’ll need spiffier techniques for finer details.

Optical (visible-light-based) telescopes have their advantages: the technology’s been around for a long time, they see in the same frequencies as their operators, and they aren’t as harshly limited by physical factors as radio telescopes. However, optical astronomy is generally limited far more by more mundane things, like…

- Light pollution! (Light, mostly from streetlights, enters the atmosphere and scatters, creating a diffuse haze – which is also why you can’t see the stars at night in large cities.)
- Astronomical seeing! (The air in the atmosphere’s often turbulent, especially on hot nights, causing images of stars to twinkle.)

- Mechanical wobble! (Motors and their assemblies aren’t perfect, and even miniscule imperfections can result in shaky images when you’re using an incredibly long lens.)
- Atmospheric absorption! (Water vapor and other atmospheric gases absorb many frequencies of light, including infrared and ultraviolet, making it nearly impossible to observe stars at those frequencies from Earth.)
- Weather! (Rain, clouds, and even snow – a common occurrence, since many telescopes are located on the tops of mountains to avoid atmospheric scattering and absorption – will obviously block out any chance of seeing the stars.)
- and finally, interstellar dust clouds! (Yes, you read this right – clouds of microscopic particles formed from supernovae reduce and scatter visible light, wreaking havoc with long-distance observations, but occasionally producing very nice images.)

Much of the modern science of astronomy consists of ways of getting around these limitations – and often, in surprising and really impressive ways.

For instance, astronomers recently figured out that atmospheric distortion can be un-distorted by warping a computer-controlled secondary mirror *faster than the atmosphere can change*. How do they know how to warp the mirror? Easy; they measure the distortion of a known star.

But what if that star isn’t bright enough? No problem; *shoot lasers into the sky, exciting sodium atoms in the upper atmosphere, creating an artificial star*.

But I digress.

The telescopes of the Very Large Array, located on a wide plain about fifty miles away from both Socorro and Pie Town*, New Mexico, manage to get around many of these limitations by using radio waves instead of visual light for astronomical observation.

*which really does have fantastic pies!

Nearly all major astronomical objects, from stars on up (and even some planets’ atmospheres), emit radio waves, and certain radio wavelengths (such as the hydrogen line, with a wavelength of 21 cm) contain incredibly useful information about the compositions of astronomical objects. (Unfortunately, some terrestrial objects – such as cell phones and spark plugs – *also* emit radio waves near the VLA’s current frequency bands.)

Radio waves also pass through interstellar dust *and* Earth’s atmosphere (consider, for instance, the fact that you can listen to a radio signal transmitted from many miles away from inside a building, but can’t see what’s on the other side of the wall without a window), eliminating astronomical seeing issues. Plus, radio’s long wavelengths (each VLA telescope can handle ten different wavelengths between 4 meters and 6 millimeters) enable the construction of very light and very large telescopes using frames instead of full reflectors, such as in the Giant Metrewave Radio Telescope:

Radio isn’t a magical cure-all, though: The longer wavelength of radio also corresponds to a wider beam-width (diffraction again), blurring the resulting images. We can do two things to compensate for this loss of sharpness:

- We can build a larger telescope, which isn’t too difficult (since we’re on Earth and not in space), lowering the amount of diffraction and also collecting signals over a larger area, which reduces noise. There is an upper limit, though – it’s very difficult to build a larger telescope than the 1000-foot dish of the Arecibo Observatory, which was literally *built into a sinkhole at the top of a mountain range.*

Source: NAIC – Arecibo Observatory

- Or, if we know what the diffraction pattern looks like, we can actually attempt to reverse the blurring and recover some amount of detail!

Here’s the idea: blurring is just the convolution of an image with a kernel.

Here, our image is the red curve, our Gaussian kernel is the blue curve, and our blurred image is the green curve. From Weisstein, Eric W. “Convolution”, Wolfram MathWorld

*Cyclic* convolution (which is the same as convolution, except the blur wraps around – so you need to pad the edges of the image with zeros) can be quickly computed by taking the Fourier transforms of the image and the kernel, multiplying them, and taking the inverse Fourier transform of the product.

If we want to *remove* a convolution of g from (f*g), we just divide it out of the Fourier product:

F(f) = F(f*g) / F(g)

So if we know the blurring kernel (g), and our blurred image (f*g), we can recover f!
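As a sanity check, here’s a minimal 1-D sketch of this trick in NumPy (the signal and kernel are made up): blur by multiplying the spectra, then recover f by dividing them back out.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
f = rng.random(N)                          # the "true" 1-D image

# A narrow Gaussian blur kernel, wrapped around so convolution is cyclic.
x = np.arange(N)
g = np.exp(-0.5 * (np.minimum(x, N - x) / 2.0) ** 2)
g /= g.sum()

# Cyclic convolution: multiply the Fourier transforms, then invert.
blurred = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# Deconvolution: divide the blurred spectrum by the kernel's spectrum.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(g)))

print(np.max(np.abs(recovered - f)))       # essentially zero here
```

With this narrow kernel the division is numerically safe; a wider kernel wouldn’t be so forgiving, as we’re about to see.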

Alright, let’s try this out.

This is a radio image, captured in 1989 by the VLA, of the Whirlpool Galaxy, about 23 million light-years away from Earth. It was discovered in 1779, so this particular image isn’t really all that impressive, but the VLA’s gotten far better since then. The full image covers about three-tenths of a degree of the night sky.

So what does a single telescope see?

A single VLA telescope has a beamwidth of 8.6 arcminutes, or about 0.14 degrees. (You can actually calculate this directly from the equation for diffraction, but here we’re using the statistics from a 1980 book on the subject.) That’s nearly the diameter of the entire Whirlpool Galaxy, so the entire thing’s reduced to a faintly visible smudge:
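For a rough idea of where numbers like this come from, the Rayleigh diffraction limit θ ≈ 1.22 λ/D gives beamwidths in the right ballpark. The 25-meter dish and 6 GHz observing frequency below are illustrative assumptions, not the exact figures behind the 8.6-arcminute statistic:

```python
import math

def beamwidth_arcmin(dish_diameter_m, freq_hz):
    """Rayleigh diffraction limit: theta ~ 1.22 * wavelength / diameter."""
    wavelength = 3.0e8 / freq_hz           # c / f
    return math.degrees(1.22 * wavelength / dish_diameter_m) * 60.0

# A 25 m dish observing at 6 GHz comes out to roughly 8 arcminutes:
print(beamwidth_arcmin(25.0, 6.0e9))
```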

Just to make it even possible to recover information, let’s assume we’re using a telescope observing at about 30 times the original frequency (42.6 GHz, almost the VLA’s maximum supported frequency).

Here, we’re blurring with a Gaussian kernel with a radius of 40 pixels (over a 1779-pixel image), which allows us to see some details, but not the fine structure. In reality, we’d probably be using an Airy kernel instead of a Gaussian one, but they’re close enough for demonstration.

So, let’s just divide the Fourier transform of that image by the Fourier transform of a Gaussian kernel, invert the Fourier transform, and we should be done!

Except instead of a nice, clean image we get

Because the Fourier transform of a Gaussian curve is another Gaussian, a few of the higher-order frequencies are, for all practical purposes, 0. And, of course, when we divide almost-zero (since the original image *also* consists almost entirely of low-frequency terms) by almost-zero, things go haywire and you basically get static.

In order to handle this noise (because that’s essentially what it is), we can use one of the many approaches available and add a constant, very small number to the spectrum of the kernel.

And then we actually get a very nice reconstruction!

But, practically, there are also quantization artifacts (reducing a floating-point format to eight bits)

and noise

which basically null out whatever we were doing. Granted, we have made some progress – here’s what we started out with compared to what we managed to reconstruct –

but it’s not terrific, and certainly no substitute for an optical telescope. And, unfortunately for us, building a mega-Arecibo isn’t plausible. Not only that, the generally productive xkcd strategy

doesn’t even work – we might get a brighter image, but it’ll still be blurry.

So what else can we try?

45 GHz is a ridiculously high frequency. So is 74 MHz, the *lowest* band the VLA receives. We might be able to use this to our advantage.

If we can measure the phase difference between two signals (the relative offset in the signals two antennae receive, caused by the difference in the amount of time the wave takes to get to each antenna), we might be able to amplify a minuscule difference in the position of a point source into a huge difference in phase, which we *can* detect.

Or, putting it another way, we can use the difference in the time it takes a signal from a radio source to reach two telescopes to narrow the main lobe of the beam (reducing how blurry the image is) in exchange for more ringing around stars. Then, we might be able to remove the ringing, leaving only the main lobe, either through the same Fourier transform trick as before, or by some other method we haven’t determined yet.

And that, essentially, is the idea behind interferometry. Let’s get down to the actual details.

Let’s suppose, for now, that there is exactly one star in the sky. We can generalize to arbitrary numbers of stars later on, but it’s easiest to take things, well, one thing at a time.

If we have two radio telescopes pointed in the same direction at an infinitely far away source, separated by a distance b, the signals they receive will be exactly the same, except for a time delay τ on the one further away.

Here, ω’s two pi times the frequency, so antenna 2 receives the signal cos(ωt), and antenna 1 receives the delayed copy cos(ω(t − τ)).

Individually, we can get an approximation of the *intensity* of the thing we’re looking at (V) by averaging the cos wave over some amount of time – that’s what we’ve done before, and the problem was that the beamwidth (in this case, the square root of the envelope of the graph on the lower-right) was usually too wide.

We can then perform a neat trick, and multiply the two cos waves together, getting the cosines of their arguments’ sum and difference:

cos(ωt) · cos(ω(t − τ)) = ½ [cos(ωτ) + cos(2ωt − ωτ)]

which happen to be the cosine of something times the phase (and only the phase), and a really high-frequency term! So if we average *this* out over time, the high-frequency term drops out and we’ll get

(V/2) cos(ωτ)

and substituting in our value for τ – the geometric delay, τ = (b/c) sin θ, where θ is the angle between the source and the line perpendicular to the baseline – the value for the pixel returned by the correlator (the machine that correlates signals between all 27 different telescopes and also handles a whole bunch of signal-processing routines we’re glossing over right now) would be

(V/2) cos((ωb/c) sin θ)

This (when multiplied by your beam’s strength per angle) is the pattern a single star will produce on your image. So long as the frequency (ω/2π) and the distance between our telescopes (b) are large, this interference pattern will have a far higher frequency than the original beam, but will have many rings around the center, making it look like a target. That’s actually a good thing, because it makes it easier to pinpoint the object’s precise position!
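The time-averaging argument is easy to check numerically. In this sketch (unit-amplitude signals, with an arbitrary 5 GHz frequency and 1 km baseline), multiplying the two received signals and averaging really does leave just (1/2) cos(ωτ):

```python
import numpy as np

c = 3.0e8                         # speed of light, m/s
omega = 2 * np.pi * 5.0e9         # an assumed 5 GHz observing frequency
b = 1000.0                        # an assumed 1 km baseline
theta = 1.0e-6                    # source 1 microradian off-axis

tau = (b / c) * np.sin(theta)     # geometric delay between the antennas

# Sample both signals over many wave periods, then average their product.
t = np.linspace(0.0, 2.0e-8, 200_001)
s1 = np.cos(omega * t)            # antenna 1
s2 = np.cos(omega * (t - tau))    # antenna 2: the same signal, delayed

measured = np.mean(s1 * s2)
predicted = 0.5 * np.cos(omega * tau)

print(measured, predicted)        # the high-frequency term averages away
```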

According to the NRAO’s course on the subject, while a single telescope may only be able to locate an object within eight arcminutes, the 27 telescopes of the VLA, when combined, can pinpoint locations to within a *thousandth of an arcsecond*.

And, once we’ve got our sharp, but ring-ridden image, we can apply the same sorts of Fourier tricks as above to remove the rings and produce a clean image.

Alternatively, in the spirit of presenting proofs without words, here’s interferometry as a series of GIFs:

Finally, here’s one final way of looking at interferometry. Normally, waves are emitted by a source, propagate through space, and reach the telescopes at two different times. Because electromagnetism is time-reversible, we can look at this process the other way around: the two telescopes emit waves at two different times, which propagate through space and reach the source at the *same* time.

Conveniently, the sensitivity of the telescope to this point source depends on whether the two ‘telescope waves’ are constructively or destructively interfering with each other at that point in space – that is, the interferometer’s graph of reception is an interference pattern!

As an added bonus, this interference pattern corresponds exactly to the pattern of a wave passing through two slits separated by the baseline, which also provides another explanation of the VLA’s better sensitivity at higher frequencies.

It seems like we can do almost anything with just two telescopes –

The very first astronomical radio telescope looked like this.

This was Karl G. Jansky’s original telescope, originally built to track down sources of static in transatlantic radiotelephone links, but which wound up detecting a large radio source at the center of the Milky Way. It consisted of one large antenna, an analog information storage system (a pen and notepad), and a track which allowed it to be rotated. While Jansky’s telescope wasn’t an interferometer, the reason there even was a track is relevant – if you use only two telescopes in your interferometer, or (equivalently) one long antenna, your signals will be sharp in one direction, and blurred in another.

The reason for this is (somewhat) simple: Up until now, we’ve been working in two dimensions, with two telescopes. In three dimensions, there’s interference along the direction the telescopes are laid out, but no interference perpendicular to that direction. You need at least three to get a sharp image, which is why the VLA has three arms set at 120 degree angles to each other.

However, the interferometry procedure described above doesn’t actually work for more than two telescopes at once. Instead, we treat the three telescopes as (3 choose 2) = three baselines, one between each pair of telescopes. The VLA computes the results for each baseline, then adds up appropriate amounts of each point response, minimizing the rings while amplifying the center lobe.

With 27 antennas, we have (27 choose 2) = **351** independent baselines, all at once, making for an incredible point response.
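The baseline counts are just binomial coefficients:

```python
from math import comb

# Every pair of antennas forms an independent baseline:
print(comb(3, 2))    # 3 baselines with three antennas
print(comb(27, 2))   # 351 baselines with the VLA's 27 antennas
```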

Eventually, we’d get a sort of Gaussian point response – which is actually a bit of a problem! As we’ve seen before, when we blur an image with a Gaussian kernel, we find that we can’t reconstruct some of the high frequencies, lest we run into a division-by-zero error. To put it another way, the blurring, when followed by quantization, *destroys information*, and we can only reconstruct the large structures in the image. We can err in the other direction, too: if our telescopes are too far apart, we’ll only be able to reconstruct the fine details of the original image, and we won’t be able to see the large structures.

That’s why the telescopes of the VLA are shuttled around on their rails every few months, cycling through four configurations – from the D configuration, with a maximum baseline of less than one mile, to the 22-mile A configuration. By compositing all four images, we can achieve a complete radio image of just about any celestial object.

But we can do even better.

The Earth rotates, so by taking measurements every so often throughout the day, we can pretend we have multiple copies of the VLA around the planet, and infer even more baselines from that.

But we can do even better.

The VLA is so sensitive that *the movement of the Earth’s crust beneath it* blurs out its images. By making use of the Very Long Baseline Array, ten additional antennas spaced throughout the United States, the VLA can correct for these errors using what essentially amounts to a five-thousand-mile-wide sparsely-sampled antenna.

And of course, there’s even more we can do on the signal processing side of things, which are, unfortunately, way out of the scope of this article. Techniques have been developed for correcting chromatic aberration (just as some lenses do), dealing with non-flat baselines using maximum entropy techniques, implementing spherical harmonic Fast Fourier transforms, running web servers, and *simply handling* the gigabytes of information the antennas return every second quickly, efficiently, and using hardware custom-designed not to interfere with its own antennas.

And yes, sometimes errors occur, and sometimes antennas go down. Neither the VLA, nor its operators, are perfect, and there are some things we probably won’t be able to glimpse for a long time.

All the same, the VLA is an incredibly impressive work of engineering, and it’s spotted things we wouldn’t be able to see any other way, from early protostars, to synchrotron radiation from black holes.

But let’s go back a bit: you’re actually at the Super Bowl, and you’re actually trying to record individual audience members. Or, let’s say you’re trying to mic a conference room without having to give everyone lapel microphones. Then, believe it or not, you may actually be familiar with the idea of microphone arrays, which essentially use techniques from interferometry to create virtual microphones which are more directed than any of their components – or acoustic cameras, which are now used for detecting audio emissions from products. Some sensor arrays detect waves propagating through the Earth’s crust, and use the results to detect the presence of oil. (But of course, it’s very difficult to create an audio source within the Earth – which is why they use *the noise from nearby highways, rebounding off objects deep beneath the Earth’s surface*.)

Interferometers have been used in meteorology, in wind tunnels, in chemistry, quantum mechanics, particle physics, microscopy, and undoubtedly a few more fields by the time you read this. Now, even some *optical* telescopes use rotating apertures to create a virtual array of smaller telescopes – just like the VLA.

Signal processing is still a developing field, and new techniques are being discovered every day, not just in astronomy, but in just about every scientific field there is. And that, to say the very least, is fantastic.


The rules are very simple: every time you swipe across the screen, all of the tiles try to move in the direction you swiped. Two tiles can combine if their values add up to 3, or if the tiles are equal and both integer multiples of 3. If you try to combine two tiles (by squishing one against the wall) and they can’t, then they act as a barrier, and that particular column or row doesn’t budge. Finally, there’s the part which makes it tricky: Every time you move the tiles, another tile is introduced. The goal is to reach the elusive 6144 tile, or more realistically, to last as long as you can without running out of possible moves.
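The combining rule is compact enough to write down directly; here’s a sketch (the function name is mine, not from the game’s code):

```python
def can_combine(a: int, b: int) -> bool:
    """Two Threes tiles merge if they sum to 3 (a 1 and a 2),
    or if they're equal and both multiples of 3."""
    return a + b == 3 or (a == b and a % 3 == 0)

assert can_combine(1, 2)        # 1 + 2 -> 3
assert can_combine(3, 3)        # 3 + 3 -> 6
assert can_combine(6, 6)        # 6 + 6 -> 12
assert not can_combine(1, 1)    # 1s can't merge with each other
assert not can_combine(2, 2)    # neither can 2s
assert not can_combine(3, 6)    # unequal multiples of 3 don't merge
```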

For reasons which are yet to be fully understood, this game has attracted a stunningly large audience of players. Perhaps it’s because the game’s aesthetically appealing, or perhaps it’s because it’s apparently (and initially) easy to get past the first few stages, and yet nearly impossible to reach the final goal. It’s also a game which encourages study from just about everyone who’s played it (in the same way as Chess does), but also lacks any sort of threat of failure – a cheerful sound plays, and you get to see how many points you wound up with without losing in any sort of public way. In any case, even though it’s been around for only two and a third months, there’s been an enormous amount of public interest and quite a few successful attempts to work out how the game works internally. Unfortunately, quite a few clones, some of which have offered notable modifications to the game, have also been created.

One of the most notable clones is Gabriel Cirulli’s *2048*, which is almost identical to *Threes*, except:

- The tiles are the powers of 2 (2, 4, 8…) instead of three times the powers of two along with 1 and 2 (1, 2, 3, 6, 12, 24…)
- Only tiles reading 2 and 4 are ever inserted, as opposed to the 1, 2, 3, and sometimes 6 or more of *Threes*
- The tiles slide as far as possible instead of moving at most one space
- The tiles are placed randomly on the board (in *Threes*, they only ever enter from the edge you swiped from)
- The goal is to get to 2048 instead of 6144, which makes the game a bit easier, since there are two types of tiles you never have to deal with, and
- *2048* is free and open-source, and this, more than anything else, has probably led to its popularity and the number of subsequent clones.

If you’ve never played *Threes* or *2048* before, I highly recommend giving them a try, if only so that you can develop your own intuition for these games.

One of the few things that everyone who’s played *Threes* or *2048* agrees about is that these games are *really* difficult. As it turns out, people have discovered quite a few strategies for playing these games which make them a bit easier, but usually not by very much. However, there is a complicated and arcane algorithm for *2048*, known as the Corner Strategy, that will allow you to match or surpass scores you may have spent hours achieving in just a few minutes, using only a few very simple calculations.

This works *ridiculously* well, given the amount of thought necessary to run this algorithm.

(This isn’t sped up or time-lapsed in any way)

Of course, the standard corner strategy rarely gets you to the 2048 tile, though it sometimes does. There are other methods, such as the so-called Mancini technique (keep the largest number in a corner, construct chains, never press Left), but almost all are versions of the Corner Strategy.

What we’d really like to know, though, is how to play *2048* or *Threes* optimally; that is, what is an algorithm that will play a game so as to gain the highest score? While this is almost certainly extraordinarily computationally intensive, a number of programmers have developed algorithms which play *2048* extraordinarily well. There was actually a bit of a competition on StackExchange not very long ago to design the best AI for the game, and many of the submissions were able to attain the 2048 tile almost all of the time! In particular, the best AI, developed by nneonneo, uses a surprisingly simple technique known as *expectimax optimization*, which works something like this:

- Consider the game tree of *2048*: We move in one direction, then the computer places a piece randomly, then we move in another direction, the computer places another piece randomly, and so on until we can’t move anymore.
- Suppose we can assign a “score” to each state of the board, which tells us roughly how good a position is, without actually looking into the future for possible moves or anything like that. The function used to calculate the score can range from something as simple as counting the number of empty spaces on the board to complicated heuristics (such as ovolve’s combination of monotonicity, smoothness, and free tiles). It’s a bit like looking at a chessboard and guessing how much one side is winning.
- That said, we can get a better idea of how good a particular state is if we can look a few moves ahead, and measure the approximate score of each of *those* positions, assuming we played optimally up to each one.
- Now, suppose we’re at a particular state, and we want to determine how good a move such as, say, moving right is. It’s actually fairly easy to compute the expected score of the *computer’s* move – just add up the score times the probability of a particular move for each move the computer could make. For instance, if there were a probability of 0.9 that the computer would place a 2 (say) resulting in a state with a score of 5, and a probability of 0.1 that the computer would place a 4, resulting in a state with a score of 2, then the expected score would be

0.9*5 + 0.1*2 = 4.7

- If we know how good each move we could make is, then we should just play the best move (obviously).
- Expectimax optimization starts by asking “What is the best move I can make?” at the current state. To do that, it has to compute the score of each of the moves it could make, which it does by first branching over each of the moves the computer could make, and then measuring the score of each of the resulting positions by asking “What is the score of the best move I can make?” for each of *those*. Theoretically, this could go on forever, so expectimax just uses the base heuristic once it’s sufficiently far down the game tree (that is, once it’s thinking a particular number of moves ahead). Once it has decided on an accurate score for each of the possible moves it could make, it simply plays the one with the best score.
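The steps above can be condensed into a short sketch. This is a toy version, not nneonneo’s implementation: the heuristic is just the number of empty cells, the search depth is tiny, and there are none of the optimizations (bitboards, caching) a serious solver would use.

```python
def slide_row(row):
    """Slide one row toward the front, merging equal neighbors once (2048 rules)."""
    tiles = [t for t in row if t]
    out, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)
            i += 2
        else:
            out.append(tiles[i])
            i += 1
    return tuple(out + [0] * (len(row) - len(out)))

def moves(board):
    """Yield (direction, new_board) for every move that changes the board."""
    oriented = {
        "left":  board,
        "right": tuple(r[::-1] for r in board),
        "up":    tuple(zip(*board)),
        "down":  tuple(r[::-1] for r in zip(*board)),
    }
    for name, grid in oriented.items():
        slid = tuple(slide_row(r) for r in grid)
        if name == "left":
            new = slid
        elif name == "right":
            new = tuple(r[::-1] for r in slid)
        elif name == "up":
            new = tuple(zip(*slid))
        else:  # down
            new = tuple(zip(*(r[::-1] for r in slid)))
        if new != board:
            yield name, new

def heuristic(board):
    """Base score: just count empty cells (the simplest option named above)."""
    return sum(1 for row in board for t in row if t == 0)

def chance_value(after, depth):
    """Expected value over the computer's random tile placement."""
    empties = [(i, j) for i in range(4) for j in range(4) if after[i][j] == 0]
    total = 0.0
    for (i, j) in empties:
        for tile, p in ((2, 0.9), (4, 0.1)):   # 2048's placement odds
            placed = tuple(
                tuple(tile if (r, c) == (i, j) else after[r][c] for c in range(4))
                for r in range(4)
            )
            total += (p / len(empties)) * expectimax(placed, depth)
    return total

def expectimax(board, depth):
    """Score of the best move from `board`, looking `depth` plies ahead."""
    if depth == 0:
        return heuristic(board)
    options = list(moves(board))
    if not options:
        return heuristic(board)
    return max(chance_value(after, depth - 1) for _, after in options)

def best_move(board, depth=2):
    return max(moves(board), key=lambda m: chance_value(m[1], depth - 1))[0]

board = ((2, 2, 0, 0), (0, 0, 0, 0), (0, 0, 4, 0), (0, 0, 0, 0))
print(best_move(board, depth=2))
```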

Not only is this algorithm *very* good at playing the game when equipped with a good base heuristic – nneonneo’s implementation achieved **4096** in 97 of 100 trials, and gets to 16384 about once in every eight attempts – it’s also very fast!

(this is *also* not sped up in any way- it really is doing 30 moves per second!)

Of course, if you have an AI that can play the game, it’s not difficult to create an AI that always places the new tile in the worst possible place for the player, making the game more or less impossible. (For instance, see Stephen B. Beevan’s *Hatetris*.) This is exactly what Zsolt Sz. Sztupák has done with *2048-Hard*, based on Matt Overlan’s solver. Interestingly enough, the “Impossible” mode isn’t *entirely* impossible – I actually managed to get the 64 tile, with a final score of 540, while the embedded AI solver often gets to the 128 tile.

Unfortunately, if you try the Corner Strategy on *Threes*, you’ll probably get the *lowest* score you’ve ever gotten. In fact, the designers of *Threes* found out about the corner strategy fairly early on, and modified the game a bit to make braindead strategies like it ineffective. This has the side effect of making the game *much* more difficult.

*Threes*, actually, is a bit less random, for two main reasons:

- Not only do you get to see what type of card will be placed next, but you can also predict future tiles by counting cards! According to TouchArcade member kamikaze28, the tiles are drawn from a shuffled deck of 12 cards (4 1s, 4 2s, and 4 3s), which is reshuffled every time the deck runs out of cards. (This means, for instance, that if you’ve just drawn 2 1s, 3 2s, and 4 3s, and the next card is a 2, then the one after that will almost certainly be a 1.) Additionally, if the highest card on the board is greater than 24, there is a 1 in 21 chance that the next card will come not from the deck of normal cards, but will be (apparently?) randomly chosen out of a set of cards from 6 to (top card)/8.
- As mentioned above, cards can come only from the side you swipe from, and even then they only enter into rows or columns that just moved. This is incredibly useful for combining 1s and 2s, although it’s still very easy to get a stray 1 or 2 in an inconvenient area on the board.
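Based on kamikaze28’s description, the drawing scheme can be sketched as follows; the exact bonus-card rule (and the class itself) is an assumption reconstructed from the text above, not verified against the game:

```python
import random

class ThreesDeck:
    """Sketch of the reported tile-drawing scheme: a shuffled 12-card deck
    (four 1s, four 2s, four 3s), reshuffled whenever it runs out, plus a
    rare bonus card once the top card on the board exceeds 24."""

    def __init__(self, rng=random):
        self.rng = rng
        self.deck = []

    def draw(self, highest_on_board):
        # 1-in-21 chance of a bonus card, chosen from {6, 12, ..., top/8}.
        if highest_on_board > 24 and self.rng.randrange(21) == 0:
            bonus, card = [6], 6
            while card < highest_on_board // 8:
                card *= 2
                bonus.append(card)
            return self.rng.choice(bonus)
        if not self.deck:
            self.deck = [1] * 4 + [2] * 4 + [3] * 4
            self.rng.shuffle(self.deck)
        return self.deck.pop()
```

The card-counting consequence falls right out: twelve consecutive draws (with no bonus possible) always yield exactly four of each card.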

However, *Threes* not only has two more cards than *2048* (which alone would make reaching its top card comparable to reaching an 8192 in *2048*), but the first two of these cards, unlike 2 and 4, cannot combine with themselves! According to the designers, as of March 28th only six people had actually reached the 6144 card in *Threes*. One of these people, known on TouchArcade as y2kmp3, gave a few observations on the game after reaching the final card:

1. Once again, it was a random “high number” tile card (in this case, a 384 tile card) that made this run a success.

2. The most difficult part of the game is to learn how to get out of a potential jam, the most dangerous of which is “staggering”. This occurs when a “low number” tile card appears between two very “high number” tile cards. It is very important to remove staggering as early as possible (without replacing it with another staggering).

3. I don’t use so-called center or corner strategy. Instead, I make it a priority to keep the “high number” tile card that I want to match against the wall, preferably not in a corner. This way, when a random “high number” tile card appears on the same wall, I can get to that card quickly to match.

4. While the game undoubtedly requires skills to win, the “element” of chance plays a significant role in this game. In fact, I would argue that chance dominates over skills in the later levels. I found it simply too difficult to maintain two separate chains to create two identical “high number” card tiles to merge. Instead, my strategy is to create only one “high number” tile card of each kind, so that whatever the random “high number” tile card appears, you can make use of it to escalate.

5. The game is quite taxing to play at the later levels. Near the end, I was keeping count of the 1’s and 2’s that were appearing and would frequently change my strategy when I could count on the fact that these tiles might not appear for awhile (assuming the stack theory is correct; see #6).

6. I, too, am convinced that there is some unknown stack from which the tile cards are drawn and this stack gets renewed and reshuffled.

7. I am fairly convinced that, given a number of open rows or columns where a new tile card can appear, their probabilities are NOT equal. More often than not, the new tile card would appear in a “less” favorable row or column instead of a “more” favorable row or column with which I could do an immediate match. I am fully aware of the potential issue of “recall” bias, so I welcome other players’ impression of this theory.

I should emphasize that good games of *Threes* take a *lot* of time to play through; y2kmp3’s run, for instance, lasted “10-15 hours”, most of it spent planning.

Although AIs have been written to play *Threes*, and even though *Threes* might appear to be a more deterministic game when compared to *2048*, I know of none that have actually beaten the game on an actual (non-simulated) device. However, a few (most notably Team Colorblind’s Threesus) have gotten very close.

So far as I know, the first *Threes* AI to have been published is Nicola Salmoria (of MAME and Nontrivial Games)’s simulator, which uses expectimax at a depth of 9 with the following heuristic:

+4 points for each empty square

+4 points for every pair of adjacent cards that can be merged

-1 point for each card which is between two higher cards vertically or horizontally (-2 points if both)

The reasoning behind the scoring should be clear: reward empty spaces or spaces which can be emptied later, and penalize checkerboard patterns which are harder to get rid of.
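As a sketch, that evaluation function might look like the following (my reading of the description above, not Salmoria’s actual code), with the board as a list of lists and 0 marking empty squares; the merge rule is the *Threes* one (1 and 2 combine, and equal cards of 3 or more combine):

```python
def can_merge(a, b):
    # Threes merge rule: 1+2, or two equal cards of value 3 or more.
    return (a, b) in ((1, 2), (2, 1)) or (a == b and a >= 3)

def evaluate(board):
    h, w = len(board), len(board[0])
    score = 0
    for r in range(h):
        for c in range(w):
            v = board[r][c]
            if v == 0:
                score += 4  # +4 for each empty square
                continue
            # +4 for every adjacent mergeable pair (count each pair once,
            # by only looking right and down).
            if c + 1 < w and can_merge(v, board[r][c + 1]):
                score += 4
            if r + 1 < h and can_merge(v, board[r + 1][c]):
                score += 4
            # -1 per direction in which the card sits between two higher
            # cards (-2 if trapped both horizontally and vertically).
            if 0 < c and c + 1 < w and board[r][c - 1] > v and board[r][c + 1] > v:
                score -= 1
            if 0 < r and r + 1 < h and board[r - 1][c] > v and board[r + 1][c] > v:
                score -= 1
    return score
```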

Although he’s apparently still tuning the AI, Salmoria’s program has some pretty good simulated scores:

[percentage of times each card was reached]

384: 100%

768: 100%

1536: 88%

3072: 34%

6144: 5%

min score = 29,553

median score = 89,235

max score = 733,119

Note, however, that while his AI searches through nearly as many positions as Deep Blue, it almost never achieves a 6144. An “oracle” version of the AI (that is, one that knew all the future cards and where each card would be placed) managed to achieve a 12288 a whopping 18% of the time, which seems to indicate that his program probably doesn’t have any major bugs; *Threes* is just *that* difficult.

Probably the most famous attempt at beating *Threes* via computer analysis is the robotic **Threesus** from Team Colorblind.

Not only is Threesus a remarkably good player, but it’s also capable of playing *Threes* on an iPad using an Arduino and two servomotors. In a particular sense, this robot with a Twitch channel has turned *Threes* into something of a spectator sport (Matthew Wegner, one of the programmers of Threesus as well as one-half of Team Colorblind, usually streams it playing the game for a few hours every night). Perhaps because of this popularity, Threesus is continually being improved based on suggestions from channel viewers, and has gotten *very* close to reaching 6144 (at one point, it had enough material on the board to reach the final card, but things were disorganized enough that the board became blocked up). However, Threesus has been playing *Threes* constantly at the Aztez (Team Colorblind’s flagship game) booth at the 2014 Penny Arcade Expo, and at some point, whether by sheer perseverance or just random chance, it finally succeeded.

Threesus got a 6144 at the @aztezgame PAX booth today! Then the tile intro screen barfed AI state, oops: pic.twitter.com/Do7dKmKaxH

— Matthew Wegner (@mwegner) April 11, 2014

…and then the AI crashed. Or started making horrible moves. (I’m not exactly sure.)

As explained by Walt Destler (the other programmer of Threesus and prolific game designer), Threesus uses expectimax with a depth of 6, and card-counts for the first 3 of those moves. (Afterwards, it assumes the cards are randomly distributed.) Furthermore, in order to increase performance, it codes the entire board as a single 64-bit integer, using 4 bits per square to represent values from 0 to 12288. Although this is almost identical to Salmoria’s approach, Threesus somehow has a better record of reaching every tile up to 6144, despite evaluating far fewer states!
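The bit-packing trick can be sketched in a few lines. The helper names and the exact card-index encoding here are assumptions (0 = empty, indices 1–3 are the cards 1, 2, and 3, and index n ≥ 3 encodes the card 3·2^(n−3)), but something of this shape fits every card up to 12288 into 4 bits per square:

```python
def get_square(board, r, c):
    """Read the 4-bit card index at row r, column c of a packed 4x4 board."""
    return (board >> (4 * (4 * r + c))) & 0xF

def set_square(board, r, c, index):
    """Return a new packed board with the square at (r, c) set to `index`."""
    shift = 4 * (4 * r + c)
    return (board & ~(0xF << shift)) | (index << shift)

def card_value(index):
    """Indices 0-3 are themselves; index n >= 3 means the card 3 * 2**(n - 3)."""
    return index if index <= 3 else 3 * 2 ** (index - 3)
```

(In C# the board would be a `ulong`; Python’s unbounded integers make the masking a little more forgiving.)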

100 games completed!

Total time: 03:03:22.0262799

Low Score: 30126

Median Score: 89436

High Score: 717960

% of games with at least a 384: 100%

% of games with at least a 768: 100%

% of games with at least a 1536: 94%

% of games with at least a 3072: 41%

% of games with at least a 6144: 1%

(Don’t read too much into the decrease in % of 6144. The difference between 1 game and 3 games is statistically insignificant.)

So why does Threesus do so well? The answer, so far as I can tell, is that it uses a better evaluation function – that is, it’s better at determining, without doing any heavy computation, how good or bad a position is. It’s not actually all that difficult to make a good *Threes*-playing AI using mediocre heuristics, but it’s nearly impossible to create a great AI without great heuristics. The original evaluation function worked a bit like this:

- Every empty space is worth 2 points.
- Every matching pair of adjacent cards is worth 2 points.
- A card next to another card twice its value is worth 1 point.
- A card trapped between two other cards of higher value, or between a wall and a card of higher value, is penalized 1 point.

but since then, it’s been modified quite a bit:

- Every empty space is worth **3** points.
- Every matching pair of adjacent cards is worth 2 points.
- A card next to another card twice its value is worth 2 points.
- A card trapped between two other cards of higher value, or between a wall and a card of higher value, is *penalized* **5** points.
- Cards of the second-largest size get a bonus of 1 point if they’re next to the largest card, and an extra point if they’re next to a wall.
- Cards of the third-largest size get a bonus of 1 point if they’re next to a wall and are next to a card of the second-largest size.
- The largest card gets a +3 bonus if it’s next to one wall, or a +6 bonus if it’s in a corner.

Notice that last +6 bonus for having the largest card in the corner: Threesus uses a Corner Strategy!

In conclusion, while the world’s best *Threes* AIs are pretty good at playing the game, and occasionally beat it, there’s still room for experimentation and improvement – from modifying evaluation functions, to reverse-engineering the deeper secrets of the game, to even trying completely new search methods.

Finally, here’s a quick puzzle: What’s the largest tile you can possibly achieve on the board of *Threes*, assuming the random number generator will give you exactly the tiles you need it to?


But first, some acknowledgements. The very first image, an example of a marble machine, was taken directly from denha’s fantastic video, “Marble machine chronicle“. While denha’s Youtube channel focuses primarily on marble machines, the electronics videos are certainly interesting as well, so definitely check it out. I used Blender for most of the non-schematic animations, so thanks to the people behind the animation engine and Cycles Render. And finally, the proof would undoubtedly have been much more difficult without the ideas of Demaine, Hearn, and Demaine (choose your own ordering), not to mention the many other people who’ve done work on computational complexity theory and all the things that led up to the field. (I’m sure some would prefer I name-checked everybody involved, but then I’d have to include my kindergarten teacher and the crew behind the Broome Bridge and, well, this would be a lot longer.)

So, without further ado, here are the various images and videos from my talk, presented in substantially more than 6 minutes.

This is, as mentioned above, denha’s “Marble machine chronicle”, for those who have never heard of marble runs under that particular name before.

I made (with assistance from Peter Bickford) a short video demonstrating a few recurring elements in marble machines- specifically, the ones I would be analyzing later in the talk. The original marble machine, from the Tech Museum of Innovation, also contains a few other elements (such as loop-de-loops, chimes, and randomized switches), some of which do nothing to the computational ability of the machine, others which actually do change the problem a bit. Additionally, I consider problems with exactly one marble or pool ball, although Hilarie Orman suggested that it might be possible to simplify the construction using two or more pool balls.

This is a decoy that looks like a duck, used for a quick gag and counterexample to the Duck test. This was actually the shortest clip and the longest render for the entire project; the good news was, it rendered at 24 times the speed of a Pixar film. The bad news was that it rendered at only 24 times the speed of a Pixar film.

A short slide made to demonstrate the proof that single-switch marble machines *cannot* emulate a computer (unless NP=PSPACE, which is really unlikely). Up top, we have the problem we’re trying to solve, while on the lower-left is a randomly generated marble run with edges annotated and on the right is a system of equations representing said marble run. (Click through to see the rest of the images)

The next simplest thing is a double switch, which is essentially two single switches ganged together so that they always have the same “value”. This was a bit of a problematic slide because the second half, showing a marble entering the front switch, then the back, actually hadn’t finished rendering by the time I went on stage.

These next four videos show how this particular gadget works, and how it emulates a single, amnesiac memory cell… most of the time. All of the schematics were generated using a custom animation and drawing package designed for marble runs within Mathematica.

Interestingly, given two switches ganged together, you can get any positive integer number of connected switches! Here’s a larger image of the construction (imagine the switch has been rotated 90 degrees clockwise):

This is composed of three modules fed by two main loops, which you can think of as train stops:

Essentially, it works a bit like this: We start by getting on the right train loop, and then ride the train around each station, getting out and drawing two chalk lines- red and blue, say- at every stop. We’re pursuing a hypocritical campaign against red chalk lines, though, so the moment we see a red chalk line we get out and erase it, and since we drew that line, we’re at the same station we originally started at. (You’ve probably figured this out by now, but each station is a module, the presence of a red chalk line is the state of the top double switch, and the presence of a blue chalk line is the state of the middle, long double switch). We immediately take out our orange chalk and draw a “campaign against red chalk” logo, and then board a train with specially tinted windows so that we can’t see anything blue, while still getting out at every station and erasing the red chalk marks. (We only draw the logo when exiting the first train.)

But once we *don’t* see a red chalk mark, we draw a red chalk mark and are about to board the first train when we see the red chalk mark (that we just drew) and remove it. Since we just exited the first train, we attempt to draw our logo, but see it’s already there and (apparently ashamed) erase it. (We’re now in the lower-left corner of the video.) We see the blue mark, and erase it while drawing a purple mark and a white mark. But whenever we draw something with the white chalk, we remember we have to clean or draw something with the blue chalk, at which point we see the purple mark (while erasing it) and finally erase the white mark. Every time we erase a white mark, we check to see if we’re holding a piece of blue chalk, which we are. In total, we’ve toggled the presence of a blue mark at every station and wound up at the same station we started at, holding a piece of blue chalk *if and only if* every station now has a blue chalk mark, with at most two rules per piece of chalk (depending on whether we came from the left or the right train).

Another way of looking at it is that the top double switch records the answer to the question “have I been here before?”, while the long middle double switch records the answer to the question “what is the state of the emulated n-switch?” We start out by toggling both types of switches until we’ve been somewhere before, at which point we start toggling only the first type of switches until we’ve never been somewhere before. We use an extra switch to remember whether we’re writing or erasing, located in the upper-left, and finally the gizmo in the lower-left simply reads the value of a switch without toggling it.

This is a “function call” gadget, which takes x as an input, and returns the marble along the track corresponding to the ordered set {x, f(x)}. It does this by using some really long switches and a linear array search: It stores the input, computes f(x) and stores that in a (2m+1)-switch, reads the input, and moves right across the (2m+1)-switch array until it sees a 1 (Right), at which point it exits. (Here, m is the number of inputs, and n is the number of outputs.) In schematic form, the marble will exit at output n*x+f(x).

This is the first part of a tape cell gadget, which allows you to store any integer within a range and read the stored value.

You can use that inside a function call gadget to exit through a particular output once set, creating a gadget which is much easier to work with. By this point, we’re not actually even working with the original double and single switches any more, which is not necessarily bad, but seems to imply that a few things could be simplified and/or micro-optimized. In fact, this particular gadget could use a pair of n-switches in place of the function call.

This is a quick and not fully defined block diagram of a Turing machine- “move left or right” should read “move left or right on the tape”. (Additionally, the version of this in the working slides had “set new state” written twice.)

This is a more formalized version of a Turing machine, which is essentially equivalent to the previous diagram, but which explicitly arranges the paths and set/read sections of the block diagram.

Finally, if you replace each block in the above diagram with the necessary construction, you get the above, which looks complicated but is still essentially equivalent to the above. This finishes the construction of the polynomial-space Turing machine and proves that double-switch marble runs are PSPACE-Complete. (The above picture is actually of a maximal 3-state Busy Beaver, which sets every single tape cell on the right. The entirety of the “programming” of the computer is contained within the connections above each of the three tape cells in the bottom-middle.)

By the way, all of the schematics above were constructed in *Mathematica* as functions, which means that you can actually tweak the number of inputs and outputs and have the schematic automatically change- except for the final Turing machine construction, which was stymied only by the deadline.

Interestingly enough, this proof is actually simpler if you try to construct a simulation of Conway’s Game of Life in the double-switch marble run system instead of a polynomial-space Turing machine; since the Game of Life can emulate a register machine, which is Turing-Complete, a simulation of the Game of Life on a bounded grid should be PSPACE-Complete. Furthermore, a computer which simulates the Game of Life can be easily constructed using only the n-switch construction (along with double switches and ramps and the other base elements), although this hasn’t been tested yet (the idea came too late for the conference).

This is by no means practical for a modern-day computer. The above 3-state Busy Beaver alone has 259 single switches, 852 double switches, and 3,756 paths (determined by a Python program), and if you wanted to simulate a laptop processor along with 4GB of memory, you would need a marble run about 1/5 the diameter of the Sun. (At this point, it starts generating its own gravity, and then the marble might get stuck, and all sorts of bad things start happening.) But the important thing is that *marble runs can compute* **anything!**

In particular, if a particular system can move an object from any place to any other place (as much as necessary) and emulate a double switch, then this shows that it can automatically emulate a Turing machine, and so that the problem of determining that system’s output is PSPACE-Hard. (The slides said -Complete, which is *in* PSPACE-Hard, but very likely not the same)

Finally, here are two extra questions which at the moment remain unsolved:

Is the construction simpler if there are *two *marbles? For instance, can you use the timing of marble lifts as memory? (From Hilarie Orman)

Can you construct a crossover gate in 2D?

If you have answers to either (or both) of these questions, I’d appreciate it if you could contact me at (rot13) grpuvr314@tznvy.pbz .

Oh, and one other thing: I plan to get Random (Blog) back up and running in the next few months, so stay tuned.


I was originally planning to finish the series of articles on SBPs with this post, concluding with the announcement of the optimal puzzles for a 4×4 grid. Partially, this was because the SBPFinder algorithm would have taken prohibitive amounts of time to compute the boards for any larger size. For example, for the classic 4×5 grid, it would take somewhere around 128 days! A much worse problem is that it would have used approximately 64 GB of RAM, which, even at its cheapest, is an investment I cannot reasonably justify.

Fortunately, however, I soon realized how utterly stupid I had been when designing the SBPFinder algorithm. In case the reader doesn’t want to read the thousands of words in the previous posts, I’ll quickly describe it: Essentially, the algorithm used dynamic programming and a peculiar way of indexing into a 2D array to, cell by cell, generate all possible positions. The primary flaw with this technique is that it generates lots of duplicates, which both slows it down and requires the use of a hash set or other method of eliminating duplicate positions. (To go into the technical details, this is because certain oddly shaped pieces, which – not coincidentally – appear in the list of hardest simple^3 SBPs, cause the dynamic programming stage to consider too many branches of the tree. This, in turn, is because the computer cannot currently time travel and see what it does before it does it.)

The solution is to place entire pieces (or voids) in undecided spaces, proceeding left-to-right and top-to-bottom. Currently, this has been implemented as a recursive function, as follows:

Given a board B (which may be partially filled):

- Find the first cell which has not been decided yet- that is, does not contain a cell of a piece or a void, but rather exists in a state of indecision between the two. (This is implemented as a third cell type)
- If there are no undecided cells, the board is a valid placement of pieces and voids. (It may not be justsolved, though, so these are typically placed in a file to be processed further)
- Otherwise,
- Recurse to the case where that cell is filled with a void,
- Find all pieces which can be placed on the board, which cover only undecided cells, *and* which cover the cell determined in step 1.
- For each of those pieces, recurse to the case where that piece is placed on the board.
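Here’s a minimal (and deliberately naive) sketch of that recursion, with boards as lists of lists: `None` marks an undecided cell, 0 a void, and positive integers label pieces. Rather than iterating over precomputed piece shapes, it grows candidate pieces as connected sets of undecided cells covering the first undecided cell – exponential, but fine for tiny boards:

```python
from itertools import combinations

def is_connected(cells):
    """True if the set of (row, col) cells is edge-connected."""
    cells = set(cells)
    stack, seen = [next(iter(cells))], set()
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        stack.extend(n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                     if n in cells)
    return seen == cells

def placements(board):
    """Yield every fully decided board reachable from `board`."""
    h, w = len(board), len(board[0])
    undecided = [(r, c) for r in range(h) for c in range(w)
                 if board[r][c] is None]
    if not undecided:
        yield [row[:] for row in board]
        return
    first, rest = undecided[0], undecided[1:]
    # Case 1: the first undecided cell becomes a void.
    voided = [row[:] for row in board]
    voided[first[0]][first[1]] = 0
    yield from placements(voided)
    # Case 2: place each connected piece that covers `first` and uses
    # only undecided cells; the new piece always gets the next label.
    label = 1 + max((v for row in board for v in row if v), default=0)
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            cells = {first, *extra}
            if is_connected(cells):
                filled = [row[:] for row in board]
                for (r, c) in cells:
                    filled[r][c] = label
                yield from placements(filled)
```

Because the nth piece always gets label n, no position is ever generated twice.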

Here’s the search tree for 2×2 SBPs using this method:

This approach fixes basically all of the problems with the previous algorithm: it doesn’t duplicate positions (the nth piece will always have label n), it uses little memory (once each board has been generated, it can immediately be written out to a file), and it doesn’t require complicated data structures. However, the search tree contains lots of redundancies – for example, see the first and second branches of the root node. There, the shapes of the subtrees are identical, and aside from the topmost square being either a 1×1 or blank, the nodes are identical as well. Here, it’s possible to use memoization/caching/memorization/value-storing, since future choices are dependent only on the shape of the collection of cells which have not been determined yet – not on the specific values of the cells that have been determined. Even better, every case which this algorithm evaluates corresponds to a polyomino fitting within the board containing the upper-left cell. Sorry if this is a bit verbose – here’s an example.

After the case for, say,

we can get all possible positions for the case by replacing the large piece in the first case with the two other pieces and a void, then renumbering the pieces consistently. What I mean by this last step is that, inside SBPFinder, boards are represented as arrays of nonnegative integers, with 0 representing a void, 1 the first piece, 2 the second, and so forth. This is to prevent pieces unexpectedly merging – for example, in 1D, this would come up when the 1 piece (a domino) in {1,1,2,2} was replaced by {1,2} (that is, two 1x1s), resulting in {1,2,2,2}. *tl;dr: if the program skips this step, bad things happen.* This still requires storing a huge number of boards, but it’s actually possible to implement this as a series of not very complicated read/write operations on files – in other words, I have the file system do the memoization. Here’s what the search tree looks like under this approach:
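The renumbering step can be sketched like this (hypothetical helpers, not the actual SBPFinder code): substituted pieces are first shifted into a label range nothing else uses, then every piece is relabelled by order of first appearance, which keeps the 1D example above from collapsing into {1,2,2,2}:

```python
def renumber(board):
    """Relabel pieces so the nth distinct piece (in scan order) is n."""
    mapping, out = {}, []
    for v in board:
        if v == 0:
            out.append(0)
        else:
            mapping.setdefault(v, len(mapping) + 1)
            out.append(mapping[v])
    return out

def substitute(board, old_label, new_pieces, offset=1000):
    """Replace the cells of `old_label` (in scan order) with new piece
    labels, shifted by `offset` so they can't collide with existing
    labels, then renumber everything consistently."""
    out, i = [], 0
    for v in board:
        if v == old_label:
            out.append(new_pieces[i] + offset)
            i += 1
        else:
            out.append(v)
    return renumber(out)
```

Replacing the domino in {1,1,2,2} with two 1x1s now correctly yields {1,2,3,3}, with the old piece 2 becoming piece 3.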

Anyways, the resulting algorithm can also be used to quickly enumerate the number of ways of packing polyominoes, with holes, into an n*m grid. For example, I can tell you that there are 6,904,653,755 ways of doing so for a 5*4 (height*width) board, 3,117,469,335,485 ways for a 5*5 board, and 168,315,938,005,873 ways for a 15*2 board. These computations considered 53341, 1054066, and 803760 cases respectively, which took a few minutes. The 5*4 board is just within the range of possibility, since it would take only around 145 GB to store every possible position; the other grid sizes would take up 81 TB and 5217 TB, which, while technically possible, are certainly infeasible.

Even better, there’s a simple upper bound: A000110, the Bell numbers! To derive this, first “unfold” a 2D n*m board by flattening the array to get a 1D array of length n*m (redundant syntax, I know), but with possibly disconnected pieces. Since the set of ways of placing disconnected polyominoes into a 1D array is a superset of the set of unfolded packings of polyominoes into a 2D board, and is much easier to compute the size of, we’ll use that as our simplified model. Then, the number of ways is defined by

$$a_{n+1} = a_n + \sum_{k=0}^{n} \binom{n}{k}\, a_{n-k},$$

i.e., a board with n+1 cells either starts with an empty cell, or with a disconnected piece (formed by choosing the first cell, and then adding on k of the n remaining cells). There are two ways to prove that a[n]=BellB[n+1]; the first is to try to express things as an EGF, which involves calculus and knowledge of binomial convolution, and eventually lands you at

$$\sum_{n \ge 0} a_n \frac{x^n}{n!} = e^x\, e^{e^x - 1}.$$

Integrating this to get the EGF of a_{n-1} gives $e^{e^x - 1}$, the exponential generating function of the Bell numbers.
An alternative approach is “proof by OEIS” – specifically, 1D non-connected SBPs with n cells have a simple bijection to rhyme schemes of n+1 lines: Add an empty space to the front, then add 1 to each cell (turning the voids into piece 1, piece 1 into piece 2, and so forth). Convert numbers to letters, and you’ve got a rhyme scheme, listed in the Comments section of OEIS A000110!

Unfortunately, the Bell numbers are not a terrific upper bound. An example: There are 55,807,716 ways of placing pieces on a 4*4 board (see post 2 in the series). BellB[16+1] is 82,864,869,804 – only three orders of magnitude off! Even a revised estimate from the old SBPFinder algorithm gives 3*4^6*5^9 = 24,000,000,000. Clearly, it could use some work.
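The recurrence and the quoted bound are easy to check numerically; this is a quick sketch (an illustration, not part of SBPFinder):

```python
from math import comb

# a[n] counts placements of possibly disconnected pieces into a 1D array of
# n cells, via the recurrence a[n+1] = a[n] + sum_{k=0..n} C(n,k) * a[n-k]
# (first cell empty, or first cell in a piece with k of the other n cells).
# If the argument is right, a[n] should equal BellB[n+1].

def disconnected_placements(n_max):
    a = [1]  # a[0] = 1: the empty board has one arrangement
    for n in range(n_max):
        a.append(a[n] + sum(comb(n, k) * a[n - k] for k in range(n + 1)))
    return a
```

Running it confirms the small cases (a[2] = 5, a[3] = 15) and reproduces BellB[17] = 82,864,869,804, the bound quoted above for 4*4 boards.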

So what’s the point of all this counting? Well, it takes about 100 seconds for SBPFinder to compute and write out all 1.42 GB of 4*4 positions and about 20 seconds to reduce those to the 12,595,006 justsolved positions. Extrapolating linearly, it should take only 4 hours to process *every single 5*4 board*. The time’s OK; the space can be handled – what’s stopping us?

or, 2073 boxes in a warehouse

The problem is memory. Again. Originally, SBPFinder would sort the positions into (n*m-1) files, with the kth file containing those positions with k pieces. Typically, the largest of these files contains one fourth of all positions – about 14 million positions, or a modest 226 MB, for 4*4 boards. Even reduced to just the justsolved positions, this comes out to 7.5 GB for the 5*4s. Furthermore, due to 64-bit alignment, C#’s garbage collector, and the use of a HashSet, boards take up about 7 times as much space as they should, turning this into 52 GB to store a single file. Even better, *because* of some much-needed changes (see the “Computation” section below), two HashSets momentarily each contain all the boards, although this, at least, could be remedied without switching to C++. I’m not exactly willing to dual-boot into a 32-bit OS, and I don’t know how to buy RAM. The solution is, of course, more algorithms.

We start by defining an *SBP Categorization Algorithm* as an algorithm that sorts justsolved boards into different files (or boxes, if you prefer) such that all justsolved states reachable from any particular board are also in the same file. The simplest one of these is to put all the boards in the same file- trivially, this satisfies the requirements. The previously defined algorithm is also an SBP categorization algorithm: since the number of pieces is invariant under the Moves metric, all states reachable from a particular puzzle will have the same number of pieces, and thus will be in the same file. Under the same trivial logic, since the pieces themselves never change under the Moves metric, sorting puzzles by the number of pieces of each shape they contain is also an SBP categorization algorithm. Unfortunately, since there are well more than 53340 polyominoes that can fit in a 5*4 board, and presumably an enormous number of combinations of polyominoes, this would result in a lot of files. Currently, what seems to be a correct balance is to categorize boards by a list of the number of n-ominoes that each one has, with n going from 1 to 20. For example, if an SBP position had 3 monominoes, 1 tetromino, and 2 pentominoes, it would be given the list (3,0,0,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0) and placed in the file with all other boards with that list.
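As a sketch (an illustration of the scheme, not the actual SBPFinder code), the categorization key is just a count of pieces by size:

```python
from collections import Counter

def category_key(board, max_size=20):
    """Map a flat board (0 = void, positive ints = piece labels) to the
    tuple of n-omino counts used to name its file."""
    cells_per_piece = Counter(v for v in board if v != 0)   # label -> size
    pieces_per_size = Counter(cells_per_piece.values())     # size -> count
    return tuple(pieces_per_size.get(n, 0) for n in range(1, max_size + 1))
```

The example above – 3 monominoes, 1 tetromino, 2 pentominoes – comes out as (3, 0, 0, 1, 2, 0, …), and all boards sharing that key land in the same file.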

It’s easy to see that the number of files that will be generated using this technique is at most the sum of A000041, the partition numbers, from 1 to n*m-1. (This is because we are “partitioning” the n*m cells into blocks, and there must be a number of holes from 1 to n*m-1). Checking this, we get 2086 files for a 5*4 grid, which is a very close upper bound- but in practice, there are only 2073 files. Where do these extra 13 come from?

(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0)
(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0)
(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0)
(0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0)
(0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0)
(0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0)
(0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0)
(0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0)
(0,0,0,0,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0)
(0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0)
(0,0,0,0,1,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0)
(0,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0)
(0,0,0,0,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0)

The Impossible Files

In each of the cases just listed, there are no puzzles with the specified numbers of polyominoes because there are no *justsolved* positions. In an earlier blog post, I defined a justsolved position as one where a piece had just moved into the goal position (as far to the lower-right as it can possibly go). This would mean that the piece in question had moved right or down, and thus must be able to move left or up (since moves are reversible). This quickly rules out the first of these files, since it is impossible to place a 19-omino in a 5*4 board such that it can move *at all*. The other files are a bit more difficult, but can be eliminated in a similar way. For example, in the last file (2 pentominoes and 1 nonomino), each of the pieces needs at least two empty spaces to move, which can be seen by ruling out the I-pentomino and noting that the minimum dimension of any bounding box is 2. However, there is only one empty space, so no movement can occur, and so there can be no justsolved boards.

Way back in Part 3, I poorly attempted to justify avoiding a few thousand puzzles which had ambiguous goal pieces. Specifically, in cases like

it’s impossible to tell which piece is the goal piece in this justsolved state. Turns out there is a way to deal with such ambiguity and still find the hardest simple^{3} SBP, suggested by a remarkably foreseeing Navy gunner:

*When you come to a fork in the road, take it!*

Furthermore, at the time, it seemed like there would be no puzzle which was harder if the goal piece was the one second from the lower-right-hand corner. However, I eventually stumbled upon one of Dries de Clerq’s many unnamed SBPs, which turned out to be a bit of a wake-up call:

So, SBPSolver now considers all cases where there may be multiple goal pieces. (There is a slight inefficiency in that the operation to move boards from one HashSet to another [lines 296-297] momentarily takes up twice as much memory as the file itself, but this hasn’t posed a problem so far, and could be fixed with some work.) Lastly, I also multithreaded it. This shouldn’t really need explanation; all the threads just work on different files.

*a tour of interesting simple^{3} SBPs*

In total, it takes about 16 days to generate and solve every single simple^{3} SBP: two days to write out all the justsolved positions, a few hours to sort them into individual files, and two weeks to compute the “best” puzzles for each of these files. This comes at a cost of a whopping 180 GB of disk space (most of which is temporary- only 29GB of positions are justsolved) but uses a relatively low amount of memory. So, after all these weeks (and a restart due to fixing the ambiguity problem), I finally managed to find and confirm the hardest 5*4, simple^{3} SBP. But as it turns out, it’s been discovered before.

I’ve mentioned Bob Henderson in the context of sliding block puzzles before, but it turns out he’s actually a rather active member of the puzzling world. He created many of the levels for Andrea Gilbert’s (clickmazes) BoxUp and Extreme TJ-Wriggle (TJ here standing for Tom Jolly) puzzles, independently discovered the advertising-invalidating “*Quzzle-Killer”* SBP along with Gil Dogon, analysed almost every puzzle on Nick Baxter’s Sliding Block Puzzle Page, and even created a whole new set of puzzles for that site. He’s contributed problems and solutions to any number of puzzle sources, including Ed Pegg’s Mathpuzzle. He may also have co-written a huge number of auto repair manuals, though I’m not sure of that last part.

Sometime before June 2002, Henderson devised the incredibly difficult “Gauntlet” series of puzzles, which he later posted to Dries De Clerq’s 3S Puzzle Ring. Each of these 8 puzzles is a standard sliding block puzzle, requiring you to move a monomino to a corner in a rectangular board no larger than 25 cells. The easiest takes 183 moves; the hardest, 484. The second of these, pictured above, just happens to be a 5*4 simple^{3} SBP. Oddly enough, he missed a 3*6 simple SBP which is a predecessor of a simple^{3} SBP:

Anyways, he posted a surprisingly specific description of how he finds hard SBPs while writing about a different topic on the Extreme TJ-Wriggle page, here excerpted:

About 20 years ago I entered an annual nationwide puzzle competition in the USA. One of the challenges was to solve a simple sliding block puzzle in the fewest possible moves. Having taken a few computer programming classes and automated the solutions to various block-packing puzzles, I felt sure that sliding-block puzzles would yield to a similar approach. The method I used was a full-width search: finding first all of the positions that could be reached in one move from the initial state, then those that could be reached in two moves, etc. until the goal state was found. It sped up the process considerably to store only the states that had not already been reached, which was most easily handled by storing the new states for each generation in their own file for comparison with states found in later generations.

As it happened, I won that competition, which led to a deepening interest in slide puzzles. I read L. Edward Hordern’s book Sliding Piece Puzzles, corresponded with David Singmaster (who wrote its foreword), visited Nick Baxter’s Sliding Blocks site, and provided Nick with several shorter slide puzzle solutions. I collected the sliding block solver software available over the Internet and even had Rik van Grol send me a floppy disc with his own original program allowing human solvers to create and solve slide puzzles on-screen.

I soon became interested in creating as well as solving such puzzles. The movement rules for most slide puzzles (as well as many other sequential-movement puzzles) allow any legal move to also be made in reverse. It followed that a solver that could take all possible winning positions as its initial state and perform a full-width search to first find all new states one move from winning, then all new states two moves from winning, etc. would eventually reach some end states from which no new states could be reached. The other sliding block solvers all seemed to be limited to only one initial state, but my solver used an input file that could include any number of states within the computer’s processing and memory limits. It was not difficult to write a block-packing program to generate a file containing all the winning states for any given board (grid) and set of block shapes. Running my solver program backward (without specifying a winning state), I was able to find those states the largest number of moves away from the goal and verify that they represented the most difficult possible slide puzzles (those requiring the most moves to solve) for a given grid, set of blocks and goal. …

If you’ve been following along (and if my writing hasn’t been too vague), you might notice that this is very, very similar to how SBPSearcher works. I don’t know why; honestly, I got the “justsolved” idea from Tromp & Cilibrasi. I think the main difference is that his program *probably* takes a list of pieces as input and finds the hardest puzzle using those, while SBPFinder just brute-forces all possible combinations.

It would be a bit uninteresting, or even cruel, to get through nearly 3000 words of technical analysis and end with “…and somebody already found it” (although this has happened before: for instance, Bill Cutler ran a distributed, three-year search on all 35,657,131,235 different “holey” 6-piece burrs in an attempt to find the one which takes the most moves to disassemble… and found that Bruce Love had discovered the hardest before, *by hand*.)

For the sake of completeness, I went and ran the set of three programs on all rectangular boards with less than 21 cells, recorded the results for each file, and ran a fourth Processing program to generate images for each of *these*. The result is a massive website which allows you to browse through the hardest puzzles for more than 5,400 sets of polyominoes, from 2*2 to 5*4 and even 10*2. Here’s a link to the page for 3x3s, which shouldn’t immediately crash your browser. All of the pages are interconnected- that is, you’ll be able to access any page from any other one, including itself. (In other words, its adjacency matrix is the 15*15 all-ones matrix.) As it turns out, when you have a lot of puzzles, a few will tend to be interesting. I’ve collected some of my favorites from the list below, ranging from unexpectedly difficult puzzles to novel but short positions. In each case, the goal piece is in the upper-right-hand corner (yes, there may be ambiguity), and needs to be moved into the lower-right-hand corner.

**(2,8):** This is probably the simplest of the ridiculously difficult puzzles to construct, requiring only dominoes, quarters, and a frame to create a position that requires **186 moves** to solve! There’s also a neat stairstep pattern visible on the right-hand side. (If you’re *really* on a budget, see the substantially easier **(0,9)**)

**(0,2,0,0,0,1):** Remember how the hardest 3-piece 4*4 simple^{3} SBP took 9 moves to solve? Turns out that with a 5*4 grid, we can do better by just 1 move! This is also the only 3-piece list which has a **10-move** problem.

**(14):** This puzzle is trivial to solve. Matching the **10 move** record, though, is much more difficult.

**(0,0,4):** If you happen to be a triomino purist, this is probably the SBP for you. Incidentally, this **18 move** puzzle also happens to be the longest puzzle with no pieces *smaller* than a triomino. (No pieces smaller than a domino: the 102-move **(0,5,2)**)

**(2,1,1,0,0,0,0,0,1):** There’s an interesting subset of simple^{3} SBPs which I call the “packing-sliding puzzles”, where there is one piece occupying the entire top row and all but the lower-right square of the right column, and the goal is really just to move that piece a single square. To do this, the other pieces have to be rearranged into exactly the right shape, which can occasionally be difficult. This is the longest of these puzzles for the 5*4 board; it takes **26** moves to solve.

**(0,2,1,0,0,1) and (0,3,1,1):** These are, respectively, the longest 4- and 5-piece simple^{3} SBPs on a 5*4 grid. That’s about it.

**(4,5,0,1):** The second-hardest simple^{3} SBP requires **226 moves**. Interestingly, it doesn’t have any triominoes, while most of the other high-move puzzles do.

**(2,1,4):** Speaking of triominoes, there are many 5*4 puzzles with a large number of triominoes which are still very difficult. Here’s one which requires **115 moves**.

**(3,2,1,0,1):** The sparsest puzzle that requires more than 100 moves to solve, possibly making this the most unintuitively difficult by some metric. (Just barely: It takes **101 moves**)

**(2,0,5):** And in case you can’t get enough triominoes… **29 moves.** (5 is the maximum)

**(3,6,1):** Finally, here’s a puzzle which is very, very similar to many 5*4 tray puzzles such as L’Ane Rouge or Minoru Abe’s Block 10, perhaps because of the symmetric voids. I feel like I’ve actually seen this particular design before, but I haven’t been able to find it anywhere else. It takes an amazing **193 moves** to solve!

So, that about does it for all the 5*4 simple^{3} SBPs. Hopefully I’ve provided you with some insight into the problem of solving puzzles via computer, and that some of the puzzles generated have been interesting. This isn’t done, by any means (the set ℤ*ℤ is infinite, after all), but I at least plan to take an indefinite hiatus from this problem.

Over the last four blog posts on this subject, we’ve discussed puzzles that combine ridiculous difficulty with ridiculous size, built a general program for generating hard SBPs using genetic algorithms (which tends to get stuck), and created and improved a pipeline of programs designed to quickly and efficiently solve billions of these puzzles, all motivated by a somewhat sadistic impulse to create the most frustrating puzzle possible in the smallest box. We’ve also discussed metrics (I still haven’t gotten around to implementing the BB metric), numbering, and cases where five-word instructions aren’t enough.

In case you wish to verify the results, I’ve posted the code on github, as well as a list of hashes for each of the files generated by SBPSorter. (The entire 29GB directory compresses *really* well, all the way into a single 400MB file which I’ll be more than willing to send to anyone who emails me about it.)

Also, if you somehow managed to get through all that and somehow would like to solve more SBPs, I recently learned about two collections/programs which are both well done: Rodolfo Valeiras’ (somewhat unstable) Deslizzzp, which also happens to have the earliest occurrence I know of Bob Henderson’s “Gauntlet” puzzles, and the totally nonminimalistic Bricks Game. Finally, the sliding block puzzles in Professor Layton are kind of the inspiration for this whole thing, so I would definitely recommend those (especially as they’re actually solvable by humans! Except Puzzle 170 in *The Last Specter*. I have *no idea* what is up with that).

Richard Hess manufactured the 132-move 4*4 puzzle for the 32nd International Puzzle Party!

Happy puzzling!


In case you missed it, you can click on the image to view it at its full resolution (4096 by 4096 pixels). I highly recommend doing so- for one thing, that background’s not gray.

If you’ve seen any of Robert Bosch or Craig Kaplan’s artwork based on the Traveling Salesman problem (thanks to George Hart for the links), this should look familiar: both are created with a single loop (or, in this case, a line), and appear to be pictures of faces.

So what is it? Simply, it’s a picture of David Hilbert, made out of a Hilbert curve. To create this image, Bill Gosper and I wrote a program which repeatedly subdivided each section of the fractal using a quadtree, until the desired darkness value was reached, with a maximum recursion depth of 10. However, due to the way the Hilbert curve is defined, many tilted lines were created when adjacent nodes of the quadtree were at different depths:

To solve this, we had a rendering program interpolate between the different depths using a sort of Z-shaped connector. (For more on this, see the bottom of the post.)

The same technique can be used on other fractals, as well. For example, here’s the considerably simpler “Peano(Peano)” curve:

Note the better contrast, mostly due to the thicker line width.

Lastly, here’s the more technical explanation for how this was done, copied verbatim from the math-fun thread:

“Okay, so it may be a bit kludgish, but here’s how we generated the Hilbert(Hilbert) picture, in case anyone’s interested in the technique:

We were originally inspired by Brian Wyvill’s work, specifically his portrait of Bill Gosper using a flowsnake curve. He’s apparently done quite a bit using fractals to indicate values in images, even going so far as to write a drawing program which automatically produces the curve and colors it in real-time. While his program doesn’t have to deal with quite so many line segments, he did decide to use fractional iterations (i.e., interpolated between two levels) of the fractal, which was something we wanted to avoid.

If I remember correctly, Bill wanted to create a portrait of Hilbert using a Hilbert curve right away, but due to some technical problems (described below) we started out trying to create a portrait of Peano using a Peano curve. The technique for doing this is fairly simple. Starting with an initial, preferably square image (we used the one from http://en.wikipedia.org/wiki/Giuseppe_Peano ), and a level n curve:

– For each edge, sample the image around the line and return the minimum value of the samples.
– If this value is less than the “expected brightness” of the current iteration depth (currently, naively computed as 1-n/maxDepth), subdivide that edge according to the current fractal system.

Repeat this until the iteration depth meets the maximum depth, at which point the line segments will create an approximation to the original image. Here’s the result using this technique: http://neilbickford.com/assets/peano-frac-2.png (5.87 MB, try peano-frac-small.png if that’s too large)

Aside from some noticeable problems, such as the clear difference in grays produced by only using 7 iterations, this technique works fairly well. However, Hilbert(Hilbert) involves some additional difficulties- specifically, the recursion always adds additional line segments, and the spacefilling path only begins and ends at 0 and 1 in the limit (that is, iteration n goes between (1+I)/2^n and 1-(1-I)/2^n). The source image we used for Hilbert ( http://en.wikipedia.org/wiki/File:Hilbert.jpg ) also doesn’t have as many hard edges as the image for Peano, and there are many fine details, such as the hairs in his beard or his glasses, some of which aren’t very clear in the final picture.

This means that not only did we have to modify the above algorithm to recurse on points, but also that we had to come up with some way of connecting parts of the curve that were in two different recursion depths using only orthogonal lines. Our original idea was to connect the pieces using parts of the Hilbert curve itself, and while we did have a sort of proof of concept (screenshot at http://neilbickford.com/assets/hilbinterpolation.png ), it was incredibly finicky to work with, and I eventually gave it up. (It’s not impossible, though, and it’d be neat if someone came up with a version that actually followed the Hilbert curve.)

Once we got the code working, we soon found out that it would be necessary to go down to level 9 to get a good result. This immediately led to the discovery of the memoization bug in Mathematica, which Gosper covered in an earlier email. Once *that* was fixed, we got an image which is at least passable: http://neilbickford.com/assets/hilbinterpolation.png (2.62 MB) Notice how in areas where the original image was textured, such as Hilbert’s suit, almost all the lines are slanted.

However, you’ll notice that the final image does have straight lines. Okay, I’ll admit it: we cheated a bit. Bill suggested moving the vertices around until the lines were straight, so I wrote out the coordinates to a file (available at http://neilbickford.com/assets/hilblines.zip ), then wrote a C#/XNA program to nudge everything around until it looked about right. Once that was finished, the program saved the image out to a file, which was what Bill posted at the top of this thread.

It should be noted that there was a lot of trial and error involved in finding the correct parameters to get a good image- for example, the rendering program would sometimes round off the corners too much and actually reduce the number of iterations in the curve, or wind up with intersecting edges. Typical failures looked like Ulysses S. Grant with a hat; at worst, Bill compared the resulting image to Freddy Krueger.

One interesting result was that a black background worked best; not only did it provide high contrast with the edge of Hilbert’s face, but it also set the minimum recursion level to a high enough value that smaller details are visible.

Lastly, some things about the final picture could be improved. I didn’t let the rendering program run for long enough, and so intersecting and even slightly slanted lines can still be seen, although they’re not very visible.”

The method of moving the vertices around has since been replaced with another method, specifically:

Given the two endpoints, p0 and p3:

– If the segment in question is pointing more vertically than horizontally (i.e. (p3-p0).{0,1}/||p3-p0|| > 1/sqrt(2)), insert the points p1 = {p0.x, (p0.y+p3.y)/2} and p2 = {p3.x, (p0.y+p3.y)/2}.
– Otherwise, do the same thing, but with p1 = {(p0.x+p3.x)/2, p0.y} and p2 = {(p0.x+p3.x)/2, p3.y}.

Here v.x and v.y represent the x and y components of v, respectively (or, if you prefer, x = {1,0} and y = {0,1} and the period is a dot product).

This certainly isn’t new or novel, but it works in the case of connecting parts of multi-level Hilbert curves.
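For what it’s worth, here’s a small Python sketch of that rule (I take the absolute value of the vertical component so one function handles segments pointing either way; the function name is mine):

```python
import math

def z_connector(p0, p3):
    """Insert two points between p0 and p3 so that the connecting path
    uses only orthogonal (axis-aligned) segments, forming a Z shape."""
    dx, dy = p3[0] - p0[0], p3[1] - p0[1]
    length = math.hypot(dx, dy)
    if abs(dy) / length > 1 / math.sqrt(2):      # more vertical than horizontal
        mid_y = (p0[1] + p3[1]) / 2
        p1, p2 = (p0[0], mid_y), (p3[0], mid_y)
    else:
        mid_x = (p0[0] + p3[0]) / 2
        p1, p2 = (mid_x, p0[1]), (mid_x, p3[1])
    return [p0, p1, p2, p3]

# A mostly-vertical segment becomes three orthogonal pieces:
print(z_connector((0, 0), (1, 4)))
# -> [(0, 0), (0, 2.0), (1, 2.0), (1, 4)]
```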

Julian Ziegler-Hunts sent me a way of interpolating between levels which actually follows the Hilbert curve by spiraling in towards a conveniently located point at the intersection of the two curves, and then spiraling outwards until the quadtree cell has been exited. His current implementation works by separating the level-variance problem into three subcases (connecting two sides of a square by 90, 180, or 270-degree turns) and then using a simple geometric construction for each of those. Unfortunately, the code’s a bit long, but hopefully the reader can figure out the form from this brief description.


First, however, it helps to simplify the problem a bit: Throughout this post, except when specified otherwise, we’ll be referring to a two-dimensional, square array of heights, called a height map. Typically, this will represent a region of land, sampled at a constant interval in both the x and y directions, called the sampling rate. While this approach does have some deficiencies- for example, it is incapable of making caves, and it will look very strange if projected onto a sphere- there are ways of overcoming these, and in general it usually shortens the description of each method by a lot.

Anyways, the first algorithm you might think of would be to assign each cell in the height map to be a random value from, say, -1 to 1. It should be obvious to most readers, though, that this is a horrible algorithm and does not produce realistic results. It produces mountains and valleys, but the density of these features depends on the sampling rate of the map, and if the map was meant to represent a small area, such as a 1m by 1m square, it would look more like a spike pit than an actual piece of land.

A basic feature of landscapes, at least in mountainous regions, is that they go up, they go down, and it’s a bit rough in between. These qualities can be simulated using Brownian noise (a.k.a. “1/f^2” noise, a “random walk”, or a “drunkard’s walk”). Essentially, a random walk is created in 1 dimension by starting from 0, then repeatedly adding a random number (between -1 and 1) to a running total. In other words, a random walk is created by summing 1D noise. While this does produce a good profile of a mountain range, there are two problems: there’s no elegant way to extend it to two dimensions, and it, much like the random-number approach, depends on the sampling rate of the map.
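Here’s a minimal Python sketch of such a walk (names mine):

```python
import random

def random_walk(n, seed=0):
    """1-D Brownian profile: a running sum of random steps in [-1, 1]."""
    rng = random.Random(seed)
    heights, h = [], 0.0
    for _ in range(n):
        h += rng.uniform(-1.0, 1.0)   # each sample differs from the last
        heights.append(h)             # by at most 1, so the profile is
    return heights                    # rough but connected

profile = random_walk(512)
```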

A further feature of mountain ranges is that they, to a certain extent, are self-similar. For example, one could view Mount Everest as a sort of “sub-mountain” of the land around it, and various localized peaks on Everest as “sub-sub-mountains”. With this in mind, it is possible to modify the algorithm for producing Brownian motion to display fractal behavior. Start out with two points, forming a line segment. Then, move the midpoint of each line segment up or down a random amount, forming two more lines per line segment. If you repeat this second step, having the random amount decrease exponentially with the number of iterations, you get a result which almost looks plausible.
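A Python sketch of this 1-D midpoint-displacement scheme (the `roughness` parameter, and the names generally, are my own choices, not from any particular source):

```python
import random

def midpoint_displacement(iterations, roughness=0.5, seed=0):
    """1-D fractal Brownian profile: repeatedly displace every segment
    midpoint by a random offset that shrinks exponentially per iteration."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]                # the initial line segment
    scale = 1.0
    for _ in range(iterations):
        new = []
        for a, b in zip(heights, heights[1:]):
            new.append(a)
            new.append((a + b) / 2 + rng.uniform(-scale, scale))
        new.append(heights[-1])
        heights = new
        scale *= roughness              # offsets decrease exponentially
    return heights

profile = midpoint_displacement(8)      # 2^8 + 1 = 257 samples
```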

Not only is fractal Brownian motion invariant of the sampling rate- given a fraction of the form p/(2^q), it is possible to deterministically evaluate the height function at that point, as long as you store or can recreate the random numbers generated- it can be extended to two and higher dimensions as well, although the generalization’s not the most obvious thing.

I’ll just describe the algorithm for fBm in two dimensions, also known as the Diamond-Square Algorithm, then show how to extend it to three or more. First of all, notice that the 1D algorithm for fBm could be framed as an interpolation scheme: first start with an array of length 2, zoom in and fill in the missing point (with a random offset) to get an array of length 3, then 5, 9, 17, 33, 65, … = 2^k+1. The Diamond-Square algorithm works in a similar way: starting from an array of size 2 by 2, we zoom in (enlarging each 2×2 square to 3×3) and fill in the missing 5 points.

I’ve left out an important part in this description: the order in which you fill in the missing points matters. If you filled in the points on the edges of the subsquares first, each would have only 2 known adjacent points to average from, rather than at least 3. The correct approach is to evaluate the center points of the subsquares first, by averaging the 4 *diagonally* adjacent points and adding a random amount, and then to determine the heights at the edges of the subsquares from the 4 (3 if on the edge of the height map) adjacent points. While this can sometimes be surprisingly tricky to implement, it generates landscapes of almost professional quality.
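To make the step order concrete, here’s a toy Python implementation of Diamond-Square as described above (parameter names mine; not taken from any existing program):

```python
import random

def diamond_square(n, roughness=0.5, seed=0):
    """Diamond-Square on a (2^n + 1) x (2^n + 1) grid.
    Diamond step: each square's center = mean of its 4 corners + offset.
    Square step: each edge midpoint = mean of its 3 or 4 neighbors + offset."""
    rng = random.Random(seed)
    size = (1 << n) + 1
    h = [[0.0] * size for _ in range(size)]
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centers of the current squares.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints, averaging the adjacent known points.
        for y in range(0, size, half):
            x0 = half if (y // half) % 2 == 0 else 0
            for x in range(x0, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step = half
        scale *= roughness   # the random amount decreases exponentially
    return h

terrain = diamond_square(5)   # a 33 x 33 height map
```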

I say “almost” here because, as with most terrain generation methods, there are problems with it. Perhaps the most evident of these is that you can see “creases” on the final model where early subdivisions happened, which makes it seem artificial. Additionally, real landscapes simply don’t have infinite fractal detail. Natural processes, such as erosion, tend to cause land to not be self-similar at small scales. If algorithms such as the Diamond-Square method are run for too long, the result will begin to look like a strangely shaped Gothic cathedral, or the side of a buckled-in hull of a ship- not a map of a mountain range. I didn’t include that in the animation above, but here’s a picture of when the Diamond-Square method has run for too long:

The generalization of fBm to higher dimensions is similar to the generalization of fBm to two dimensions: Start with the center of the (hyper)cube, and work your way out to the edges. For example, in 3D you evaluate the centers of the cubes, then the face centers, then the edges.

The other widely-used method for generating random fractal landscapes is to use a sort of “smooth noise” developed by Ken Perlin sometime around 1983, after working on the original Tron movie. While it does rely on an interpolation function, Perlin noise adds separate functions together to create the final result. The basic idea is that mountains can be thought of as a waveform, composed of a large, low frequency wave, a smaller, higher frequency wave, an even smaller, even higher frequency wave, and so forth. To create a wave in Perlin noise, we first create a list of random numbers, then create an (often piecewise) function which interpolates between these numbers. There are many interpolation functions that you can use; basically any of the ones on Paul Bourke’s “Interpolation methods” page, except linear interpolation, should work just fine.

Once n interpolation functions f_1, …, f_n have been defined, 1D Perlin noise is just f_1(qx)/p + f_2(q^2 x)/p^2 + … + f_n(q^n x)/p^n, where p and q are typically both greater than 1.
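Here’s that sum as code, with cosine interpolation and a hash-based lattice of random values standing in for the list of random numbers (all names are mine):

```python
import math
import random

def lattice(i, k, seed=0):
    """Pseudorandom value in [-1, 1] at integer lattice point k of octave i."""
    return random.Random(hash((seed, i, k))).uniform(-1.0, 1.0)

def noise_1d(x, octaves=6, p=2.0, q=2.0):
    """Sum of n interpolation functions: octave i cosine-interpolates
    random lattice values at frequency q^i, with amplitude 1/p^i."""
    total = 0.0
    for i in range(octaves):
        xi = x * q ** i
        k = math.floor(xi)
        t = (1.0 - math.cos(math.pi * (xi - k))) / 2.0  # cosine interpolation
        total += (lattice(i, k) * (1.0 - t) + lattice(i, k + 1) * t) / p ** i
    return total
```

Since every term is bounded by 1/p^i, the whole sum stays bounded (by 2 here) no matter how many octaves you add.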

Interestingly, a special case of Perlin noise was discovered by Weierstrass in 1872, in a completely different context. Suppose that the random numbers generated were 1, -1, 1, -1… for all i, and that cosine interpolation was being used, so that each f_i(x) = cos(πx). Then we get Weierstrass’ function, W(x) = cos(qπx)/p + cos(q^2 πx)/p^2 + cos(q^3 πx)/p^3 + …, which is nowhere differentiable and which, not coincidentally, looks a bit like a mountain.

Perlin noise can be generalized to 2D and higher dimensions by simply modifying the interpolation function to first interpolate over the x axis, then over the y axis, and so forth. For example, in 2D, to interpolate over a rectangle with corner heights a, b, c, and d, first interpolate between a and b along x, then between c and d, and finally between the two results along y. (Another way is to start with an image of random noise, repeatedly zoom into the upper-left quadrant (blurring the image as a result), then add all the resulting images together, with the noisier ones contributing exponentially less to the final sum.)
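As a sketch, the 2D interpolation step looks like this in Python (using a smoothstep ease curve as a stand-in for whichever interpolation function you pick):

```python
def smoothstep(t):
    """Ease curve used in place of linear interpolation."""
    return t * t * (3 - 2 * t)

def interp2(a, b, c, d, tx, ty):
    """Interpolate over a rectangle with corner heights a (top-left),
    b (top-right), c (bottom-left), d (bottom-right): first along x on
    each horizontal edge, then along y between the two results."""
    sx, sy = smoothstep(tx), smoothstep(ty)
    top = a + (b - a) * sx
    bottom = c + (d - c) * sx
    return top + (bottom - top) * sy
```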

As I mentioned in the second paragraph, way above at the top of the post, the height map simplification is incapable of making caves or overhangs, because height maps are defined to only store the amount of terrain that sits directly above each point- not where any vacancies might be. Clearly, this is a problem for games such as Minecraft, where overhangs are practically a part of the game mechanic. This problem is fixed by using a 3D height map instead, where positive values might mean that there is land at that particular location, and negative values mean there is just air there. However, simply populating a 3D height map with 3D Perlin noise won’t work- instead of getting a landscape with mountains and valleys, you’ll get a bizarre structure with nothing much which could really be called a mountain or a valley, and with entire chunks of land just hanging in the air:

In short, Bad Things Happen. Lots of work goes into making 3D noise behave properly, and I won’t go fully into it since it could take up a whole other blog post. For Minecraft in particular, Notch, the former developer, posted some information about it on his blog.

Can Perlin noise be used in other ways than terrain generation? To quote Don Knuth, “Yes”. Whenever a source of noise which has some continuity to it is needed, animated or otherwise, Perlin noise is always an option. 1D Perlin noise can be used for creating inexact lines, 2D and 3D Perlin noise can be used for artificial textures, such as marble or wood or bumps, 3D noise can be used to create static smoke or clouds, 4D noise can animate it, etc. It’s used in most CGI software, and Perlin himself won an Academy Award for it. I should mention, though, that the computation time of Perlin noise grows exponentially with the number of dimensions. This led to the development of Simplex noise, which is faster and alleviates some problems with Perlin noise. In short, it’s like Perlin noise, but better.

In addition to the fractal terrain generation methods listed above, there are many which rely on completely different approaches, and which may or may not be suitable for computer implementation. One objection to fractal terrain is that it is fractal- specifically, there’s no reason why fractal landscapes happen to approximate the authentic ones so well. Perhaps the most realistic approach to terrain generation is just to simulate the system of tectonic plates directly, and see what forms it creates. This has been tried many times (I’ve found examples dating back to at least 1996), with levels of computation time and success varying over many magnitudes. Just this year, Lauri Viitanen published a thesis describing how to simulate such a system, and the video published with it seems to show that the algorithm works. However, there are a few problems with it: It uses the Euler method to simulate the system, which is prone to “blowing up” or otherwise malfunctioning; the plates have to be broken up and given new velocities every few hundred steps, when in reality the plates are continuously driven; and the algorithm has to start from a height map, even though new land is automatically created. (Additionally, I’ve tried several times to translate it into C#, but have failed every time- I either get a boring result, or a clear bug, such as bouncing or teleporting continents.)

The analogy of mountains to waveforms can be transformed into a function another way: Instead of having the amplitude of each wave scale by p^(-f), where f is the frequency, we could modify it to use f^(-p) instead. Paul Bourke’s implementation goes like this: We populate an array with random numbers between -1 and 1, take the discrete Fourier transform (converting from (amplitude, time) to (amplitude, frequency)), scale the amplitude at each frequency f by f^(-p), then take the inverse discrete Fourier transform to convert back to (amplitude, time). The result will be a terrain which is about as good as Perlin noise, but has one additional property: Because the discrete Fourier transform assumes that the function is periodic, the resulting terrain can be tessellated smoothly. Additionally, this approach allows for a much easier generalization to higher dimensions than either Perlin noise or fBm: To get an nD landscape, just use an nD array and an nD Fourier transform!
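Here’s a sketch of that recipe using NumPy’s FFT (parameter names mine; this is one plausible reading of Bourke’s method, not his exact code):

```python
import numpy as np

def spectral_terrain(n, power=2.0, seed=0):
    """Fourier-synthesis terrain: filter white noise so the amplitude at
    each frequency f falls off as f^(-power), then transform back.
    Periodic by construction, so the result tiles seamlessly."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, (n, n))
    spectrum = np.fft.fft2(noise)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf              # inf ** -power == 0, zeroing the DC term
    spectrum *= f ** -power
    return np.fft.ifft2(spectrum).real

terrain = spectral_terrain(128)
```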

It’s very hard to map rectangles onto spheres without any deformation- in fact, it’s impossible. However, there are map generation algorithms which can be implemented on a sphere, such as Hugo Elias’ spherical landscapes algorithm: Repeatedly slice a sphere with a plane in random places in random directions, always moving the part of the sphere in front of the plane out by a small amount, and the part of the sphere behind the plane in by the same amount. There aren’t really any problems with this method, other than that a point’s height can’t be computed without running thousands of slicing steps.
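A minimal Python sketch of the slicing idea, tracking heights at a set of sample points on the unit sphere (names and parameters mine):

```python
import math
import random

def random_unit_vector(rng):
    """Uniform random point on the unit sphere."""
    z = rng.uniform(-1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def sphere_terrain(n_points, n_slices, delta=0.01, seed=0):
    """Elias-style spherical landscape: slice the sphere with random
    planes, pushing points in front of each plane out by delta and
    points behind it in by delta."""
    rng = random.Random(seed)
    points = [random_unit_vector(rng) for _ in range(n_points)]
    heights = [0.0] * n_points
    for _ in range(n_slices):
        nx, ny, nz = random_unit_vector(rng)   # plane normal
        d = rng.uniform(-1.0, 1.0)             # plane offset from the center
        for i, (px, py, pz) in enumerate(points):
            side = px * nx + py * ny + pz * nz - d
            heights[i] += delta if side > 0 else -delta
    return points, heights

points, heights = sphere_terrain(400, 1000)
```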

Lastly, there are some methods which have no reason to work at all. Bill Gosper, my mentor, has noticed that when certain curtains are hung in front of a window, the shadows caused by the windowsill create a silhouette which looks vaguely like a mountain range. Unfortunately, I don’t have a picture of this effect, but as soon as I find one I’ll post it here.

In conclusion, there are a wide range of algorithms available for creating fake landscapes, many of which are quite imaginative and yet work well.


In Borges' short story, the fictional Sinologist Stephen Albert describes Ts'ui Pen's "The Garden of Forking Paths" as essentially a "Choose Your Own Adventure" novel: the story branches based on choices the characters make. However, three things set it apart from the modern version. Firstly, the reader can see everything happening in all the stories at once: if there were two stories being told, the book would list a part of the first story, then the same part, chronologically, of the second story, then the next part of each of the two stories, and so forth. (This style of writing is sometimes referred to as "hypertext fiction".) Secondly, sometimes two stories merge. As Albert said to Dr. Tsun, in one path the protagonist came to his house as a friend; in another, as an enemy. In the book, an army marched through the mountains, and after acquiring a disdain for life, won the battle easily; alternatively, the same army passed through a palace in which a ball was being held, and because of this, won the battle the same way. Lastly, some things are inevitable: in life, this is death; in the story, it is the fact that Captain Richard Madden will capture the narrator.

To digress for now to the second topic: in quantum mechanics, the Many-Worlds hypothesis (first published by Hugh Everett in 1957, 16 years after Borges' story) is, in its simplest form, the idea that every time someone or something makes a choice, the universe branches into many other universes, in each of which a different one of the possible choices was made. The usual example is the Schrödinger's Cat thought experiment. Imagine a cat placed in a box with a device that flips a coin and measures whether it lands on heads or tails. If it lands on tails, the device kills the cat; otherwise, it does nothing. After the first coin flip, the universe can be thought of as splitting into two different universes: one in which the cat is dead, and another in which the cat is alive. By observing the state of the cat, the experimenter can see which universe he's in, but he can't suddenly switch from the "plot" of one universe to the plot of another.

Ts'ui Pen's story shares much with this theory: both involve branching due to choices and the idea of separate timelines, and in fact Stephen Albert claimed "The Garden of Forking Paths" was, like Many-Worlds, "an incomplete, but not false, image of the universe". The two ideas differ only in that Pen's includes universes merging and inevitable events, while in the Many-Worlds hypothesis universes can't merge, and there are somewhat extreme ways to keep otherwise inevitable events from happening. (See [2], pp. 26-27, 32-33, 54-57.)

As it turns out, while the style of writing in Ts'ui Pen's fictional book certainly was original, the most important ideas appeared more than half a century before Borges' story, in Lewis Carroll's little-known novel *Sylvie and Bruno*, published in 1889. At the beginning of Chapter 23, the narrator witnesses a tragic event: a box has been left in the street, and a bicyclist crashes into it, is flung over the handlebars, and is taken, badly injured, to the hospital. Using a magical watch, the narrator goes back to before the crash and removes the box, so the bicyclist doesn't crash. However, at precisely the time at which he had wound the watch backwards, the scene instantaneously switches to the bicyclist in the hospital, with exactly the same injuries as when the narrator didn't remove the box.

While this might initially seem strange, it can be interpreted as follows: at, say, 1:00, the universe bifurcates into two different universes: A, in which the narrator does not remove the box; and B, in which he does. Initially, we see the narrator go down the timeline of universe A, but at 1:30 (when the bicyclist is in the hospital of universe A) he goes back in time to just before the bifurcation point. Here, he chooses to go down the path of universe B, and for a while he remains in that timeline, until 1:30, at which point the narrator from universe A is returned to universe A. (It is unclear what the state of narrator B is during this timespan: are there two copies of the narrator, or is narrator A seeing the world through narrator B's eyes?) In this way, Carroll not only anticipated the idea behind the Many-Worlds hypothesis, but also presented how time travel might work in such a system. Carroll's model shares branching based on choices with Borges' story and the Many-Worlds hypothesis, and shares merging and inevitability with Borges' model.

As with anything, a few objections could be made: Carroll's model switches back abruptly, as if a scene from a film had been replaced with another scene that just happened to match at the beginning, which seems unnatural; Ts'ui Pen's novel shows every universe happening at once, which the inhabitants of our universe certainly don't see; the description of universes 'bifurcating' is just one way to think of the evolution of the wave function; and so forth. However, one can definitely conclude from the evidence given that Borges' *Garden of Forking Paths*, Carroll's Magic Watch tale, and the Many-Worlds hypothesis resemble each other, and perhaps one even inspired another.

[0] Jorge Luis Borges, "The Garden of Forking Paths", from Collected Fictions, translated by Andrew Hurley. Published by the Penguin Group, 1998.

[1] Lewis Carroll, "Sylvie and Bruno". From The Collected Works of Lewis Carroll, 2005 Barnes and Noble edition. Illustrated by Harry Furniss.

[2] Jason Shiga, “Meanwhile”, Second Edition. Published by Amulet Books, 2010.


(Videography due to Peter Bickford; there was a cameraman recording all the talks there, but I haven't heard from him recently.)

Transcript (sentences in brackets represent notes, i.e., things I wanted to say in the talk but didn't have time for):

Hello, I’m Neil Bickford, and this is my talk on “Reversing the Game of Life for Fun and Profit”.

Now, the topic of reversing the Game of Life has been mentioned in a few talks here before, and they honestly almost gave me a heart attack.

I thought I was too young for that.

Anyways, for those who don’t know, the Game of Life is a simulation that you can play yourself on ordinary 2-dimensional graph paper. It goes like this: First, draw a pattern, then follow these rules:

- For each cell, consider its 8 surrounding neighbors.
- If the current cell is alive and more than 3 of those neighbors are alive, the cell dies due to overcrowding.
- If the cell is alive and less than 2 of those neighbors are alive, then the cell dies due to loneliness.
- If the cell is dead and exactly 3 of those neighbors are alive, then the cell is spontaneously born. (I don’t know why).
- Otherwise, the cell stays the same.
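[A note for readers: the rules above translate almost directly into code. Here's a minimal Python sketch of my own, representing a pattern as a set of live-cell coordinates:]

```python
from collections import Counter

def life_step(live):
    """One generation of the Game of Life; `live` is a set of (x, y) cells."""
    # Count the live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker (three cells in a row) oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```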
Anyways, around the same time this game was proposed, somebody asked "Can we reverse the Game of Life?" That is, going from that glider on the right, can we essentially move it backwards and get the glider on the left?

Well, as it turns out, we can, but it's fairly hard. For example, suppose we start with a simple pattern that anyone who has played video games should recognize [It's a Space Invader], and we try to find a particular predecessor of it. Well, here's one of the actual predecessors you'd have to look through. Here are a few more [this repeats a few times, the slides zooming out of a matrix of Space Invader predecessors], and if we zoom out a bit, here are all of them in a bounding box of radius 1. There are 2,680 of them, and it's quite hard to find all of them.

That’s not my exchange gift. [It really wasn’t- this was my exchange gift. The purpose of these slides was to show that finding predecessors of GoL patterns is really hard for computers, and also to show that puzzles involving reversing GoL patterns, such as Yossi Elran’s Retrolife, can be really really hard!]

Anyways, suppose we rephrase the question and ask instead: "Can *computers* reverse the Game of Life?" As it turns out, they can, but certainly not using the brute-force method!

Suppose we go back to this pattern [Space invader again]. We’d have to search through this number of cells [possibilities, whoops] if we just tried to brute-force all the possibilities, which Mathematica pronounces only as “A very large number”. [The number is 2^((11+2)(8+2)) = 1361129467683753853853498429727072845824]

How large is this number?

Here's a comparison of three heavenly objects. The one on the right, right there, is the sphere of the size you would need if you printed out each of these test cases on a 1-inch by 1-inch piece of standard office paper.

The thing in the center is the Sun.

And also there is a single pixel right there which is the Earth. (Oversized actually, this screen isn’t large enough) [The sun in the picture shown was 20 pixels across, meaning that the Earth would have to fit in 1/5 x 1/5 of a pixel]

Anyways, so there have to be better methods, and luckily, there are. The first of them was published around 1970 by a guy named Don Woods. It goes like this:

- Take a pattern [rectangular] and split it up into cells.
- Then, for each of those cells, find predecessors of [it] in a 3×3 square.
- Then, start joining the cells together one after another.
- Once you’ve done that [for all the cells] you should have a list of the possible predecessors.
[Technically, it’s actually adding on cells to the parts of the pattern you already have, not merging a bunch of squares together. For a very verbose description on how exactly to do this, see the source code]
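[To give a feel for step 2, here's a hypothetical sketch (my illustration, not Woods' actual code) that enumerates the 3×3 predecessors of a single cell by brute force over all 512 possibilities:]

```python
from itertools import product

def next_center(block):
    """Next state of the center cell of a 3x3 block (row-major 9-tuple of 0/1)."""
    center = block[4]
    neighbors = sum(block) - center
    # Life rules applied to the center: birth on 3, survival on 2 or 3.
    return 1 if neighbors == 3 or (center == 1 and neighbors == 2) else 0

def cell_predecessors(target):
    """All 3x3 blocks whose center evolves into `target`."""
    return [b for b in product((0, 1), repeat=9) if next_center(b) == target]

alive = cell_predecessors(1)   # blocks producing a live center
dead = cell_predecessors(0)    # blocks producing a dead center
assert len(alive) + len(dead) == 512
```

[The joining step then keeps only pairs of these blocks that agree on their overlapping cells, which is where the real bookkeeping lives.]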

Now, this [method] is good for many patterns but bad for others, specifically big patterns. I won't go into the reasons why, though, since I don't have enough time. [Actually, large patterns whose top rows have many, many predecessors. Woods used his method to verify that Roger Banks' Garden of Eden was actually a Garden of Eden (and an orphan too!), and in fact his method is great for verifying Gardens of Eden, because Gardens of Eden usually don't have very many predecessors for most rectangular subsets of the pattern.] But it led another person, named J. Hardouin-Duparc, to create a second algorithm, which I call Duparc's method. It goes like this:

- Take a pattern, and then split it up into rows.
- Then, find predecessors to each of those rows using any method you like. [My program uses a variant of Woods’ algorithm to do this, but the binary method also works]
- Then, merge all the rows together and you should have all the possible predecessors.
[This is a complete lie: this method wasn't actually proposed by Duparc! What this actually is is what I *thought* Duparc's method was after reading the Wikipedia article on Gardens of Eden. Duparc's actual algorithm as described in his paper involves starting with predecessors of the first row, then finding predecessors of the second row to tack on, then the third row, and so on. While Duparc's original algorithm can be used for almost memory-free DFS search, the merging variation as implemented is usually much faster.] This is only good for certain patterns [tall, skinny patterns, because of the atavising by rows; Duparc used his method to find a 6×119 and a 6×122 Garden of Eden], so I don't recommend using it, but the merge operation in step 3 is particularly useful.

A few things about merging:

- It requires no Life function evaluations, which are actually rather slow,
- It’s fairly fast as long as you keep the sizes of the stacks you’re merging together about the same
and about 2 months ago I was wondering “Could you possibly make a predecessorifier [ataviser] using only merge operations?” [well, mostly merge operations!] Well, you can, but it’s a bit tricky.

To use this algorithm, which I call “QuadWoods” [well, it’s very much like Woods’ algorithm but performed on a quadtree] , you start with the pattern, then:

- Split it up into four squares,
- Split those up into four squares, and so on until each of those squares has become a single cell.
- Then, find predecessors to each of those cells and start merging things together in clumps of 4.
Once you’ve done that (and written about 300 lines of computer code [actually 800, with base routines]) you should have a list of all the possible predecessors. Now, this [algorithm] is quite fast, but anyone should be able to see that there’s a problem: it doesn’t work for a general rectangle! There is a solution, but it’s fairly interesting because it’s related to square packing: You have to split up the rectangle into squares of certain [2^n] sizes, and then merge all the parts together!

Since I think I’m running out of time, here’s some misleading statistics. [Why are these statistics misleading? Well, a sample size of 2 certainly shouldn’t be used for any published results! I ran a fairly long testing program to see how fast each of the algorithms were, testing 10 random patterns for each algorithm on a pattern size from 3×3 to 10×10 with a 10 minute time limit. It still took 3 days though! Here are the results, in ms:

Woods’ Method:

24     38     30     71     109    321    206    1453
42     88     122    195    803    1332   1833   4598
215    268    749    2179   1630   69021  66163  43663
851    375    2599   1352   12006  69064  10060  103035
1681   9877   9611   13868  23431  154755 122122 149701
4450   28062  40948  135354 128119 215935 211178 357205
30752  51053  187093 187062 270526 474005 424836 490232
189641 365318 210208 195629 389105 499821 549801 471189

The version of Duparc's method presented:

35  47   48   72    96     136    750        291
40  136  188  212   483    521    617        920
71  197  1083 1832  1441   9000   10684      6614
168 272  1284 3877  15458  25417  13298      83154
116 591  1785 7818  64424  142969 214087     138432
184 489  2239 22765 88580  595849 530842     574550
191 573  2846 16112 128053 535570 failed all failed all
961 3187 3740 17588 92127  518550 failed all failed all

QuadWoods:

46     55   74     107   291    451    1573   4192
45     32   72     78    272    338    1246   1448
90     81   410    702   1259   8891   27218  16237
184    73   710    413   2345   13550  13228  38855
262    426  981    1200  7965   12824  57472  76472
884    216  1482   10931 42164  51733  136175 164764
7131   1381 10606  14601 335295 208159 390242 493339
265880 6388 104134 60871 483578 170099 454376 509703

For easier reading, here's a color-coded array showing which algorithm was found to be best in each case from 3×3 to 10×10. (Red is the presented variation of Duparc's Method, Green is Woods' Method, Blue is Duparc's actual method, Purple is a merging variation of the faster version of Duparc's Method, Orange is QuadWoods without square packing, and Yellow is QuadWoods with square packing.)

] As you can see, in each case [invader and logo] QuadWoods was about 10 times faster than the other algorithms. Thank you very much.

First of all, while I said “Predecessorifier” in the talk, “Ataviser” seems to be the accepted word, coming from “Atavism”, which the online Merriam-Webster dictionary defines as “recurrence of or reversion to a past style, manner, outlook, approach, or activity”.

Additionally, the not-so-misleading statistics listed in the square brackets in the end may still be misleading because of a reason Richard Schroeppel kindly pointed out the day before I got on the airplane to Atlanta: Computers don’t like jumping around memory accessing completely random places, and programmers on early computers tried to prevent this from happening in order for their programs to run faster. My current implementation of these algorithms is almost entirely unoptimized, since I only got everything done a few weeks before G4GX, and is coded in C#, a language that, while it handles garbage collection automatically, easily gobbles up memory faster than Cookie Monster eats cookies. Perhaps if these algorithms were carefully coded in a language such as C++, Woods’ method, as well as Duparc’s, would be much faster.

Also, the algorithms presented work on a pattern, pattern being defined as a rectangle with a state (on or off) for each of the cells inside. While defining a pattern as a shape, possibly with holes, with a state for each of the cells inside can lead to some interesting theorems on Gardens of Eden and Orphans (that is, patterns that have no predecessor even when surrounded by additional on cells), it’s much easier to write programs that atavise rectangular patterns.

Related to this is a question I have using the rectangular definition of a pattern: are all Gardens of Eden Orphans? That is, are there any Gardens of Eden which are not orphan patterns? It's actually much easier to check whether a pattern is an Orphan, as you just have to look for a pattern fitting within an n+2 by m+2 rectangle (if the original pattern is n by m cells) which generates the wanted pattern in the center, possibly with some extra stuff on the sides. If you can't find one using an ataviser, then the pattern is an Orphan. To test whether a pattern is a Garden of Eden, though, you have to either show that the pattern is an Orphan (from which it follows that the pattern is also a GoE), or show that there is no pattern at all which generates the wanted pattern. This can go on forever, because if there is no predecessor in an n+2 by m+2 rectangle, there *might* be one in an n+4 by m+4 rectangle, or n+6 by m+6, etc. Bram Cohen claims that if a pattern has an orphan predecessor in an n+2 by m+2 rectangle, then you can add ON cells to the orphan predecessor to make a proper predecessor (i.e., one that turns into the wanted pattern with no extra on cells outside the bounding box of the wanted pattern). I have no idea how this would be done, but it would solve the are-GoEs-Orphans problem.

Lastly, here's a zipped archive containing the program source as well as a compiled 64-bit version, with a GUI! Documentation to be added shortly.

Thanks to Don Woods for information on atavising algorithms, and Mary Deignan for translation help with Duparc’s paper.


*Additional note: 'simple simple simple' is referred to in this post as simple^3.*

*Warning: This article contains original research! (It’s also more papery than most of the other articles on this site)*

In the previous post I mentioned several methods that could be used to speed up the finding and exhaustive searching of sliding block puzzles, as well as a candidate for the ‘hardest’ one-piece-goal diagonal-traverse 4×4 sliding block puzzle. I am pleased to say that the various bugs have been worked out of the sliding block puzzle searcher, and that 132 has been confirmed to be the maximum number of moves for a simple simple simple 4×4 sliding block puzzle with no internal walls!

(that’s not a lot of qualifiers at all)

As the reader may expect, this blog post will report the main results from the 4×4 search. Additionally, I’ll go over precisely how the algorithms very briefly mentioned in the previous post work, and some estimated running times for various methods. Following that, various metrics for determining the difficulty of a SBP will be discussed, and a new metric will be introduced which, while it is slow and tricky to implement, is expected to give results actually corresponding to the difficulty as measured by a human.

Running SBPSearcher on all the 4x4s takes about 3 hours (processing 12295564 puzzles), which was quite useful for fixing hidden bugs in the search algorithm. Here are the best simple^3 4×4 n-piece SBPs, where n goes from 1 to 15:

And for comparison, the move counts vs. the previously reported move counts:

1 4 9 19 36 51 62 89 132 81 64 73 61 25 21
1 4 9 24 36 52 68 90 132 81 64 73 61 25 21

Notice that all the entries in the top row of the table are less than or equal to their respective entries in the bottom row, some (such as in the case of p=7 or p=4) being much less. This is because the previous search (run, in fact, as a subfeature of the sliding block puzzle evolver) started with all possible positions and defined the goal piece to be the first piece encountered going left to right, top to bottom, across the board. As such, the search included both all the 4×4 simple^3 sliding block puzzles as well as a peculiar subclass of sliding block puzzles where the upper-left square is empty but the goal piece is often a single square away! This accounts for the 6-piece and 8-piece cases (in which the first move is to move the goal piece to the upper-left), but what about the other two cases?

For the 4-piece case, the originally reported puzzle (see here for the whole list) wasn’t simple^3, and it isn’t possible to convert it into a simple^3 SBP by shifting blocks in less than 5 moves. Interestingly, the new ‘best’ puzzle for 4 pieces is just James Stephens’ Simplicity with a few blocks shifted, a different goal piece, and a different goal position! (There could be copyright issues, so unless stated, the puzzles in this article are public domain except for the one in the upper-right corner of this image) Additionally, the 7-piece 68-move puzzle in the previous article is impossible to solve! The upper-left triomino should be flipped vertically in place. I expect this to have been a typing error, but the question still stands: Why is there a 6-move difference?

As mentioned before, the actual statement of what constitutes a simple simple simple puzzle is as follows: "A sliding block puzzle where the piece to be moved to the goal is in the upper-left corner, and the goal is to move the goal piece to the lower-right corner of the board". Unfortunately, there's some ambiguity as to when a piece is in the lower-right corner of the board: is it when the lower-right cell is taken up by a cell of the goal piece, or is it when the goal piece is as far to the lower-right as it can be? If we take the latter to be the definition, then additional ambiguities pop up when faced with certain problems, such as the following one, often encountered by SBPSearcher:

Because of problems like these, SBPSearcher uses the former definition, which means that puzzles where the goal piece takes the shape of an R aren’t processed. (In actuality, it’s SBPFinder that does this checking, when it checks if a puzzle is in the ‘justsolved’ state). If we say that the first definition is stricter than the second, then it could be said that SBPSearcher searched only through the “Strict simple simple simple 4×4 Sliding Block Puzzles”. While I don’t think that any of the results would change other than the p=7 case, it would probably be worth it to modify a version of SBPSearcher so that it works with non-strict simple simple simple sliding block puzzles.

A few last interesting things: The 13-piece case appears to be almost exactly the same as the Time puzzle, listed in Hordern’s book as D50-59! (See also its entry in Rob Stegmann’s collection) Additionally, the same split-a-piece-and-rearrange-the-pieces-gives-you-extra-moves effect is still present, if only because of the chosen metric.

There are only two 'neat' algorithms in the entire SBP collection of programs, one in SBPFinder and one in SBPSearcher. The first of them is used to find all possible sliding block puzzle 'justsolved' positions that fit in an NxM grid, and runs in approximately O(2*3^(N+M-2)*4^((N-1)*(M-1))) time. (Empirical guess; due to the nature of the algorithm, the calculation of the running time is probably very hairy.)

First of all, the grid is numbered like this:

0  1  3  6  10
2  4  7  11 15
5  8  12 16 19
9  13 17 20 22
14 18 21 23 24

where the numbers increase along anti-diagonals running from top-right to lower-left, moving to the next diagonal every time the path hits the left or bottom edge. (Technical note: for square grids, the formula x+y<N ? TriangleNumber(x+y+1) - x - 1 : N*M - TriangleNumber(N+M-x-y-1) + N - x - 1 should generate this array)
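The technical note's formula can be checked directly; a quick Python transcription (with TriangleNumber(n) = n(n+1)/2):

```python
def triangle(n):
    """TriangleNumber(n) = 1 + 2 + ... + n."""
    return n * (n + 1) // 2

def cell_number(x, y, n, m):
    """Anti-diagonal numbering of an n-by-m grid (used here with n = m)."""
    if x + y < n:
        return triangle(x + y + 1) - x - 1
    return n * m - triangle(n + m - x - y - 1) + n - x - 1

grid = [[cell_number(x, y, 5, 5) for x in range(5)] for y in range(5)]
# Reproduces the 5x5 array above, e.g. grid[0] == [0, 1, 3, 6, 10].
```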

Then, the algorithm starts at cell #0, and does the following:

Given a partially filled board: (at cell 0 this board is all holes)

- Take all the values of the cells from (cell_number-1) to the number of the cell to the left of me, and put that in list A.
- Remove all duplicates from A.
- If the following values do not exist in A, add them: 0 (aka a hole) and the smallest value between 1 and 4 inclusive that does not exist in A (if there is no smallest value between 1 and 4, don’t add anything except the 0)
- Run fillboard (that is, the current function), giving it cell_number+1 and the board given to *this* level of the function with the value of cell_number changed to n, for each value n in A.

However, if the board is completely filled (i.e., cell_number=25), check that the board has holes and that it is justsolved; if so, standardize the piece numbering and make sure that you haven't encountered the board before; and if it's new, sort it into the appropriate piece-number "box".

Once the fillboard algorithm finishes, you should have N*M-1 boxes with all the possible justsolved positions that can be made on an N*M grid! There are a number of other possible methods that do the same thing- all it basically needs to do is generate all possible ways that pieces fitting in an NxM grid can be placed in a grid of the same size.

For example, you could potentially go through all possible 5-colorings of the grid (4-colors and holes), and remove duplicates, but that would take O(5^(N*M)) time, which isn’t the most feasible option for even a 4×4 grid. You could also proceed in a way that would generate the results for the next number of pieces based on the results of the current number of pieces and all possible rotations and positions of a single piece in an NxM grid by seeing which pieces can be added to the results of the current stage, but that would take O(single_piece_results*all_justsolved_results). While that number may seem small, taking into account the actual numbers for a 4×4 grid (single_piece_results=11505 and all_justsolved_results=12295564) reveals the expected running time to be about the same as the slow 5-coloring method. However, it may be possible to speed up this method using various tricks of reducing which pieces need to be checked as to if they can be added. Lastly, one could go through all possibilities of *edges* separating pieces, and then figuring out which shapes are holes. The time for this would be somewhere between O(2^(3NM-N-M)) and O(2^(2NM-N-M)), the first being clearly infeasible and the second being much too plausible for a 4×4 grid.

In practice, the fillboard algorithm needs to check about 1.5 times the estimated number of boards to make sure it hasn’t found them before, resulting in about half a billion hash table lookups for a 4×4 grid.

The second algorithm, which SBPSearcher is almost completely composed of, is much simpler! Starting from a list of all justsolved puzzles (which can be generated by the fillboard algorithm), the following is run for each board in the list:

- Run a diameter search from the current puzzle to find which other positions in the current puzzle’s graph have the goal piece in the same position;
- Remove the results from step 1 from the list;
- Run another diameter search from all the positions from step 1 (i.e., consider all positions from step 1 to be 0 moves away from the start and work from there), and return the *last* position found where the goal piece is in the upper-left.

Step 2 is really where the speedup happens: because each puzzle has a graph of positions that can be reached from it, and some of these positions are also in the big list of puzzles to be processed, you can find the puzzle furthest away from any of the goal positions by just searching away from them. Then, because the entire group has been solved, you don't need to solve the group again for each of the other goal positions in it, and those can be removed from the list. For a 4×4 board, the search can be done in 3 hours, 27 minutes on one thread of a computer with a Core i7-2600 @ 3.4 GHz and a reasonable amount of memory. In total, the entire thing, finding puzzles and searching through all of the results, can be done in about 4 hours.

Of course, where there are algorithms, there are also problems that mess up the algorithms. For example, how would it be possible to modify SBPSearcher's algorithm to handle, say, simple simple puzzles? Or, is it possible to have the fillboard algorithm work with boards with internal walls or boards in weird shapes, or does it need to choose where the walls are? An interesting thing that would seem to suggest that the answer to the first question might be yes is that finding the pair of points furthest apart in a graph (which would be equivalent to finding the hardest compound SBP in a graph) requires only 2 diameter searches! Basically, you start with any point in the graph, then find the point furthest away from that, and let it be your new point. Then, find the point furthest away from your new point, and the two points, the one you've just found and the one before that, are the two points farthest away from each other. (See "Wennmacker's Gadget", page 98-99 and 7 in Ivan Moscovich's "The Monty Hall Problem and Other Puzzles")
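The two-search trick can be sketched in a few lines of Python (a hypothetical illustration of mine; one caveat is that the double sweep is only guaranteed to find the true farthest pair on trees, while on general graphs it gives a lower bound on the diameter):

```python
from collections import deque

def farthest_from(graph, start):
    """Breadth-first search; returns (a farthest node, its distance)."""
    dist = {start: 0}
    queue = deque([start])
    far, far_d = start, 0
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                if dist[w] > far_d:
                    far, far_d = w, dist[w]
                queue.append(w)
    return far, far_d

def farthest_pair(graph, any_node):
    """Two diameter searches: from anywhere out to A, then from A out to B."""
    a, _ = farthest_from(graph, any_node)
    b, d = farthest_from(graph, a)
    return a, b, d

# On the path 0-1-2-3-4, starting from the middle, the pair found is (0, 4):
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
assert farthest_pair(path, 2) == (0, 4, 4)
```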

Through the last 3 posts on sliding block puzzles, I have usually used the "Moves" metric for defining how hard a puzzle is. Just to be clear, an action in the Moves metric is defined as sliding a single piece to another location by a sequence of steps to the left, right, up and down, making sure not to pass through any other pieces or invoke any other quantum phenomena along the way. (The jury is out as to whether sliding at the speed of light is allowed.) While the majority of solvers use the Moves metric (my SBPSolver, Jimslide, KlotskiSolver, etc.), there are many other metrics for approximating the difficulty of a sliding block puzzle, such as the Steps metric and the not-widely-used sliding-line metric. The Steps metric is defined as just that: an action (or 'step') is sliding a single piece a single unit up, down, left, or right. The sliding-line metric is similar: an action is sliding a single piece any distance in a straight line up, down, left or right. So far as I know, only Analogbit's online solver and the earliest version of my SBPSolver use the Steps metric, and only Donald Knuth's "SLIDING" program has support for the sliding-line metric. (It also has support for all the kinds of metrics presented in this post except for the BB metrics!)

Additionally, each of the 3 metrics described above has another version which has the same constraints but can move multiple pieces at a time in the same direction(s). For example, a ‘supermove’ version of the Steps metric would allow you to slide any set of pieces one square in any one direction. (As far as I know, only Donald Knuth’s SLIDING program and Soft Qui Peut’s SBPSolver have support for any of the supermove metrics) In total, combining the supermove metrics with the normal metrics, there are 6 different metrics and thus 6 different ways to express the difficulty of a puzzle as a number. Note however, that a difficulty in one metric can’t be converted into another, which means that for completeness when presenting results you need to solve each sliding block puzzle 6 different ways! Even worse, the solving paths in different metrics need not be the same!

For example, in the left part of the image above, where the goal is to get the red piece to the lower-right corner, the Moves metric would report 1, and the red piece would go around the left side of the large block in the center. However, the Steps metric would report 5, by moving the yellow block left and then the red block down 4 times. Also, in the right picture both the Moves and Steps metrics would report ∞, because the green block can’t be moved without intersecting with the blue, and the blue block can’t be moved without intersecting with the green, but any of the Supermove metrics would report a finite number by moving both blocks at once!

Various other metrics can be proposed: some with other restrictions (you may not move a 1×1 next to a triomino, etc.), and some which, like the supermove metrics and the second puzzle above, actually change the way the pieces move, until you eventually get to the point where it’s hard to call the puzzle a sliding block puzzle anymore. (For example, Dries de Clerq’s “Flying Block Puzzles” can be written as a metric: “An action is defined as a move or rotation of one piece to another location. Pieces may pass through each other while moving, but only one piece may be moved at a time.”)

Suppose, however, that for now we’re purists and only allow the Steps, sliding-line, Moves, super-Steps, super-sliding-line, and super-Moves metrics. It can be seen, quite obviously in fact, that these metrics don’t in all cases show the actual difficulty of the puzzle being considered. For example, take a very large (say 128×128) grid, and add a 126×126 wall in the center. Fill the one-cell-wide moat that forms with 507 1×1 blocks, all separate pieces, and make the problem be to bring the block in the upper-left corner down to the lower-right corner. If my calculations are correct, the resulting problem should take 254*507+254=129,032 steps, sliding-line actions, and moves to solve, which would seem to indicate that this is a very hard puzzle indeed! However, any person who knows the first thing about sliding block puzzles should be able to solve it, assuming they can stay awake the full 17 hours it would take!
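A quick sanity check of that arithmetic (the formula is the one above; the rate of one step every half second is my own assumption for the 17-hour figure):

```python
moat_cells = 507                       # 1x1 pieces filling the moat
travel = 254                           # cells the marked block must travel
total = travel * moat_cells + travel   # the formula from the text
print(total)                           # 129032

hours = total * 0.5 / 3600             # assuming one step every half second
print(round(hours, 1))                 # ~17.9 hours of solving
```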

Because of this discouraging fact, that is, 6 competing standards, none of which is quite right, I would like to introduce a 7th, this one based on a theoretical simulation of a robot that knows the first thing about sliding block puzzles, but nothing else.

Nick Baxter and I have been working on a metric which should better approximate the difficulty of getting from one point to another in a position graph. The basic idea is that the difficulty of getting from node A to node B in a graph is about the same as the average difficulty of getting from node A to node B over all spanning trees of the graph. However, finding the difficulty of getting from node A to node B in a tree is nontrivial, or at least so it would seem at first glance.

Suppose you’re at the entrance of a maze, and the maze goes on for far enough and is tall enough such that you can only see the paths immediately leading off from where you are. If you know that the maze is a tree (i.e, it has no loops), then a reasonable method might be to choose a random pathway, and traverse that part of the maze. If you return from that part to the original node, then that part doesn’t contain the goal node and you can choose another random pathway to traverse, making sure of course not to go down the same paths you’ve gone down before. (Note that to be sure that the goal node isn’t in a part of the maze, you need to go down all the paths twice, to walk down a path and then back up the path). For now, we take the difficulty of the maze to be the average number of paths you’ll need to walk on to get to the goal node or decide that the maze has no goal(counting paths you walk down and up on as +2 to the difficulty). Because of the fact that if the node you’re in has no descendant nodes which are the goal node you’ll need to go down all of the paths leading from that node twice, the difficulty of the part of the maze tree branching off from a node A can be calculated as

sum(a_i + 2, i = 1 to n) (eq. 1)

where n is the number of subnodes, and a_i is the difficulty of the ith subnode. Also, if the node A *is* on the solution path between the start and end nodes, then the difficulty of A can be calculated as

a_n + 1 + (1/2) sum(a_i + 2, i = 1 to n-1) (eq. 2)

where a_n is assumed to be the difficulty of the subnode which leads to the goal. This basically states that, on average, you’ll have to go down half of the wrong subpaths plus the path leading to the goal. Because leaf nodes are assumed to have 0 difficulty, you can work up from the bottom of the maze, filling in difficulties of nodes as you go up the tree. After the difficulty of the root node has been calculated, the length of the path between the start and end nodes should be subtracted, to give labyrinths (mazes with only a single path) a BB difficulty of 0.

Perhaps surprisingly, it turns out that using this scheme, the difficulty of a tree with one goal node is always measured as V-1-m, where V is the number of nodes in the tree (and V-1 is the number of edges, though this is not true for general graphs) and m is the number of steps needed to get from the start node to the end node in the tree! Because of this, the difficulty of getting from one point to another in a graph under the BB metric is just the number of vertices, minus one, minus the average path length between the start and end nodes over all spanning trees of the graph.
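The recursion above is easy to check numerically. Here’s a minimal sketch (the function name is mine; it assumes leaves have 0 difficulty and that the goal is a leaf) that computes a tree’s difficulty from eqs. 1 and 2, subtracts the start-goal path length, and indeed reproduces V-1-m:

```python
from collections import deque

def bb_tree_difficulty(tree, start, goal):
    """Difficulty of reaching `goal` from `start` in a tree given as an
    adjacency dict, using eqs. 1 and 2, minus the start-goal path length."""
    def score(node, parent):
        # Returns (difficulty of the subtree at `node`, does it contain the goal?).
        if node == goal:
            return 0.0, True
        subs = [score(c, node) for c in tree[node] if c != parent]
        on_path = [s for s, has_goal in subs if has_goal]
        if on_path:  # eq. 2: one branch leads to the goal
            off = sum(s + 2 for s, has_goal in subs if not has_goal)
            return on_path[0] + 1 + off / 2, True
        return sum(s + 2 for s, _ in subs), False  # eq. 1: dead-end subtree

    # Start-goal distance m by breadth-first search.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return score(start, None)[0] - dist[goal]

# A 5-node tree with edges 0-1, 1-2, 1-3, 0-4; start 0, goal 2, so m = 2:
tree = {0: [1, 4], 1: [0, 2, 3], 2: [1], 3: [1], 4: [0]}
print(bb_tree_difficulty(tree, 0, 2))  # 2.0, matching V - 1 - m = 5 - 1 - 2
```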

A few things to note (and a disclaimer): First of all, the graph of which positions can be reached in 1 action from each other position actually depends on the type of move chosen, so the BB metric doesn’t really remedy the problem of the 6 competing standards! Secondly, the problem of computing the average path length between two points over all spanning trees is *really really hard* to do quickly: an algorithm which could also give you the maximum path length between two points in polynomial time would let you test whether a graph has a Hamiltonian Path in polynomial time, and since the Hamiltonian Path problem is NP-Complete, you could then also do the Traveling Salesman Problem, Knapsack Problem, Graph Coloring Problem, etc. in polynomial time! Lastly, I haven’t tested this metric on any actual puzzles yet, and I’m also not certain that nobody else has come up with the same difficulty metric. If anybody knows, please tell me!
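For graphs small enough that the hardness doesn’t bite, the BB difficulty can be brute-forced by enumerating spanning trees directly. This is a sketch under my own naming, exponential in the number of edges, and only sensible for toy examples:

```python
from itertools import combinations

def bb_graph_difficulty(vertices, edges, start, goal):
    """Brute-force BB difficulty of a graph: V - 1 minus the average
    start-goal distance over all spanning trees, found by trying every
    subset of V - 1 edges."""
    V = len(vertices)
    lengths = []
    for subset in combinations(edges, V - 1):
        adj = {v: [] for v in vertices}
        for a, b in subset:
            adj[a].append(b)
            adj[b].append(a)
        # BFS from the start; the subset is a spanning tree iff it
        # reaches every vertex (V - 1 edges + connected = tree).
        dist = {start: 0}
        queue = [start]
        for u in queue:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if len(dist) == V:
            lengths.append(dist[goal])
    return V - 1 - sum(lengths) / len(lengths)

# A triangle has three spanning trees; from vertex 0 to vertex 1 the path
# lengths are 1, 1, and 2, so the BB difficulty is 3 - 1 - 4/3 = 2/3.
print(bb_graph_difficulty([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0, 1))
```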

One last note: Humans don’t actually walk down mazes by choosing random paths- usually it’s possible to see if a path dead-ends, and often people choose the path leading closest to the finish first, as well as a whole host of other techniques that people use when trying to escape from a maze. Walter D. Pullen, author of the excellent maze-making program Daedalus, has a big long list of things that make a maze difficult here. (Many of these factors could be implemented by just adding weights to eqns. 1 and 2 above)

- What are the hardest simple simple simple 3×4 sliding block puzzles in different metrics? 2×8? 4×5? (Many, many popular sliding block puzzles fit in a 4×5 grid)
- How much of a difference do the hardest strict simple^3 puzzles have with the hardest simple^3 SBPs?
- How hard is it to search through all simple simple 4×4 SBPs? What about simple SBPs?
- (Robert Smith) Is there any importance to the dual of the graph induced by an SBP?
- Why hasn’t anybody found the hardest 15 puzzle position yet? (According to Karlemo and Östergård, only 1.3 TB would be required, which many large external hard drives today can hold! Unfortunately, there would be a lot of reading and writing to the hard disk, which would slow down the computation a bit.) (Or have they?)
- Why 132?
- What’s the hardest 2-piece simple sliding block puzzle in a square grid? Ziegler & Ziegler have shown a lower bound of 4n-16 for an nxn grid, n>6. How to do so is fairly easy, and is left as an exercise for the reader.
- Is there a better metric for difficulty than the BB metric?
- Is there a better way to find different sliding block puzzle positions? (i.e, improve the fillboard algorithm?)
- Is it possible to tell if a SBP is solvable without searching through all possible positions? (This question was proposed in Martin Gardner’s article on Sliding Block Puzzles in the February 1964 issue of Scientific American)
- (Robert Smith) How do solutions of SBPs vary when we make an atomic change to the puzzle?
- Are 3-dimensional sliding block puzzles interesting?
- Would it be worthwhile to create an online database of sliding block puzzles based on the OEIS and as a sort of spiritual successor to Edward Hordern’s Sliding Piece Puzzles?

Ed Pegg, “Math Games: sliding-block Puzzles”, http://www.maa.org/editorial/mathgames/mathgames_12_13_04.html

James Stephens, “Sliding Block Puzzles”, http://puzzlebeast.com/slidingblock/index.html (see also the entire website)

Rob Stegmann, “Rob’s Puzzle Page: Sliding Puzzles”, http://www.robspuzzlepage.com/sliding.htm

Dries De Clerq, “Sliding Block Puzzles” http://puzzles.net23.net/

Neil Bickford, “SbpUtilities”, http://github.com/Nbickford/SbpUtilities (meh)

Jim Leonard, “JimSlide”, http://xuth.net/jimslide/

The Mysterious Tim of Analogbit, “Sliding Block Puzzle Solver”, http://analogbit.com/software/puzzletools

Walter D. Pullen, “Think Labyrinth!”, http://www.astrolog.org/labyrnth.htm

Donald Knuth, “SLIDING”, http://www-cs-staff.stanford.edu/~uno/programs/sliding.w

Martin Gardner, “The hypnotic fascination of sliding-block puzzles”, *Scientific American*, 210:122-130, 1964.

L.E. Hordern, “Sliding Piece Puzzles”, Oxford University Press, 1986.

Ivan Moscovich, “The Monty Hall Problem and Other Puzzles”

R.W. Karlemo, R. J. Östergård, “On Sliding Block Puzzles”, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.7558

Robert Hearn, “Games, Puzzles, and Computation”, http://groups.csail.mit.edu/mac/users/bob/hearn-thesis-final.pdf

Ruben Grønning Spaans, “Improving sliding-block puzzle solving using meta-level reasoning”, http://daim.idi.ntnu.no/masteroppgave?id=5516

John Tromp and Rudi Cilibrasi, “Limits on Rush Hour Logic Complexity”, arxiv.org/pdf/cs/0502068

David Singmaster et al., “Sliding Block Circular”, www.g4g4.com/pMyCD5/PUZZLES/SLIDING/SLIDE1.DOC

Thinkfun & Mark Engelberg, “The Inside Story of How We Created 2500 Great Rush Hour Challenges”, http://www.thinkfun.com/microsite/rushhour/creating2500challenges

Ghaith Tarawneh, “Rush Hour [Game AI]”, http://black-extruder.net/blog/rush-hour-game-ai.htm

Any answers to questions or rebukes of the data? Post a comment or email the author at (rot13) grpuvr314@tznvy.pbz
