This post is mostly a transcript, with notes, of a talk I gave at G4GX, the 10th Gathering for Gardner. As such, it differs a bit from the normal format.
(Videography by Peter Bickford; there was a cameraman recording all the talks there, but I haven’t heard from him recently.)
Transcript (sentences in brackets represent notes, i.e., things I wanted to say in the talk but didn’t have time for):
Hello, I’m Neil Bickford, and this is my talk on “Reversing the Game of Life for Fun and Profit”.
Now, the topic of reversing the Game of Life has been mentioned in a few talks here before, and they honestly almost gave me a heart attack.
I thought I was too young for that.
Anyways, for those who don’t know, the Game of Life is a simulation that you can play yourself on ordinary 2-dimensional graph paper. It goes like this: First, draw a pattern, then follow these rules:
- For each cell, consider its 8 surrounding neighbors.
- If the current cell is alive and more than 3 of those neighbors are alive, the cell dies due to overcrowding.
- If the cell is alive and fewer than 2 of those neighbors are alive, then the cell dies due to loneliness.
- If the cell is dead and exactly 3 of those neighbors are alive, then the cell is spontaneously born. (I don’t know why).
- Otherwise, the cell stays the same.
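[For readers who would rather try this on a computer than on graph paper: the four rules above boil down to one neighbor count per cell. Here is a minimal sketch in Python; the set-of-live-cells representation is my own choice, not something from the talk.]

```python
from collections import Counter

def life_step(alive):
    """Advance a Game of Life pattern by one generation.
    `alive` is a set of (x, y) coordinates of live cells."""
    # For every cell adjacent to a live cell, count its live neighbors.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or if it is alive now and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# Sanity check: a glider moves one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = life_step(after4)
```

[Cells with zero live neighbors never appear in the counter, which is fine: they stay dead or die either way.]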
Anyways, so around the same time this game was proposed, somebody asked, “Can we reverse the Game of Life?” That is, going from the glider on the right, can we essentially move it backwards and get the glider on the left?
Well, as it turns out, we can, but it’s fairly hard. For example, suppose we start with a simple pattern that anyone who has played video games should recognize [It’s a Space Invader], and we try to find a particular predecessor of it. Well, here’s one of the actual predecessors you’d have to look through. Here are a few more [this repeats a few times, the slides zooming out of a matrix of Space Invader predecessors], and if we zoom out a bit: here are all of them in a bounding box of radius 1. There are 2,680 of them, and it’s quite hard to find all of them.
That’s not my exchange gift. [It really wasn’t; this was my exchange gift. The purpose of these slides was to show that finding predecessors of GoL patterns is really hard for computers, and also to show that puzzles involving reversing GoL patterns, such as Yossi Elran’s Retrolife, can be really, really hard!]
Anyways, suppose we rephrase the question, and ask instead “Can computers reverse the Game of Life”?
As it turns out, they can, but certainly not using the brute-force method!
Suppose we go back to this pattern [Space Invader again]. We’d have to search through this number of cells [possibilities, whoops] if we just tried to brute-force all the possibilities, which Mathematica pronounces only as “A very large number”. [The number is 2^((11+2)(8+2)) = 2^130 = 1361129467683753853853498429727072845824]
How large is this number?
Here’s a comparison of three heavenly objects. The one on the right is the sphere of the size you would need if you printed out each of these test cases on a 1-inch by 1-inch piece of standard office paper.
The thing in the center is the Sun.
And there is also a single pixel right there, which is the Earth. (Actually oversized; this screen isn’t large enough.) [The Sun in the picture shown was 20 pixels across, meaning that the Earth would have to fit in 1/5 × 1/5 of a pixel]
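[If you want to watch the brute-force approach collapse for yourself, here is a sketch that really does enumerate every candidate. Any one-step predecessor of an r×c pattern fits in an (r+2)×(c+2) box, so there are 2^((r+2)(c+2)) grids to test; the function below (the name is mine) is only usable for patterns of a few cells.]

```python
from itertools import product

def brute_force_predecessors(target, rows, cols):
    """Return every (rows+2) x (cols+2) grid whose one-step image matches
    `target` (a list of 0/1 rows) on the central rows x cols region.
    There are 2**((rows+2)*(cols+2)) candidates, so keep the pattern tiny."""
    R, C = rows + 2, cols + 2
    found = []
    for bits in product((0, 1), repeat=R * C):
        grid = [bits[r * C:(r + 1) * C] for r in range(R)]
        ok = True
        for r in range(rows):
            for c in range(cols):
                # Live neighbors of the interior cell (r+1, c+1).
                n = sum(grid[r + dr][c + dc]
                        for dr in (0, 1, 2) for dc in (0, 1, 2))
                n -= grid[r + 1][c + 1]
                alive = grid[r + 1][c + 1]
                if (1 if n == 3 or (n == 2 and alive) else 0) != target[r][c]:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            found.append(grid)
    return found

# Even a single live cell already has 140 predecessors inside its 3x3 box.
preds = brute_force_predecessors([[1]], 1, 1)
```

[For the single cell there are only 2^9 = 512 candidates; for the Space Invader’s 13×10 box, the 2^130 above.]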
Anyways, so there have to be better methods, and luckily, there are. The first of them was published around 1970 by a guy named Don Woods. It goes like this:
- Take a pattern [rectangular] and split it up into cells.
- Then, for each of those cells, find predecessors of [it] in a 3×3 square.
- Then, start joining the cells together one after another.
- Once you’ve done that [for all the cells] you should have a list of the possible predecessors.
[Technically, it’s actually adding on cells to the parts of the pattern you already have, not merging a bunch of squares together. For a very verbose description on how exactly to do this, see the source code]
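[Here is my rough reconstruction of that cell-by-cell idea in Python; it is not Woods’ actual code, and the representation (a dictionary from predecessor-cell coordinates to 0 or 1) is mine. Each target cell constrains the 3×3 block of predecessor cells above it, and we keep every partial assignment that stays consistent as cells are added one by one.]

```python
from itertools import product

def woods_predecessors(target):
    """Find all (rows+2) x (cols+2) predecessors of `target` (0/1 rows)
    by extending partial assignments one target cell at a time."""
    rows, cols = len(target), len(target[0])
    # Precompute the 3x3 blocks whose center becomes live (1) or dead (0).
    blocks = {0: [], 1: []}
    for bits in product((0, 1), repeat=9):
        n = sum(bits) - bits[4]              # bits[4] is the center cell
        nxt = 1 if n == 3 or (n == 2 and bits[4]) else 0
        blocks[nxt].append(bits)
    partials = [{}]                          # maps (r, c) -> 0/1
    for r in range(rows):
        for c in range(cols):
            # Predecessor cells constrained by target cell (r, c).
            cells = [(r + dr, c + dc)
                     for dr in (0, 1, 2) for dc in (0, 1, 2)]
            new = []
            for p in partials:
                for bits in blocks[target[r][c]]:
                    # Keep this block only if it agrees with p on overlap.
                    if all(p.get(cell, b) == b
                           for cell, b in zip(cells, bits)):
                        q = dict(p)
                        q.update(zip(cells, bits))
                        new.append(q)
            partials = new
    return partials
```

[The list of partial assignments can explode for large patterns, which is exactly the method’s weakness.]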
Now, this [method] is good for many patterns but bad for others, specifically big patterns. I won’t go into the reasons why, since I don’t have enough time [actually, large patterns whose top rows have many, many predecessors. Woods used his method to verify that Roger Banks’ Garden of Eden was actually a Garden of Eden (and an orphan too!), and in fact his method is great for verifying Gardens of Eden, because Gardens of Eden usually don’t have very many predecessors for most rectangular subsets of the pattern], but it led another person, J. Hardouin-Duparc, to create a second algorithm, which I call Duparc’s method. It goes like this:
- Take a pattern, and then split it up into rows.
- Then, find predecessors to each of those rows using any method you like. [My program uses a variant of Woods’ algorithm to do this, but the binary method also works]
- Then, merge all the rows together and you should have all the possible predecessors.
[This is a complete lie: this method wasn’t actually proposed by Duparc! It’s what I thought Duparc’s method was after reading the Wikipedia article on Gardens of Eden. Duparc’s actual algorithm, as described in his paper, involves starting with predecessors of the first row, then finding predecessors of the second row to tack on, then the third row, and so on. While Duparc’s original algorithm can be used for almost memory-free depth-first search, the merging variation as implemented is usually much faster.]
This is only good for certain patterns [tall, skinny patterns, because of the atavising by rows; Duparc used his method to find a 6×119 and a 6×122 Garden of Eden], so I don’t recommend using it, but the merge operation in step 3 is particularly useful.
A few things about merging:
- It requires no Life function evaluations, which are actually rather slow,
- It’s fairly fast, as long as you keep the sizes of the stacks you’re merging about the same.
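[In case the merge operation sounds mysterious, here is what it might look like in Python. The representation (each partial predecessor is a tuple of row-tuples) and the function name are my own; the point is that two stacks of partial predecessors are joined purely by comparing their overlapping rows, with no Life evaluations anywhere.]

```python
def merge(uppers, lowers, overlap):
    """Join two stacks of partial predecessors. An upper piece combines
    with a lower piece exactly when the upper's last `overlap` rows
    equal the lower's first `overlap` rows."""
    # Index the lower pieces by their leading rows, so each upper piece
    # only meets the lower pieces it can actually match.
    by_key = {}
    for low in lowers:
        by_key.setdefault(low[:overlap], []).append(low)
    merged = []
    for up in uppers:
        for low in by_key.get(up[-overlap:], []):
            merged.append(up + low[overlap:])
    return merged
```

[Because each upper piece is only compared against lower pieces with the same overlap key, the work stays close to the size of the output as long as the two stacks are about the same size.]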
About two months ago, I was wondering: could you possibly make a predecessorifier [ataviser] using only merge operations? [Well, mostly merge operations!] Well, you can, but it’s a bit tricky.
To use this algorithm, which I call “QuadWoods” [well, it’s very much like Woods’ algorithm but performed on a quadtree] , you start with the pattern, then:
- Split it up into four squares,
- Split those up into four squares, and so on, until each square has become a single cell.
- Then, find predecessors to each of those cells and start merging things together in clumps of 4.
Once you’ve done that (and written about 300 lines of computer code [actually 800, with base routines]), you should have a list of all the possible predecessors. Now, this [algorithm] is quite fast, but anyone should be able to see that there’s a problem: it doesn’t work for a general rectangle! There is a solution, but it’s fairly interesting, because it’s related to square packing: you have to split the rectangle up into squares of certain [2^n] sizes, and then merge all the parts together!
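[The talk doesn’t spell out the packing, so here is one greedy way to split a w×h rectangle into squares whose sides are powers of two; my program may not use exactly this packing, but it shows that such a decomposition always exists.]

```python
def pow2_squares(x, y, w, h):
    """Split the w x h rectangle with top-left corner (x, y) into squares
    with power-of-two sides, returned as (x, y, side) triples."""
    if w == 0 or h == 0:
        return []
    # Largest power-of-two square that fits.
    s = 1
    while s * 2 <= min(w, h):
        s *= 2
    # The remainder is an L-shape: a strip to the right of the square,
    # and a strip below spanning the full width.
    return ([(x, y, s)]
            + pow2_squares(x + s, y, w - s, s)
            + pow2_squares(x, y + s, w, h - s))
```

[For the Space Invader’s 11×8 box, this greedy split peels off one 8×8 square and covers the rest with smaller power-of-two squares, each of which QuadWoods can atavise directly before the parts are merged back together.]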
Since I think I’m running out of time, here are some misleading statistics. [Why are these statistics misleading? Well, a sample size of 2 certainly shouldn’t be used for any published results! I ran a fairly long testing program to see how fast each of the algorithms was, testing 10 random patterns for each algorithm on pattern sizes from 3×3 to 10×10 with a 10-minute time limit. It still took 3 days though! Here are the results, in ms:
24 38 30 71 109 321 206 1453
42 88 122 195 803 1332 1833 4598
215 268 749 2179 1630 69021 66163 43663
851 375 2599 1352 12006 69064 10060 103035
1681 9877 9611 13868 23431 154755 122122 149701
4450 28062 40948 135354 128119 215935 211178 357205
30752 51053 187093 187062 270526 474005 424836 490232
189641 365318 210208 195629 389105 499821 549801 471189
The version of Duparc’s method presented:
35 47 48 72 96 136 750 291
40 136 188 212 483 521 617 920
71 197 1083 1832 1441 9000 10684 6614
168 272 1284 3877 15458 25417 13298 83154
116 591 1785 7818 64424 142969 214087 138432
184 489 2239 22765 88580 595849 530842 574550
191 573 2846 16112 128053 535570 failed all failed all
961 3187 3740 17588 92127 518550 failed all failed all
46 55 74 107 291 451 1573 4192
45 32 72 78 272 338 1246 1448
90 81 410 702 1259 8891 27218 16237
184 73 710 413 2345 13550 13228 38855
262 426 981 1200 7965 12824 57472 76472
884 216 1482 10931 42164 51733 136175 164764
7131 1381 10606 14601 335295 208159 390242 493339
265880 6388 104134 60871 483578 170099 454376 509703
For easier reading, here’s a color-coded array showing which algorithm was found to be best in each case from 3×3 to 10×10. (Red is the presented variation of Duparc’s Method, Green is Woods’ Method, Blue is Duparc’s actual method, Purple is a merging variation of the faster version of Duparc’s Method, Orange is QuadWoods without square packing, and Yellow is QuadWoods with square packing.)
Thank you very much.
Some Additional Notes
First of all, while I said “Predecessorifier” in the talk, “Ataviser” seems to be the accepted word, coming from “Atavism”, which the online Merriam-Webster dictionary defines as “recurrence of or reversion to a past style, manner, outlook, approach, or activity”.
Additionally, the not-so-misleading statistics listed in the square brackets at the end may still be misleading, for a reason Richard Schroeppel kindly pointed out the day before I got on the airplane to Atlanta: computers don’t like jumping around memory, accessing completely random places, and programmers on early computers tried to prevent this from happening in order to make their programs run faster. My current implementation of these algorithms is almost entirely unoptimized, since I only got everything done a few weeks before G4GX, and is coded in C#, a language that, while it handles garbage collection automatically, easily gobbles up memory faster than Cookie Monster eats cookies. Perhaps if these algorithms were carefully coded in a language such as C++, Woods’ method, as well as Duparc’s, would be much faster.
Also, the algorithms presented work on a pattern, pattern being defined as a rectangle with a state (on or off) for each of the cells inside. While defining a pattern as a shape, possibly with holes, with a state for each of the cells inside can lead to some interesting theorems on Gardens of Eden and Orphans (that is, patterns that have no predecessor even when surrounded by additional on cells), it’s much easier to write programs that atavise rectangular patterns.
Related to this is a question I have, using the rectangular definition for a pattern: are all Gardens of Eden Orphans, or is there some Garden of Eden which is not an orphan pattern? It’s actually much easier to check whether a pattern is an Orphan, as you just have to look for a pattern fitting within an n+2 by m+2 rectangle (if the original pattern is n by m cells) which generates the wanted pattern in the center, possibly with some extra stuff on the sides. If you can’t find one using an ataviser, then the pattern is an Orphan. To test if a pattern is a Garden of Eden, though, you have to either show that the pattern is an Orphan (from which it follows that the pattern is also a GoE), or show that there is no pattern at all which generates the wanted pattern. This can go on forever, because if there is no predecessor in an n+2 by m+2 rectangle, there might be one in an n+4 by m+4 rectangle, or n+6 by m+6, etc. Bram Cohen claims that if a pattern has an orphan predecessor in an n+2 by m+2 rectangle, then you can add on ON cells to the orphan predecessor to make a proper predecessor (i.e., one that turns into the wanted pattern with no extra on cells outside the bounding box of the wanted pattern). I have no idea how this would be done, but it would solve the are-GoEs-Orphans problem.
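[For very small patterns, the Orphan test described above can actually be run by brute force. A sketch, with names of my own choosing; note that this really is the Orphan test, not the (harder) Garden-of-Eden test.]

```python
from itertools import product

def evolves_to(grid, target):
    """Does the (n+2) x (m+2) `grid` produce `target` (a list of 0/1
    rows) on its central n x m region after one Life step?"""
    n, m = len(target), len(target[0])
    for r in range(n):
        for c in range(m):
            nb = sum(grid[r + dr][c + dc]
                     for dr in (0, 1, 2) for dc in (0, 1, 2))
            nb -= grid[r + 1][c + 1]
            alive = grid[r + 1][c + 1]
            if (1 if nb == 3 or (nb == 2 and alive) else 0) != target[r][c]:
                return False
    return True

def is_orphan(target):
    """True if no (n+2) x (m+2) grid evolves to `target` in its center."""
    n, m = len(target), len(target[0])
    R, C = n + 2, m + 2
    return not any(
        evolves_to([bits[r * C:(r + 1) * C] for r in range(R)], target)
        for bits in product((0, 1), repeat=R * C))
```

[The smallest known orphans are far too big for this, of course; even a 3×3 target already means 2^25 candidate grids.]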
Lastly, here’s a zipped archive containing the program source as well as a compiled 64-bit version, with a GUI! Documentation to be added shortly.
Thanks to Don Woods for information on atavising algorithms, and Mary Deignan for translation help with Duparc’s paper.