Jorge Luis Borges, Quantum Mechanics, and Lewis Carroll

While the fictional book in Jorge Luis Borges’ The Garden of Forking Paths resembles, and may have inspired, the many-worlds hypothesis in quantum mechanics, Borges was not the first to describe such an idea.

In Borges' short story, the fictional Sinologist Stephen Albert describes Ts'ui Pên's "The Garden of Forking Paths" as essentially a "Choose Your Own Adventure" novel: the story branches based on choices the characters make. However, three things set it apart from the modern version. First, the reader sees everything happening in all the stories at once; if there were two stories being told, the book would list a part of the first story, then the same part, chronologically, of the second story, then the next part of each of the two stories, and so forth. (This style of writing is sometimes referred to as "hypertext fiction".) Second, two stories sometimes merge: as Albert tells Dr. Tsun, in one path the protagonist comes to his house as a friend; in another, as an enemy. In the book, an army marches through the mountains and, after acquiring a disdain for life, wins the battle easily; alternatively, the same army passes through a palace in which a ball is being held and, because of this, wins the battle the same way. Last, some things are inevitable: in life, this is death; in the story, it is the fact that Captain Richard Madden will capture the narrator.

To digress for now to the second topic: in quantum mechanics, the many-worlds hypothesis (first published by Hugh Everett in 1957, 16 years after Borges' story) is, in its simplest form, the idea that every time someone or something makes a choice, the universe branches into many other universes, in each of which a different one of the possible choices was made. The usual example is the Schrödinger's Cat thought experiment. Imagine a cat placed in a box, along with a device that flips a coin and measures whether it lands heads or tails. If it lands tails, the device kills the cat; otherwise, it does nothing. After the first coin flip, the universe can be thought of as splitting into two different universes: one in which the cat is dead, and another in which the cat is alive. By observing the state of the cat, the experimenter can see which universe he's in, but he can't suddenly switch from the "plot" of one universe to the plot of another.

The Schrödinger's Cat thought experiment, with a proton-spin measuring device instead of a coin flipper. (Image by user "Ramisses", Wikimedia Commons.)

Ts'ui Pên's story shares much with this theory: both involve branching due to choices and the idea of separate timelines, and in fact Stephen Albert claims that "The Garden of Forking Paths" is, like many-worlds, "an incomplete, but not false, image of the universe". The two ideas differ only in that Ts'ui Pên's includes universes merging and inevitable events, while in the many-worlds hypothesis universes can't merge, and there are somewhat extreme ways to keep otherwise inevitable events from happening. (See [2], pp. 26-27, 32-33, 54-57.)

As it turns out, while the style of writing in Ts'ui Pên's fictional book certainly was original, the most important ideas appeared more than half a century before Borges' story, in Lewis Carroll's little-known novel Sylvie and Bruno, published in 1889. At the beginning of Chapter 23, the narrator witnesses a tragic event: a box has been left in the street, and a bicyclist crashes into it, is flung over the handlebars, and is taken, badly injured, to the hospital. Using a magical watch, the narrator goes back to before the crash and removes the box, so that the bicyclist doesn't crash. However, at precisely the time at which he had wound the watch backwards, the scene instantaneously switches to the bicyclist in the hospital, with exactly the same injuries as when the narrator didn't remove the box.

While this might initially seem strange, it can be interpreted as follows: at, say, 1:00, the universe bifurcates into two different universes: A, in which the narrator does not remove the box, and B, in which he does. Initially, we see the narrator go down the timeline of universe A, but at 1:30 (when the bicyclist is in the hospital of universe A) he goes back in time to just before the bifurcation point. Here, he chooses to go down the path of universe B, and for a while he remains in that timeline, until 1:30, at which point the narrator from universe A is returned to universe A. (It is unclear what the state of narrator B is during this timespan: are there two copies of the narrator, or is narrator A seeing the world through narrator B's eyes?) In this way, Carroll not only anticipated the idea behind the many-worlds hypothesis, but also presented how time travel might work in such a system. Carroll's model shares branching based on choices with Borges' story and the many-worlds hypothesis, and shares merging and inevitability with Borges' model.


A flowchart of the interpretation above.

As with anything, a few objections could be made: Carroll's model switches back abruptly, as if a scene from a film had been replaced with another scene that just happened to match at the beginning, which seems unnatural; Ts'ui Pên's novel shows every universe happening at once, which the inhabitants of our universe certainly don't see; the description of universes 'bifurcating' is just one way to think of the evolution of the wave function; and so forth. However, one can definitely conclude from the evidence given that Borges' Garden of Forking Paths, Carroll's magic-watch tale, and the many-worlds hypothesis resemble each other, and perhaps one even inspired another.

Sources

[0]          Jorge Luis Borges, "The Garden of Forking Paths" – from Collected Fictions, translated by Andrew Hurley. Published by the Penguin Group, 1998.

[1]          Lewis Carroll, “Sylvie and Bruno”. From The Collected Works of Lewis Carroll, 2005 Barnes and Noble edition. Illustrated by John Tenniel.

[2]          Jason Shiga, “Meanwhile”, Second Edition. Published by Amulet Books, 2010.

Reversing the Game of Life for Fun and Profit

This post is mostly a transcript, with notes, of a talk I gave at G4GX, the 10th Gathering for Gardner. As such, it differs a bit from the normal format.

(Videography by Peter Bickford; there was a cameraman recording all the talks there, but I haven't heard from him recently.)

Transcript (sentences in brackets represent notes, i.e., things I wanted to say in the talk but didn't have time for):


Sliding Block Puzzles, Part 3

This article may as well be part of a series: 1, 2 (read this first)

Additional note: 'simple simple simple' is referred to in this post as simple^3.

Warning: This article contains original research! (It’s also more papery than most of the other articles on this site)

In the previous post I mentioned several methods that could be used to speed up the finding and exhaustive searching of sliding block puzzles, as well as a candidate for the ‘hardest’ one-piece-goal diagonal-traverse 4×4 sliding block puzzle. I am pleased to say that the various bugs have been worked out of the sliding block puzzle searcher, and that 132 has been confirmed to be the maximum number of moves for a simple simple simple 4×4 sliding block puzzle with no internal walls!

(that’s not a lot of qualifiers at all)

As the reader may expect, this blog post will report the main results from the 4×4 search. Additionally, I’ll go over precisely how the algorithms very briefly mentioned in the previous post work, and some estimated running times for various methods. Following that, various metrics for determining the difficulty of a SBP will be discussed, and a new metric will be introduced which, while it is slow and tricky to implement, is expected to give results actually corresponding to the difficulty as measured by a human.

Search Results

Running SBPSearcher on all the 4×4s takes about 3 hours (processing 12,295,564 puzzles), a turnaround quick enough to be quite useful for fixing hidden bugs in the search algorithm. Here are the best simple^3 4×4 n-piece SBPs, where n goes from 1 to 15:

For those of you with a text-only web browser, your lucky numbers are: 1,4,9,19,36,51,62,89,132,81,64,73,61,25,21


And for comparison, the move counts vs. the previously reported move counts:

New:       1   4   9  19  36  51  62  89 132  81  64  73  61  25  21
Previous:  1   4   9  24  36  52  68  90 132  81  64  73  61  25  21

Notice that every entry in the top row is less than or equal to its respective entry in the bottom row, some (such as in the case of p=4 or p=7) being much less. This is because the previous search (run, in fact, as a subfeature of the sliding block puzzle evolver) started with all possible positions and defined the goal piece to be the first piece encountered going left to right, top to bottom, across the board. As such, the search included both all the 4×4 simple^3 sliding block puzzles and a peculiar subclass of sliding block puzzles where the upper-left square is empty but the goal piece is often a single square away! This accounts for the 6-piece and 8-piece cases (in which the first move is to move the goal piece to the upper-left), but what about the other two cases?

For the 4-piece case, the originally reported puzzle (see here for the whole list) wasn't simple^3, and it isn't possible to convert it into a simple^3 SBP by shifting blocks in fewer than 5 moves. Interestingly, the new 'best' puzzle for 4 pieces is just James Stephens' Simplicity with a few blocks shifted, a different goal piece, and a different goal position! (There could be copyright issues, so unless stated, the puzzles in this article are public domain except for the one in the upper-right corner of this image.) Additionally, the 7-piece, 68-move puzzle in the previous article is impossible to solve! The upper-left triomino should be flipped vertically in place. I expect this was a typing error, but the question still stands: why is there a 6-move difference?

As mentioned before, the definition of a simple simple simple puzzle is as follows: "A sliding block puzzle where the piece to be moved to the goal is in the upper-left corner, and the goal is to move the goal piece to the lower-right corner of the board". Unfortunately, there's some ambiguity as to when a piece is in the lower-right corner of the board: is it when the lower-right cell is taken up by a cell of the goal piece, or is it when the goal piece is as far to the lower-right as it can be? If we take the latter as the definition, then additional ambiguities pop up in certain positions, such as the following one, often encountered by SBPSearcher:

0 0 0 0
0 0 0 0
0 1 1 1
0 1 0 2

Which piece has just moved into the lower-right?

Because of problems like these, SBPSearcher uses the former definition, which means that puzzles where the goal piece takes the shape of an R aren't processed. (In actuality, it's SBPFinder that does this checking, when it checks whether a puzzle is in the 'justsolved' state.) If we say that the first definition is stricter than the second, then it could be said that SBPSearcher searched only through the "strict simple simple simple 4×4 sliding block puzzles". While I don't think that any of the results would change other than the p=7 case, it would probably be worthwhile to modify a version of SBPSearcher so that it works with non-strict simple simple simple sliding block puzzles.

A few last interesting things: the 13-piece case appears to be almost exactly the same as the Time puzzle, listed in Hordern's book as D50-59! (See also its entry in Rob Stegmann's collection.) Additionally, the same split-a-piece-and-rearrange-the-pieces-gives-you-extra-moves effect is still present, if only because of the chosen metric.

The Algorithm, In Slightly More Detail Than Last Time

There are only two 'neat' algorithms in the entire SBP collection of programs, found in SBPFinder and SBPSearcher respectively. The first of them is used to find all possible 'justsolved' sliding block puzzle positions that fit in an N×M grid, and runs in approximately O(2*3^(N+M-2)*4^((N-1)*(M-1))) time. (Empirical guess; due to the nature of the algorithm, calculating the running time exactly is probably very hairy.)

First of all, the grid is numbered like this:

 0  1  3  6 10
 2  4  7 11 15
 5  8 12 16 19
 9 13 17 20 22
14 18 21 23 24

where the numbers increase going from top-right to lower-left along anti-diagonals, moving to the next diagonal every time the path hits the left or bottom edge. (Technical note: for square grids, the formula (x + y < N) ? TriangleNumber(x + y + 1) - x - 1 : N*M - TriangleNumber(N + M - x - y - 1) + N - x - 1 should generate this array, where x is the column and y is the row.)
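
To make the technical note concrete, here is a short Python sketch (mine, not SBPFinder's actual code) that reproduces the 5×5 numbering above, taking TriangleNumber(k) = k(k+1)/2:

def triangle(k):
    # TriangleNumber(k) = k*(k+1)/2
    return k * (k + 1) // 2

def cell_number(x, y, n):
    # The formula from the technical note, for an n x n grid;
    # x is the column and y is the row.
    if x + y < n:
        return triangle(x + y + 1) - x - 1
    return n * n - triangle(2 * n - x - y - 1) + n - x - 1

for y in range(5):  # prints the 5x5 numbering shown above
    print(" ".join("%2d" % cell_number(x, y, 5) for x in range(5)))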

Then, the algorithm starts at cell #0, and does the following:

Given a partially filled board (at cell 0, the board is all holes):

  1. Take the values of all the cells numbered from (cell_number - 1) down to the number of the cell to the left of the current cell, and put them in list A.
  2. Remove all duplicates from A.
  3. If the following values do not exist in A, add them: 0 (a.k.a. a hole), and the smallest value between 1 and 4 inclusive that does not exist in A (if there is no such value, don't add anything except the 0).
  4. For each value n in A, run fillboard (that is, the current function), giving it cell_number + 1 and the board given to this level of the function with the value at cell_number changed to n.

However, if the board is completely filled (i.e., cell_number = 25 on a 5×5 grid), check that the board has holes and that it is justsolved; if so, standardize the piece numbering and make sure that you haven't encountered the board before; and if you haven't, sort the board into the appropriate piece-number "box".
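
In Python, the recursion might look something like the sketch below, which reuses cell_number from the earlier sketch. This is a simplified transcription of steps 1-4 for square grids: it only enumerates the candidate boards, leaving out the hole/justsolved checks and the deduplication, and it should only be run on tiny grids (recall that the 4×4 case alone involves roughly half a billion hash-table lookups):

from itertools import product

def fillboard_boards(n):
    # Visit the cells in the anti-diagonal order defined by cell_number.
    order = sorted(product(range(n), range(n)),
                   key=lambda c: cell_number(c[0], c[1], n))
    boards = []
    def rec(i, board):
        if i == n * n:
            # The real SBPFinder checks for holes and justsolved-ness,
            # standardizes the piece numbering, and dedupes here.
            boards.append(dict(board))
            return
        x, y = order[i]
        # Steps 1-2: values of the cells numbered from this cell's left
        # neighbor up to i - 1, without duplicates.
        lo = cell_number(x - 1, y, n) if x > 0 else 0
        a = {board[order[j]] for j in range(lo, i)}
        # Step 3: a hole (0) is always a candidate, plus the smallest
        # color in 1..4 that does not already appear in A.
        candidates = a | {0}
        fresh = next((c for c in range(1, 5) if c not in a), None)
        if fresh is not None:
            candidates.add(fresh)
        # Step 4: recurse with each candidate value for this cell.
        for v in sorted(candidates):
            board[order[i]] = v
            rec(i + 1, board)
            del board[order[i]]
    rec(0, {})
    return boards

print(len(fillboard_boards(2)))  # all candidate 2x2 boards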

Once the fillboard algorithm finishes, you should have N*M-1 "boxes" containing all the possible justsolved positions that can be made on an N×M grid! There are a number of other possible methods that do the same thing; all the algorithm basically needs to do is generate all possible ways that pieces fitting in an N×M grid can be placed in a grid of that size.

For example, you could potentially go through all possible 5-colorings of the grid (4 colors plus holes) and remove duplicates, but that would take O(5^(N*M)) time, which isn't the most feasible option even for a 4×4 grid. You could also generate the results for the next number of pieces from the results for the current number of pieces and all possible rotations and positions of a single piece in an N×M grid, by seeing which pieces can be added to the results of the current stage; but that would take O(single_piece_results * all_justsolved_results) time. While that number may seem small, plugging in the actual numbers for a 4×4 grid (single_piece_results = 11,505 and all_justsolved_results = 12,295,564) reveals the expected running time to be about the same as the slow 5-coloring method. However, it may be possible to speed this method up with various tricks for reducing which pieces need to be checked for addability. Lastly, one could go through all possibilities of edges separating pieces, and then figure out which shapes are holes. The time for this would be somewhere between O(2^(3NM-N-M)) and O(2^(2NM-N-M)), the first being clearly infeasible and the second being quite manageable for a 4×4 grid.

In practice, the fillboard algorithm needs to check about 1.5 times the estimated number of boards to make sure it hasn’t found them before, resulting in about half a billion hash table lookups for a 4×4 grid.

The second algorithm, which SBPSearcher is almost completely composed of, is much simpler! Starting from a list of all justsolved puzzles (which can be generated by the fillboard algorithm), the following is run for each board in the list:

  1. Run a diameter search from the current puzzle to find which other positions in the current puzzle’s graph have the goal piece in the same position;
  2. Remove the results from step 1 from the list;
  3. Run another diameter search from all the positions from step 1 (i.e., consider all positions from step 1 to be 0 moves away from the start and work outward from there), and return the last position found where the goal piece is in the upper-left.

Step 2 is really where the speedup happens: because each puzzle has a graph of positions that can be reached from it, and some of these positions are also in the big list of puzzles to be processed, you can find the puzzle furthest away from any of the goal positions just by searching outward from all of them at once. Then, because the entire group has been solved, you don't need to solve the group again for each of the other goal positions in it, and those can be removed from the list. For a 4×4 board, the search can be done in 3 hours, 27 minutes on one thread of a computer with a Core i7-2600 @ 3.4 GHz and a reasonable amount of memory. In total, the entire process, finding puzzles and searching through all of the results, can be done in about 4 hours.
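
Concretely, step 3 is an ordinary multi-source breadth-first search. Here's a minimal Python sketch; the neighbors function, which should return all positions one action away from a given position, is assumed rather than shown:

from collections import deque

def farthest_from(sources, neighbors):
    # Treat every position in `sources` as 0 moves away and flood outward;
    # the last position dequeued is a hardest-to-reach starting position.
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    last = None
    while queue:
        pos = queue.popleft()
        last = pos
        for nxt in neighbors(pos):
            if nxt not in dist:
                dist[nxt] = dist[pos] + 1
                queue.append(nxt)
    return last, dist[last]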

Of course, where there are algorithms, there are also problems that mess up the algorithms. For example, how would it be possible to modify SBPSearcher's algorithm to handle, say, simple simple puzzles? Or, is it possible to have the fillboard algorithm work with boards with internal walls or boards in weird shapes, or does it need to choose where the walls are? An interesting thing suggesting that the answer to the first question might be yes is that finding the pair of points furthest apart in a graph (which would be equivalent to finding the hardest compound SBP in a graph) requires only 2 diameter searches: start with any point in the graph, find the point furthest away from it, and let that be your new point. Then find the point furthest away from your new point; those last two points are the two points farthest away from each other. (This double search is exact on trees, though on general graphs it can underestimate the true diameter. See "Wennmacker's Gadget", pages 7 and 98-99 of Ivan Moscovich's "The Monty Hall Problem and Other Puzzles".)
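
Using farthest_from from the sketch above, that double search is only a few more lines (again just an illustration; as noted, it is guaranteed exact only when the position graph is a tree):

def pseudo_diameter(start, neighbors):
    # Sweep 1: find a point `a` farthest from an arbitrary start point.
    a, _ = farthest_from([start], neighbors)
    # Sweep 2: the point farthest from `a`, and the distance between them.
    b, d = farthest_from([a], neighbors)
    return a, b, d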

Metrics

a.k.a. redefining the problem

Through the last 3 posts on sliding block puzzles, I have usually used the "Moves" metric for defining how hard a puzzle is. Just to be clear, an action in the Moves metric is defined as sliding a single piece to another location by a sequence of steps to the left, right, up, and down, making sure not to pass through any other pieces or invoke any other quantum phenomena along the way. (The jury is out as to whether sliding at the speed of light is allowed.) While the majority of solvers use the Moves metric (my SBPSolver, Jimslide, KlotskiSolver, etc.), there are many other metrics for approximating the difficulty of a sliding block puzzle, such as the Steps metric and the not-widely-used sliding-line metric. The Steps metric is defined as just that: an action (or 'step') is sliding a single piece a single unit up, down, left, or right. The sliding-line metric is similar: an action is sliding a single piece any distance in a straight line up, down, left, or right. So far as I know, only Analogbit's online solver and the earliest version of my SBPSolver use the Steps metric, and only Donald Knuth's "SLIDING" program has support for the sliding-line metric. (It also has support for all the kinds of metric presented in this post except for the BB metrics!)

Demonstration of various metrics

Additionally, each of the 3 metrics described above has a 'supermove' version with the same constraints, except that it can move multiple pieces at a time in the same direction(s). For example, a supermove version of the Steps metric would allow you to slide any set of pieces one square in any one direction. (As far as I know, only Donald Knuth's SLIDING program and Soft Qui Peut's SBPSolver support any of the supermove metrics.) In total, combining the supermove metrics with the normal metrics, there are 6 different metrics and thus 6 different ways to express the difficulty of a puzzle as a number. Note, however, that a difficulty in one metric can't be converted into another, which means that for completeness when presenting results you need to solve each sliding block puzzle 6 different ways! Even worse, the solving paths in different metrics need not be the same!

For example, in the left part of the image above, where the goal is to get the red piece to the lower-right corner, the Moves metric would report 1, with the red piece going around the left side of the large block in the center. However, the Steps metric would report 5, by moving the yellow block left once and then the red block down 4 times. Also, in the right picture, both the Moves and Steps metrics would report ∞, because the green block can't be moved without intersecting the blue, and the blue block can't be moved without intersecting the green; but any of the supermove metrics would report a finite number, by moving both blocks at once!

Various other metrics can be proposed, some with other restrictions (you may not move a 1×1 next to a triomino, etc.), and some which, like the supermove metrics and the second puzzle above, actually change the way the pieces move; eventually you get to the point where it's hard to call the puzzle a sliding block puzzle anymore. (For example, Dries de Clerq's "Flying Block Puzzles" can be written as a metric: "An action is defined as a move or rotation of one piece to another location. Pieces may pass through each other while moving, but only one piece may be moved at a time".)

Suppose, however, that for now we're purists and only allow the step, sliding-line, moves, super-step, super-sliding-line, and super-moves metrics. It can be seen, quite obviously in fact, that these metrics don't in all cases reflect the actual difficulty of the puzzle being considered. For example, take a very large (say 128×128) grid, and add a 126×126 wall in the center. Fill the moat that forms with 507 distinct 1×1 pieces (leaving a single empty cell), and make the problem be to bring the block in the upper-left down to the lower-right. If my calculations are correct, the resulting problem should take 254*507+254 = 129,032 steps, sliding-line actions, and moves to solve, which would seem to indicate that this is a very hard puzzle indeed! However, any person who knows the first thing about sliding block puzzles should be able to solve it, assuming they can stay awake the full 17 hours it would take!

Because of this discouraging fact (that is, 6 competing standards, none of which is quite right), I would like to introduce a 7th, based on a theoretical simulation of a robot that knows the first thing about sliding block puzzles, but nothing else.

The BB Metric

a.k.a. attempting not to invoke the xkcd reference

Nick Baxter and I have been working on a metric which should better approximate the difficulty of getting from one node to another in a position graph. The basic idea is that the difficulty of getting from node A to node B in a graph is about the same as the average difficulty of getting from node A to node B over all spanning trees of the graph. However, finding the difficulty of getting from node A to node B in a tree is nontrivial, or at least so it would seem at first glance.

Suppose you're at the entrance of a maze, and the maze goes on far enough and is tall enough that you can only see the paths immediately leading off from where you are. If you know that the maze is a tree (i.e., it has no loops), then a reasonable method might be to choose a random pathway and traverse that part of the maze. If you return from that part to the original node, then that part doesn't contain the goal node, and you can choose another random pathway to traverse, making sure, of course, not to go down the same paths you've gone down before. (Note that to be sure the goal node isn't in a part of the maze, you need to go down all of that part's paths twice: once walking down each path and once walking back up it.) For now, we take the difficulty of the maze to be the average number of paths you'll need to walk along to either reach the goal node or decide that the maze has no goal (counting each path you walk down and back up as +2 to the difficulty). Because a node with no descendant leading to the goal node forces you to go down all of its paths twice, the difficulty of the part of the maze tree branching off from a node A can be calculated as

sum(a_i + 2, i = 1 to n)     (eq. 1)

where n is the number of subnodes, and a_i is the difficulty of the ith subnode. Also, if the node A is on the solution path between the start and end nodes, then the difficulty of A can be calculated as

a_n + 1 + 1/2 sum(a_i + 2, i = 1 to n-1)    (eq. 2)

where a_n is assumed to be the difficulty of the subnode which leads to the goal. This basically states that, on average, you'll have to go down half of the other subpaths, plus the path leading to the goal, to get to the goal. Because leaf nodes have 0 difficulty, you can work up from the bottom of the maze, filling in difficulties of nodes as you go up the tree. After the difficulty of the root node has been calculated, the length of the path between the start and end nodes should be subtracted, to give labyrinths (mazes with only a single path) a BB difficulty of 0.

Perhaps surprisingly, it turns out that under this scheme the difficulty of a tree with one goal node always comes out to V - 1 - m, where V is the number of nodes in the tree (so V - 1 is the number of edges; this count doesn't hold for general graphs) and m is the number of steps needed to get from the start node to the end node in the tree! Because of this, the difficulty of getting from one point to another in a graph under the BB metric is just the number of vertices, minus one, minus the average path length between the start and end nodes over all spanning trees of the graph.
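
As a sanity check of the V - 1 - m formula, here is a brute-force Python sketch (mine) that computes the BB difficulty of a tiny graph by enumerating its spanning trees directly; it is hopeless for anything larger than toy graphs:

from collections import deque
from itertools import combinations

def bb_difficulty(vertices, edges, start, goal):
    # Average V - 1 - m over all spanning trees, where m is the number
    # of steps from start to goal within each tree.
    V = len(vertices)
    total = count = 0
    for subset in combinations(edges, V - 1):
        adj = {v: [] for v in vertices}
        for a, b in subset:
            adj[a].append(b)
            adj[b].append(a)
        # BFS from start; V - 1 edges reaching all V vertices => spanning tree.
        dist = {start: 0}
        q = deque([start])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        if len(dist) != V:
            continue
        total += V - 1 - dist[goal]
        count += 1
    return total / count

# A 4-cycle has 4 spanning trees with start-goal path lengths 1, 1, 1, 3,
# so the BB difficulty is 4 - 1 - 6/4 = 1.5:
print(bb_difficulty([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 1))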

A few things to note (and a disclaimer): First of all, the actual graph of which positions can be reached in 1 action from each other position depends on the type of move chosen, so the BB metric doesn't really remedy the problem of the 6 competing standards! Secondly, computing the average path length between two points over all spanning trees of a graph is really, really hard to do quickly, especially because an algorithm which could also give you the maximum path length between two points in polynomial time would let you test whether a graph has a Hamiltonian path in polynomial time; and since the Hamiltonian path problem is NP-complete, you could then also solve the Traveling Salesman Problem, the Knapsack Problem, the Graph Coloring Problem, etc., in polynomial time! Lastly, I haven't tested this metric on any actual puzzles yet, and I'm also not certain that nobody else has come up with the same difficulty metric. If anybody knows, please tell me!

One last note: humans don't actually walk through mazes by choosing random paths. Usually it's possible to see whether a path dead-ends, and people often choose the path leading closest to the finish first, among a whole host of other techniques people use when trying to escape from a maze. Walter D. Pullen, author of the excellent maze-making program Daedalus, has a big long list of things that make a maze difficult here. (Many of these factors could be implemented by just adding weights to eqs. 1 and 2 above.)

Open Problems

a.k.a. things for an idling programmer to do

  • What are the hardest simple simple simple 3×4 sliding block puzzles in different metrics? 2×8? 4×5? (Many, many popular sliding block puzzles fit in a 4×5 grid)
  • How much do the hardest strict simple^3 puzzles differ from the hardest non-strict simple^3 SBPs?
  • How hard is it to search through all simple simple 4×4 SBPs? What about simple SBPs?
  • (Robert Smith) Is there any importance to the dual of the graph induced by an SBP?
  • Why hasn’t anybody found the hardest 15 puzzle position yet? (According to Karlemo and Östergård, only 1.3 TB would be required, which many large external hard drives today can hold! Unfortunately, there would be a lot of reading and writing to the hard disk, which would slow down the computation a bit.) (Or have they?)
  • Why 132?
  • What’s the hardest 2-piece simple sliding block puzzle in a square grid? Ziegler & Ziegler have shown a lower bound of 4n-16 for an nxn grid, n>6. How to do so is fairly easy, and is left as an exercise for the reader.
  • Is there a better metric for difficulty than the BB metric?
  • Is there a better way to find different sliding block puzzle positions? (i.e., improve the fillboard algorithm?)
  • Is it possible to tell if a SBP is solvable without searching through all possible positions? (This question was proposed in Martin Gardner’s article on Sliding Block Puzzles in the February 1964 issue of Scientific American)
  • (Robert Smith) How do solutions of SBPs vary when we make an atomic change to the puzzle?
  • Are 3-dimensional sliding block puzzles interesting?
  • Would it be worthwhile to create an online database of sliding block puzzles based on the OEIS and as a sort of spiritual successor to Edward Hordern’s Sliding Piece Puzzles?

Sources

Ed Pegg, “Math Games: sliding-block Puzzles”, http://www.maa.org/editorial/mathgames/mathgames_12_13_04.html

James Stephens, “Sliding Block Puzzles”, http://puzzlebeast.com/slidingblock/index.html (see also the entire website)

Rob Stegmann, “Rob’s Puzzle Page: Sliding Puzzles”, http://www.robspuzzlepage.com/sliding.htm

Dries De Clerq, “Sliding Block Puzzles” http://puzzles.net23.net/

Neil Bickford, “SbpUtilities”, http://github.com/Nbickford/SbpUtilities (meh)

Jim Leonard, “JimSlide”, http://xuth.net/jimslide/

The Mysterious Tim of Analogbit, “Sliding Block Puzzle Solver”, http://analogbit.com/software/puzzletools

Walter D. Pullen, “Think Labyrinth!”, http://www.astrolog.org/labyrnth.htm

Donald Knuth, “SLIDING”, http://www-cs-staff.stanford.edu/~uno/programs/sliding.w

Martin Gardner, “The hypnotic fascination of sliding-block puzzles”, Scientific American, 210:122-130, 1964.

L. E. Hordern, "Sliding Piece Puzzles". Oxford University Press, 1986.

Ivan Moscovich, “The Monty Hall Problem and Other Puzzles”

R.W. Karlemo, R. J. Östergård, “On Sliding Block Puzzles”, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.7558

Robert Hearn, “Games, Puzzles, and Computation”, http://groups.csail.mit.edu/mac/users/bob/hearn-thesis-final.pdf

Ruben Grønning Spaans, "Improving sliding-block puzzle solving using meta-level reasoning", http://daim.idi.ntnu.no/masteroppgave?id=5516

John Tromp and Rudi Cilibrasi, "Limits on Rush Hour Logic Complexity", arxiv.org/pdf/cs/0502068

David Singmaster et al., "Sliding Block Circular", www.g4g4.com/pMyCD5/PUZZLES/SLIDING/SLIDE1.DOC

Thinkfun & Mark Engelberg, “The Inside Story of How We Created 2500 Great Rush Hour Challenges”, http://www.thinkfun.com/microsite/rushhour/creating2500challenges

Ghaith Tarawneh, “Rush Hour [Game AI]”, http://black-extruder.net/blog/rush-hour-game-ai.htm

Any answers to questions or rebukes of the data? Post a comment or email the author at (rot13) grpuvr314@tznvy.pbz

What I’ve Been Working on Lately

Readers of this blog may have noticed that I haven't been updating it for the last 4 months. The purpose of this "filler" post is to explain why.

First of all, I could easily blame the season: summer, as is well known, is a time to "kick back and relax", as well as to forget about the important things you should be doing and read webcomics instead.

I could also blame the wonderful 3D printer I’ve bought, which has quickly filled up my desk and emptied my wallet with tens of tiny plastic models and puzzles.

However plausible that may seem, I would more truthfully blame the projects I’ve been working on, especially my “work” in the field of sliding block puzzles.

As you may know from a post I made about a year ago, I'm quite interested in really hard sliding block puzzles such as the Panex Puzzle (30,000? moves) or Sunshine (729? moves). James Stephens, of puzzlebeast.com, is certainly a pioneer in making really hard sliding block puzzles by computer. Using his "Puzzlebeast" software, he's made at least 23 progressively harder sliding block puzzles, ranging from 18 to 148 moves! What's more, he restricts many of these puzzles to only a few pieces, and in at least 11 of them the goals are "simple simple", that is, to move a single piece to a single corner! Even "Simplicity", the puzzle which fits on a 4×4 grid, uses only 4 pieces, and takes only 18 moves to solve, is really quite hard for a human! Oskar van Deventer, maker of many mechanical monstrosities, has designed a 3D-printable version of the Simplicity puzzle, calling it the "Hardest sliding piece puzzle on a 4×4 grid".

Gauging from Stephens' description of Puzzlebeast, it uses a genetic algorithm to create its puzzles. So far as I can tell, it starts with a random set of puzzles, picks the hardest of that set, and for the next generation creates random "mutations" of the best puzzles of the last generation: for example, it may add a piece, move a piece, modify a piece, or remove a piece. Repeat this process over and over again, and eventually you'll have a hard sliding block puzzle. Because this seemed to work very well for him, I eventually tried to make a sliding block puzzle evolver myself.

And so the SBP project began. The earliest version of my SBP evolver was coded in Mathematica, and was very manual. I would start with 5 randomly generated 4×5 rectangular puzzles, without any restrictions on the number of pieces; interpret each one as a "simple simple simple sliding block puzzle" (see footnote 1); and feed it into Analogbit's online sliding block puzzle solver, which counts moves in steps, meaning that each "step" is sliding one piece one grid space in the puzzle. Once that was finished, I took the best two puzzles, converted them to 1-D arrays, "interleaved" them, and then introduced random mutations in their cells (that is, each cell had about a 1/5 chance of having 1 added to or subtracted from it). Since that probably isn't too clear, here's an example:

Suppose the best two puzzles from the last generation were

 0,19, 1, 2      0,19, 0, 3
 2, 1, 1, 1      2,19, 1, 1
 0, 2, 0, 2 and  1, 3, 2, 0
 1, 1, 0, 0      0, 1, 1, 0
 1, 1, 0, 1      0, 1,19, 2

Then one possible "interleaving" of the two would be

0,19,1,2,2,1,1,1,0,2,0,2,1,1,0,0,1,1,0,1 +

0,19,0,3,2,19,1,1,1,3,2,0,0,1,1,0,0,1,19,2=

0, 19, 1, 3, 2, 19, 1, 1, 0, 3, 0, 0, 1, 1, 0, 0, 1, 1, 0, 2

and the “mutants” would be generated from there, and the process would repeat.
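
In Python, that crossover-and-mutation step might look like the following sketch. The per-cell random choice between parents matches the example above; the function name and the clamping of mutated cells at 0 are my own choices:

import random

def breed(parent_a, parent_b, mutation_rate=0.2):
    # "Interleave": pick each cell from one parent or the other at random,
    # then give each cell about a 1/5 chance of being nudged by +/- 1.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    for i in range(len(child)):
        if random.random() < mutation_rate:
            child[i] = max(0, child[i] + random.choice((-1, 1)))
    return child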

Notes: 1. "Simple simple simple sliding block puzzle" is my own term, based on Nick Baxter's definitions at the Sliding Block Puzzle Page. For me, it means that each piece differs from the others only by its shape, and that the goal is to get the top-left-most piece down to the lower-right. The original definition was somewhat ambiguous; it actually means that if there is no piece with a square occupying the upper-left corner, it is not a valid simple simple simple sliding block puzzle. Additionally, many of the puzzles in this first section are copied straight from the Mathematica notebook, so there is a distinct lack of proper number parsing. Parsing manually is simple: if two numbered pieces have the same number and are orthogonally adjacent, they belong to one piece; otherwise, they belong to different pieces. (Eventually I got a parser working to make these puzzles conform to standards, so the reading should get easier.) Lastly, steps and moves are different: one involves moving one piece one space, while the other involves moving one piece any number of spaces. Edward Hordern's book, Sliding Piece Puzzles, does a good job of clearing this up.

Anyways, the very first incarnation of the SBP Evolver produced quite fantastic results! In about 10 generations, with populations varying from 4 to 10, the puzzles had progressed from 4 impossible puzzles and one 8-stepper to 3 impossible puzzles and 7 possible ones, one of them a 58-stepper! Of course, this is no big result, as puzzles such as Bob Henderson and Gil Dogon's "Quzzle-Killer" take 121 steps to solve, so this early experiment was really just a proof of concept.

Eventually, I got tired of feeding all these puzzles through Analogbit, so I decided to write a C# program to do it for me, except that instead of using Analogbit, it would use Jim Leonard's very speedy Jimslide software. Furthermore, it would have much larger populations (perhaps 100 or so) and run for many more generations. Some time later, after writing many, many lines of code to convert the puzzles for Jimslide's use and then to process the output, I finally got it working! Almost immediately, I set it going on a huge number of 4×5 puzzles (10 million) and let it run overnight. It actually took around 4 days, but the best it found was a 176-move puzzle!

1 2 0 3
4 2 5 3
4 6 7 3
8 8 7 9
A B C C

The 176-mover (by no means the best simple simple)

The SBP Evolver also displayed some behavior which I conjecture is common to most poorly written genetic evolution programs: once the program found a really good optimization, if there weren't any tiny optimizations which would improve the puzzle further, it would tend to stay with that puzzle for a while, unless by pure chance the next generation happened not to contain it. As such, the 4 days might actually not have been long enough to find the best puzzle. Additionally, Bob Henderson's Gauntlet #2, another simple simple sliding block puzzle, trumps the 176-mover with 235 moves!

As far as I can tell from his description on Andrea Gilbert's clickmazes site, Bob Henderson uses a very different method of generating sliding block puzzles: he inputs a collection of shapes, and his packing program generates all possible ways that those blocks can be placed in a sliding puzzle. He then feeds those through his sliding block puzzle solving program, which returns the best results. I expect it produces results much faster, albeit only with the blocks the designer chooses.

Having been thus discouraged, I set out to go through the much easier 4×4 puzzles instead. This time, after about a day, the program generated a 125-move simple simple puzzle! However, it turned out that the puzzle could be backtracked to make a 132-move puzzle, presented below. Interestingly, when I ran the same computation again, it generated another descendant of the 132! Now, of course, the question was: is this the best 4×4 simple simple simple SBP?

1 1 2 3
4 5 2 2
4 6 7 8
0 6 9 0

Red to lower-right.

The problem could be easily solved by just generating all possible 4×4 positions, interpreting a simple simple simple SBP out of each, and then solving every single one. But how would you do the former? After all, there can be 16 possible block types in a 4×4 (including holes), and each space could be any one of the 16, so you'd have 16^16 = 18,446,744,073,709,551,616 possible combinations! My answer to this was to invoke the four-color theorem, using 0 for the holes and 1, 2, and 3 as colors for the other pieces. After the 4-coloring of the board, a parser would assign each block a unique number and then remove any duplicates. (This was in fact WRONG: the holes could be separate, so the four-color theorem applied only to the blocks; I eventually had to use 5 colors for the whole thing.) After around a day or so, the puzzle finder had processed 4,294,967,296 boards, and it returned only 52,825,604 different ones!

Two other methods might have worked as well or better: one could generate all 1-piece boards, then generate all 2-piece boards from all non-overlapping combinations of the 1-piece boards, and so forth; or one could use a method based on dynamic programming, which I currently use, and which (while too long to write out here) is included in the SBPFinder source code, available at the bottom of the post.

After the 52,825,604 different boards were separated into files by the number of pieces, the host software would load each file into a hash table, then solve the first puzzle. After the puzzle was solved, the software would process the output and remove the puzzles mentioned in the solution from the hash table. It would then solve the new first puzzle, and so on. While this algorithm isn't very fast at all (it took 48 days), it at least had an advantage: by separating the puzzles into files labeled with the number of pieces, it was possible to return the "hardest" puzzle for an arbitrary number of pieces in a 4×4 grid! Below is a table of the hardest simple simple simple SBPs in a 4×4 grid, from 2 to 15 pieces. Keep in mind, though, that since the computation missed a few million puzzles, all of the below are unverified, and the ones which are very dubious have been removed. Also, because the SBP Evolver determined which piece was the goal piece by going line by line, some of the puzzles below are simply simple simple SBPs, and the ones which are just simple SBPs have been removed. (Confused? Check http://www.johnrausch.com/SlidingBlockPuzzles/4×5.htm)

Note that these results are not to be trusted: for one, bugs have been found in both the problem finder and the puzzle-to-Jimslide converter itself, so the results certainly aren't rigorous. Also, the SBP Evolver (which also runs the puzzle-to-Jimslide converter and searcher) was originally designed to treat any puzzle as a simple simple simple, by choosing the goal piece as the first piece encountered going line by line through the board. Lastly, there's a curious phenomenon around 12 pieces: breaking a 1×2 into two 1×1 pieces and shuffling the remainder about creates 9 more moves!

Since then, I've been trying to verify the 4×4 computation, specifically by creating another program to rerun the entire search process, return the best results, and check whether the results are the same. The catch: I need it to be a few times faster. The optimizations I'm working on include:

• Using one of the alternative methods to find potential SBPs faster;

• Only storing the "justsolved" puzzles, that is, puzzles in which the goal piece is already in the lower-right and can move out of its current position (based on an idea from "Rush Hour Complexity" by John Tromp and Rudi Cilibrasi; it turns out that about 1/4 of 4×4 SBP positions are justsolved, a rather large fraction, mostly because in most positions there is a piece in the lower-right corner and the only real limitation is that it must be able to move); and

• Using a custom-written sliding block puzzle solver to find the hardest puzzle in the entire group of positions that the justsolved puzzle is linked to. (This can be done in 2 diameter searches: one to find the other justsolved puzzles in the group, and the other to find the starting position furthest away from all the justsolved positions. My custom solver is about 3x as slow as Jimslide, but it makes up for this by solving the entire group of puzzles at once and removing the other justsolved puzzles from the list. There can be a stunning number of other justsolved positions reachable from a puzzle: some have been found with over 14,000!)

However, I've hit upon a gypsy moth in my solver, causing it not to search the puzzle completely! Preliminary tests of the program, though, suggest an expected running time of about a day, so stay posted!

Anyways, that’s my explanation as to what’s been going on, and I apologize for my relative silence.

The Minsky Circle Algorithm

Our tale starts with the PDP-1, a $120,000 room-sized system which, when released in 1960, was state-of-the-art computer technology: 4,096 18-bit words of memory, a 200 kHz clock speed, and only a Macro assembler and a DDT debugger for software; but it did have a paper tape reader and a Type 30 precision CRT display. As such, many of the first programs for the PDP-1 were "display hacks": programs using only a few lines of code which, when run, create intricate patterns on the screen. A prime example of this is the Munching Squares program by Jackson Wright, described in the MIT AI Lab's HAKMEM:

foo, lat
adm bar
rcl 9s
xor bar
dpy
jmp foo
bar, .

This creates a sequence of images corresponding to where the bitwise XOR of the x and y coordinates of every point on the screen is less than the frame number. If that sounds a bit complicated, it's easier to understand when you see the animation:

GIF from MathWorld.

(A very good emulator of the original Munching Squares program is located at http://www.dpbsmith.com/munch.html)
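
For the curious, the display rule is tiny in any language; here's a hypothetical Python rendering of a single frame as ASCII art, lighting pixel (x, y) whenever x XOR y is less than the frame number t:

def munching_frame(t, size=16):
    # Pixel (x, y) is on when (x ^ y) < t.
    return ["".join("#" if (x ^ y) < t else "." for x in range(size))
            for y in range(size)]

for row in munching_frame(7):
    print(row)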

One goal of Marvin Minsky, then a member of the Research Laboratory of Electronics (and later head of the MIT AI Lab), was to develop a program to draw curves on a square grid in as few lines of code as possible. While attempting to get the system to draw a spiral, he stumbled across a most curious thing: a two-statement program which would draw stable circles! Decoded from machine language, it reads:

loop: y = y – 1/16 * x

x = x + 1/16 * y

Note the lack of temporary variables; in fact, with a temporary y variable, the points spiral out of the circle! However, the program does not draw a perfect circle, but rather a very round ellipse, which becomes rounder as 1/16 is replaced by values closer and closer to 0.

From the Computer History Museum

You can generalize the Minsky circle algorithm by replacing the first 1/16 by δ and the latter by ε, to get circles that take longer or shorter to generate:

x = x – δ * y

y = y + ε * x

It turns out that this recurrence can even be solved in closed form! Using a substitution and a "guess-and-by-gosh" method, the authors of the book "Minsky's and Trinsky's" (from which most of the content of this article was lifted; as of now privately published by the authors, Corey Ziegler Hunts, Julian Ziegler Hunts, R.W. Gosper and Jack Holloway) were able to prove that, for the nth step:

Xn=X0 cos(n ω)+(X0/2-Y0/ε) sin(n ω)/d

Yn=Y0 cos(n ω)+(X0/δ-Y0/2) sin(n ω)/d

where d=sqrt(1/(δ ε)-1/4) and ω=2 arcsin(sqrt(δ ε)/2); the resulting path happens to actually be an ellipse! Note that if δε>4 or δε<0, ω becomes imaginary and the ellipse becomes a hyperbola. However, as with anything so seemingly simple, there is something that makes the subject much more complex. In this case, it is the nearly imperceptible (but strictly periodic) wobble of the ellipse.
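
Here's a quick numerical check of the closed form against the recurrence, in Python (my own transcription; it assumes 0 < δε < 4 so that d and ω are real):

from math import asin, cos, sin, sqrt

def minsky_iterate(x0, y0, delta, eps, n):
    # The generalized two-statement recurrence, updating x in place first.
    x, y = x0, y0
    for _ in range(n):
        x = x - delta * y
        y = y + eps * x  # uses the already-updated x: no temporary variable
    return x, y

def minsky_closed_form(x0, y0, delta, eps, n):
    d = sqrt(1 / (delta * eps) - 1 / 4)
    w = 2 * asin(sqrt(delta * eps) / 2)
    xn = x0 * cos(n * w) + (x0 / 2 - y0 / eps) * sin(n * w) / d
    yn = y0 * cos(n * w) + (x0 / delta - y0 / 2) * sin(n * w) / d
    return xn, yn

# The two agree up to floating-point error:
print(minsky_iterate(1.0, 0.0, 1 / 16, 1 / 16, 100))
print(minsky_closed_form(1.0, 0.0, 1 / 16, 1 / 16, 100))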

“The Worst Circles I Ever Drew”

In machine arithmetic, integers, no matter what happens to them, are always integers. While this may seem a triviality, machines with only integer arithmetic effectively apply the floor function to the result of every operation they do. For example, if you divide 1 by 2, in many programming languages the answer will be 0, because the bit routines for dividing integers always return integers: they work to no more than the precision given. To understand what's going on, we can pretend that the computer works out the exact number, then takes the largest integer less than or equal to the answer. No problem, right? Inserting floors into the Minsky recursion, like this:

x = x – floor(δ * y)

y = y + floor(ε * x)

shouldn't hurt the overall mechanics, right? Wrong. When x or y is small enough, the floor will actually make the result of the multiplication 0, losing precision. While this may not seem like a big problem, when we chart the plots for strange values of delta or epsilon, we get something which is definitely not an ellipse. Corey Ziegler Hunts and Julian Ziegler Hunts discovered this behavior (found earlier by Steve Witham) and began to dig deeper. It turns out that if you calculate the periods (how long it takes before the points return to (X0, Y0)) of the Minsky recurrence starting at different X0, Y0, δ and ε, the periods can vary wildly. For example, when δ and ε both equal 1, all X and Y (other than 0,0) loop with period 6. On the other hand, the X0, Y0, δ and ε corresponding to {-19/24, -1015/121, 7381/5040, 5040/7381} have a period of no less than 2,759,393! (Even with the floor function, the algorithm is exactly reversible, so it must return to (X0, Y0) or never repeat any point, unless the machine integers overflow.)
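
Finding the period of the floored recurrence by brute force takes only a few lines of Python (a sketch of mine, not the Zieglers' code); passing exact rationals such as fractions.Fraction keeps it faithful to the integer map:

from math import floor

def minsky_period(x0, y0, delta, eps, max_steps=10**6):
    # Iterate the floored map until (x, y) returns to its starting value.
    x, y = x0, y0
    for n in range(1, max_steps + 1):
        x = x - floor(delta * y)
        y = y + floor(eps * x)
        if (x, y) == (x0, y0):
            return n
    return None  # no loop found (e.g., x=1, y=0, delta=9/17, eps=15/2)

print(minsky_period(1, 0, 1, 1))  # 6, as stated above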

Period 9

This was actually the Zieglers' first "rugplot".

Another starting point, x=1, y=0, δ=9/17 and ε=15/2, has been followed for no less than 104 trillion steps in both directions (the Minsky recurrence can be modified to run backward) without looping! A logical question to ask is: what do these periods look like when plotted? Well, since there are 4 variables (x, y, δ, and ε), there are 6 different ways to plot the periods: x and y, x and δ, x and ε, y and δ, y and ε, and lastly δ and ε. The Zieglers started with the x-y plots. Choose a constant δ and ε, such as 1 and 0.467911, plot the periods (mapped to colors) for integer x and y, and you get what may as well be a Persian rug! Okay, but maybe that's just a special case of the Minsky recurrence, and if we try something like δ=1 and ε=0.91939, we'll just see a blob? Wrong.

Readers may notice something at this point...

Blob it may be, but an intricate one nonetheless.

You can try other Minsky X-Y plots at http://openprocessing.org/visuals/?visualID=19109 , or click on the picture below:

Click to go to the applet (openprocessing.org)

Coloring methods also have effects on Minsky plots. For example, take the Minsky plot δ=100, ε=1/99: rendered with a simple linear coloring from Dr. Fibdork's version (programmers, see comments), it looks one way, while the different method the authors of the book use brings out some of the finer details. At some point, though, we have to stop messing with numbers and get down to math. If we return to the original Minsky solutions:

Xn=X0 cos(n ω)+(X0/2-Y0/ε) sin(n ω)/d

Yn=Y0 cos(n ω)+(X0/δ-Y0/2) sin(n ω)/d

where d=sqrt(1/(δ ε)-1/4) and ω=2 arcsin(sqrt(δ ε)/2), we notice that this ellipse returns to its original point when n ω mod 2π = 0, because when n=0, n ω=0, and because the periods of the cos and sin functions are 2π. Now, let us define the theoretical period (denoted P) of a Minsky recurrence as the first n greater than 0 for which n ω mod 2π = 0, which is the same as saying "the n for which n ω = 2π". (Note that n can be a non-integer.) This n can be trivially solved for, and expanding ω we get P = π/arcsin(sqrt(δ ε)/2). We can write this in two ways: the one already mentioned, or, solving for δ ε, as δ ε = 4 (sin(π/P))^2, which we can use to get the product of δ and ε from the period.

Earlier on, readers may have looked at the figures for δ and ε and seen something suspicious about them. As it turns out, the first "rugplot" had a period of 9, and the second had a period of 2π. In general, when P>4, we can generate very symmetrical plots by setting δ to 1 and ε to the value given by 4 (sin(π/P))^2. For P<5 the method generates very monotonous images (both delta and epsilon are integers), but when P=5 we get what Holloway terms "Rastermen": Koch snowflake-like fractals with only five arms, repeated infinitely with sizes according to the Fibonacci numbers!

A larger one is available at http://gosper.org/p5d1.png
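
Both of the suspicious-looking ε values from the rugplots above drop out of this formula; a two-line Python check:

from math import pi, sin

def eps_for_period(P, delta=1.0):
    # From delta * eps = 4 sin^2(pi / P).
    return 4 * sin(pi / P) ** 2 / delta

print(eps_for_period(9))       # ~0.467911, the first rugplot's epsilon
print(eps_for_period(2 * pi))  # ~0.919395, the "blob" example's epsilon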

It's even possible to make animations of Minsky plots, such as along the line δ=3/2+t/200, P 40/139. It turns out that Minsky space is rather epileptic:

Different configurations arise with different functions of time for δ and ε, such as when δ=t and ε=1/(t-1), and δε approaches 1:

The horizontalness of the boundaries at the end is due to the fact that the slope of the major axes of the ellipses in a Minsky x-y plot is approximately ε/δ: in the original Minsky solutions, the amplitude of Xn (roughly the "run" in rise/run) is larger when ε is larger, and the amplitude of Yn (the "rise") reacts the same way to δ.

Minskyspace

At this point you may be asking what the other 5 arrangements of Minsky plots look like. I don't have the space in this post to talk about them all, but I can describe one of the most interesting Minsky plot methods, in a sense the opposite of the x-y plot: the δ-ε plot. Recall from earlier that the simple Minsky solutions become imaginary if δε>4. It actually turns out that, for the floored recurrence, this is generally not the case. Suppose we use a simple Minsky period-plotting method to draw the periods with x=1 and y=0, where -4<δ<4 and -4<ε<4:

Rather large (4001×4001), so viewing in hi-res is recommended. 1px (in reality infinitesimally thin) lines are due to scan granularity.

The gray areas are where there was no Minsky loop with a period less than 5000, and the white areas are period 1. As you can see, although the outline of the colored region resembles the function y=4/x, in many places there are regions of periodicity extending out into the "dangerous" area, such as the really small ones in the upper-right corner. (I should note here that it has been proven that the regions of constant period are always rectangles.) Furthermore, the authors of Minsky's and Trinsky's have discovered that some areas inside the two "islands" are nonperiodic, such as x=1, y=1/2, δ=9 and ε=1/3. (Plot) Even more, any Minsky point of the form x=1, y=1/2, δ=3^(n+1), ε=3^(-n), where n is an integer greater than 0, is unbounded. Not much is known about these δ-ε plots: we can prove that the shapes are always rectangles, and we can find a few general periods, but in general it's a bit like trying to find the value of an arbitrary point on the Mandelbrot set. Which brings me to my next point: unlike the Minsky x-y plots, you can take a δ-ε plot and zoom into it! The video below shows a sequence of period-5 x-y plots with δ ranging from 99/100 to close to 1:

Generated by Julian Ziegler Hunts.

Some of the areas can seem fractal, and it's certainly possible to find patterns that seem to converge to a series of infinitely thin rectangles, such as the top-right edge of the x=1, y=1 δ-ε space (δ*ε≤4). Other areas, such as this one near x=0, y=8, δ=173/80, ε=137/96, display more localized behavior. However, in many of the places where the differences in size between rectangles go to 0, the periods appear to approach ∞, such as when you approach x=1, y=0, δ=1, ε=1 from the upper-left-hand side. These places, called "accumulation points", seem to be ubiquitous throughout any δ-ε plot. As a great example of this, take the neighborhood of x0=0, y0=8, ε=10/7, δ=37/17 (zoomed in and recolored from the image above), which Bill Gosper has termed the "Neighborhood of a Monster" because of the "monsters" (points with gigantic periods) that occur there. In this case, even though the center appears at first to be devoid of accumulation points, there are some particularly nasty periods right in between two large blocks!

Double-click to see in high resolution

Double-click to see in high resolution

There's tons more to study, of course, but in reality all of the pictures we see of Minsky plots, whether x-y, δ-ε, or some combination of the two, are slices of a 4-dimensional, jagged, infinite object called Minskyspace. The 4 dimensions come from the four parameters, and while slices of this object have not yet been rendered in a dimension higher than 2, we can tell a few things about it:

  • It goes forever in the x,y,δ, and ε directions. However, in the δ and ε directions, it takes on a rather hyperbolic shape, due to the original, non-rounded Minsky circle algorithm.
  • Nearly half of it seems to be missing, due to δ*ε being less than 0.
  • Certain “congruence regions”, that is, areas in Minskyspace where the periods are the same which produce identical orbits modulo translation, are shaped like polyhedra instead of infinitely thin slices when sliced through with a 3-dimensional plane! (some of the faces are curved, though)
  • At irrational δ*ε , there can be infinitely fine structure around that area, but a conjecture is that there is no irrational δ*ε which has infinite period.
  • All accumulation points, infinitely near their centers, have unlimited period.
  • Conjecture: All congruence regions are Cartesian products of two polygons with at most 6 hyperbolically curved sides.
That's a bit of what we know for sure. There are tons of conjectures, and I've hosted a version of the Minsky "to do" list, "Problems posed and solved", at neilbickford.com/Minsky/problems.txt.

In summary: although some algorithms may seem simple, there can be great areas of mathematics behind them. We still don't know all about the shapes that the Minsky circle algorithm creates, and there are still many problems waiting to be uncovered and solved. Even if you don't want to tackle recursive formulae, just discovering areas in Minskyspace is a fun and entertaining pastime.

Fractals

Fractals are a relatively new mathematical concept: shapes that have detail at all levels. (That is, you can keep zooming into the shape and always find new patterns.) Fractals themselves originated a few million years ago, in nature, but they have been named and studied for less than 200 years, and for less than 50 if you don’t count the period when mathematicians were simply calling them “monsters”.

The simplest type of fractal is one where you take a shape and repeatedly turn it into a number of smaller copies of itself. Probably the most famous example of this technique is the Sierpinski Gasket, created by taking one triangle and turning it into 3 half-size copies of itself:

Iterations of the Sierpinski Gasket

As you can see, at each succeeding iteration every triangle is replaced by three smaller ones, and so on. The same fractal can be made subtractively, by progressively taking triangles out of triangles, and both ways can easily be done on triangular graph paper.

Nearly as well known as the Sierpinski Gasket is the Sierpinski Carpet. To make this fractal, you take a square, divide it into 9 smaller squares, remove the center one, and repeat on the remaining 8. Naturally, the same can be done additively: replace a square with 8 third-size copies of itself, arranged around an empty center:

Iterations of the Sierpinski carpet

As well as 2D self-similar fractals, it’s possible to create 1D self-similar fractals: curves that bend off into two dimensions. An example of this is the Koch Curve, described in 1904 by Niels Fabian Helge von Koch. To make it, start with a line segment, divide it into 3 equal parts, and replace the middle part with 2 segments of the same length, forming a triangular bump. This is shown best by the “generator” for the Koch curve, which shows the before and after:

If you repeat these steps, you’ll generate the following picture:

This fractal was originally called a “monster” because it is continuous everywhere (there are no gaps), yet it has no well-defined slope at any point, a combination mathematicians of the time regarded as ridiculous. A fractal described earlier, in 1890, by the Italian mathematician Giuseppe Peano, the Peano curve, goes further and manages to fill space:
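If you’d like to draw the Koch curve yourself, here’s a minimal sketch of the replacement rule in Python, representing the curve as a list of complex numbers (my own illustration, not anyone’s official implementation):

def koch_step(points):
    # Replace each segment with four segments forming the triangular bump.
    out = [points[0]]
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        peak = a + d + d * complex(0.5, 3 ** 0.5 / 2)  # d rotated by 60 degrees
        out += [a + d, peak, a + 2 * d, b]
    return out

curve = [0j, 1 + 0j]
for _ in range(4):
    curve = koch_step(curve)
# "curve" now holds the 4^4 + 1 = 257 vertices of the level-4 Koch curve;
# plot them in order to see the picture above.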

Mathematicians regarded this as even crazier than the later Koch curve: is it a line, or is it a square? It’s actually a schematic for converting one dimension into two! Crazier still was the Hilbert curve, discovered by David Hilbert in 1891:

Each iteration consists of taking the previous curve, making 3 more copies of it (suitably rotated), and connecting the 4 pieces together.

Some of these 1-dimensional curves can be made into closed 2-dimensional shapes by joining multiple copies together. For example, there’s the Koch Snowflake, which comes from 3 copies of the Koch Curve:

Surprisingly enough, the total area of the Koch Snowflake, though it is the sum of an infinite series, comes out to exactly 8/5 the area of the original triangle! Some 1-dimensional curves, such as the Peano curve, don’t need to be joined to look 2-dimensional. However, the Peano curve fills what is simply a square, and a square is certainly not a fractal. The Flowsnake, discovered by Bill Gosper and later renamed the “Peano-Gosper curve”, achieves the goal of filling a region whose boundary is both fractal and tileable!
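To see where the 8/5 comes from (a quick derivation of my own): step k of the construction glues on 3·4^(k−1) new triangles, each with 1/9^k the area A of the original triangle, so the total area is the geometric series

A\left(1 + \sum_{k=1}^{\infty}\frac{3\cdot 4^{k-1}}{9^{k}}\right) = A\left(1 + \frac{1}{3}\cdot\frac{1}{1-4/9}\right) = A\left(1+\frac{3}{5}\right) = \frac{8}{5}A.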

3-Dimensional “Simple” Fractals

Up to this point we have been talking about 1- and 2-dimensional fractals, but in an effort not to make fractals too easy for computers, we now turn to 3-dimensional self-similar fractals.  To start off, the Sierpinski Carpet can be turned into a 3-dimensional version simply by replacing the squares with cubes. This requires 20 times as many cubes for each step, but if you happen to have that much wood and glue around the house, you can easily make it by replacing each cube with 20 cubes, in the fashion shown below. This creates an object called the Menger Sponge, but you aren’t likely to see it on the sponge market anytime soon:

The sponge pictured above is a level-4 sponge, which would take 160,000 cubes to create. 8,000 cubes, a level-3 sponge, is a much more manageable number, and so Dr. Jeannine Mosely decided to create a level-3 Menger Sponge out of business cards. Six cards form each cube, the cubes are then linked together, and the outside is finally paneled, for a grand total of 66,048 business cards. Quite a while later, Dr. Mosely finished it:

As you can see, it’s large enough to crawl into, but as fun as it may seem, Dr. Mosely says that a Level 4 sponge made out of business cards would simply not be possible to make without structural support.

The Sierpinski Gasket also has a 3-dimensional analog: the Sierpinski Tetrahedron. To make it, you take the previous level, make 3 more copies of it, and join the 4 pieces at their corners.

George Hart has made a 3D model of it, and even has a good description of how to make it in his Math Monday post. With all this, you might think that there would be an interesting 3D version of the Koch Snowflake. However, when doing it like you normally would (tetrahedra on triangle) you get something quite unexpected…

Now join 4 of those together, and you get a cube. From a fractal.

All this time we have been referring to fractals as “1-dimensional”, “2D”, and “in 3 dimensions” when in fact, as we have seen, they clearly aren’t quite any of those: the Peano “1D” fractal may as well be a square, the Sierpinski “2D” gasket may as well be a web of lines, and other fractals fall somewhere in between. To make deciding easier, Felix Hausdorff and Abram Samoilovitch Besicovitch invented the concept of fractal dimension, which need not be an integer but can be any number in between. To compute the fractal dimension of a self-similar shape, we use the formula D=Log(N)/Log(l), where N is the number of pieces that replace the original shape, and l is one over the scale factor of each piece. (The base of the logarithm does not matter.) For example, in the Sierpinski gasket, N=3 and l=2, which means that D=Log(3)/Log(2), or about 1.584962501. This means that the Sierpinski gasket is quite a bit more than a line, but somewhat less than a filled square. Similarly, the fractal dimension of the “3D” Menger Sponge is Log(20)/Log(3)=2.726833028. Finally, the fractal dimension of the Peano curve is Log(9)/Log(3), exactly 2, which means that the Peano curve, at the ∞th iteration, may as well be a square. A large list of fractals and their fractal dimensions can be found at Wikipedia.
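In Python, the computation is a one-liner; here’s a small sketch (the names are mine) checking the values above:

from math import log

def similarity_dimension(n_copies, scale):
    # D = Log(N)/Log(l): N self-similar copies, each scaled down by l.
    return log(n_copies) / log(scale)

print(similarity_dimension(3, 2))   # Sierpinski gasket: 1.58496...
print(similarity_dimension(20, 3))  # Menger sponge:     2.72683...
print(similarity_dimension(9, 3))   # Peano curve:       2.0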

Fractals Imitating Reality, or Vice Versa?

Self-similar fractals are not just an abstraction, though. Many plants, such as cauliflower and broccoli, show self-similar behavior. On cauliflower it can be seen in the lobes of the surface; broccoli is a bit more chaotic, but still shows the same behavior:

Trees can also be simulated reasonably well using fractals:

 

A tree fractal

You can see the first branches by the dots
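Here’s one simple branching rule in Python, sketched with made-up angle and shrink factors (the tree pictured above was surely drawn with different parameters): each branch spawns two shorter branches rotated to either side.

import cmath

def tree(base, tip, depth, segments, angle=0.5, shrink=0.7):
    # Record this branch, then recursively grow two shrunken,
    # rotated branches from its tip.
    segments.append((base, tip))
    if depth == 0:
        return
    d = (tip - base) * shrink
    for a in (angle, -angle):
        tree(tip, tip + d * cmath.exp(1j * a), depth - 1, segments, angle, shrink)

segs = []
tree(0j, 1j, 8, segs)  # a vertical trunk plus 8 levels of branches
# Draw each (base, tip) pair in segs as a line segment to see the tree.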

Escape-Time Fractals

Self-similar fractals get a bit boring after a while, so let’s explore another kind of fractal: escape-time fractals. Escape-time fractals live on the complex plane, which is an extension of the real number line. Basically, it’s a plane of numbers of the form x+i*y, where i is the “imaginary unit” sqrt(-1). Complex numbers (a+i*b) support all the operations of real numbers, such as addition (a+i*b+c+i*d=(a+c)+i(b+d)) and multiplication ((a+i*b)*(c+i*d)=(ac-bd)+i(bc+ad)), as well as division, square roots, exponentiation, and every other function that can be applied to the real numbers. Now, consider a function repeatedly applied to an initial complex number until the point escapes a circle of radius r, and color the initial point according to the number of iterations it takes to escape from the circle. If the point never escapes, or at least doesn’t escape within some maximum number of iterations, color the point black.
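In code, that recipe looks something like this sketch (the function and parameter names are mine):

def escape_time(f, z0, radius=2.0, max_iter=256):
    # Iterate z -> f(z), returning how many steps it takes |z| to
    # leave the circle of the given radius, or None if it never
    # escapes within max_iter steps (i.e., the point is colored black).
    z = z0
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = f(z)
    return None

# Example: the z -> z^2 + c rule discussed below, for one value of c
print(escape_time(lambda z: z * z + complex(0.3, 0.5), 0j))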

Pierre Fatou and Gaston Julia first investigated iterations of this type in the 1910s, specifically iterations of the form z->z^2+c, where c is a constant and z is the point that changes. However, without computers they could only note the chaos of the system: Julia studied the iteration for a fixed c and an arbitrary starting z, while Fatou studied it for a fixed c with the starting value of z set to 0. It wasn’t until 1979 that Benoit Mandelbrot expanded on Julia’s work using computer analysis.

Mandelbrot decided to use a computer to plot Julia sets using the color-by-escape-time method described above, and was stunned by the results. Julia sets are self-similar fractals, but far more varied and colorful than the ones we’ve seen so far, as the following video of a collection of Julia sets shows:

The next logical step after studying these was to let c equal the initial value of z at every point, and in doing so he created what is now known as the Mandelbrot Set.


The Mandelbrot Set is unlike any of the fractals we’ve come across so far in that it is not exactly self-similar: although there are shapes farther in that look like the Mandelbrot Set, they aren’t quite copies of the whole. There are areas of the Mandelbrot set, such as the antennae-like regions on the left explored in the video “Trip to E214” (a 10^214 zoom), so deep that at that magnification the smallest particles postulated by physics would be nearly a googol times larger than the universe:

Or you can zoom into “Seahorse Valley” (I am not making these names up), where there are intricate spiral structures:

Or into a whole host of other places, using fractal software such as the programs listed at the end of this post.

An interesting thing about the Mandelbrot Set is that the farther you zoom into a point, the more the picture looks like the corresponding Julia set for that point, as in this video, also of the Seahorse Valley area of the Mandelbrot Set:

What’s also interesting about the Mandelbrot Set is that no matter how far you zoom in, there always appear to be more intricate structures, which has led to groups specializing in extremely deep computer zooms into the set. The Trip to E214 is one example, and I believe the record for high definition is 10^275:

and for low definition there’s 10^999:

New structures can even pop up deep into the set, such as a long string of Xs:

(For those who don’t have the patience to watch the above video, the point can be seen at http://stardust4ever.deviantart.com/art/XX-Reactor-Core-Deep-Zoom-131573460)

By now you should have noticed a really interesting optical illusion: when you look away from one of these videos, everything seems to shrink inward! It’s also a sign not to watch all of the videos at once.

The fractal dimension of the Mandelbrot Set can also be computed, but it’s quite complicated. In fact, it was not until 1991 that Mitsuhiro Shishikura proved that the Hausdorff dimension of the boundary of the Mandelbrot set equals… 2. The area of the Mandelbrot Set, however, is not so simple. Although nobody has figured out a way to calculate it precisely (the best, and indeed only, formula I know of is given in equations 7-10 on the MathWorld page), it is possible to estimate it by sampling pixels from -2-2i to 2+2i and finding what percentage of them are in the set. The current best known value for the area is 1.50659177 ± 0.00000008, given by Robert Munafo in 2003 on his page “Area of the Mandelbrot Set“. Cyril Soler, a researcher at the Institut National de Recherche en Informatique et en Automatique, conjectures that the value is exactly sqrt(6*pi-1)-e, but whether he is right or wrong is not known. It is also possible to find exact mathematical formulae for some of the subregions of the Mandelbrot Set, such as the large cardioid-shaped blob, which can be expressed in parametric form as c = e^(i*t)/2 - e^(2*i*t)/4, for 0 ≤ t ≤ 2π.

The Mandelbrot Set is also connected, which means that if you take any two points inside the set, you can get from one to the other along some path that stays inside the set.

Lastly, for programmers: you can easily make your own Mandelbrot set generator even if your programming language does not support complex numbers, by iterating z_realtemp->z_real*z_real – z_imag*z_imag + c_real, z_imag->2*z_real*z_imag + c_imag, and z_real->z_realtemp. A good pseudocode example is at http://en.wikipedia.org/wiki/Mandelbrot_set#Computer_drawings.
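Here’s that real-arithmetic iteration as a minimal, runnable Python sketch (Python’s tuple assignment plays the role of z_realtemp), printing a crude ASCII picture of the set:

def mandelbrot_iterations(c_real, c_imag, max_iter=100):
    # Iterate z -> z^2 + c using only real arithmetic; return how many
    # iterations pass before |z| > 2, or max_iter if z never escapes.
    z_real = z_imag = 0.0
    for n in range(max_iter):
        if z_real * z_real + z_imag * z_imag > 4.0:
            return n
        z_real, z_imag = (z_real * z_real - z_imag * z_imag + c_real,
                          2.0 * z_real * z_imag + c_imag)
    return max_iter

for row in range(24):
    print("".join("#" if mandelbrot_iterations(-2.0 + 2.6 * col / 79,
                                               -1.2 + 2.4 * row / 23) == 100
                  else " "
                  for col in range(80)))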

Naturally, the Mandelbrot Set is not the only escape-time fractal there is. First of all, there are the Mandelbrot generalizations, z->z^p+c (0<p<20 in the video below):

[http://www.youtube.com/watch?v=n-zmLPuQg6w]

There’s also the Phoenix Julia set, whose iteration relies not only on the previous point but also on the point before that: z_(n+1) = z_n^2 + Re(c) + Im(c)*z_(n-1), where c is a constant:

A good online program for exploring it on your own is at http://www.jamesh.id.au/fractals/mandel/Phoenix.html
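For the curious, here’s a sketch of the Phoenix iteration in Python (the choice of 0 for the “point before the first” is a common convention, but check it against whichever program you use):

def phoenix_escape_time(z0, c, max_iter=256):
    # z_(n+1) = z_n^2 + Re(c) + Im(c)*z_(n-1); z_(-1) is taken to be 0.
    z, z_prev = z0, 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z, z_prev = z * z + c.real + c.imag * z_prev, z
    return None  # never escaped: inside the set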

There are an infinitude of others, so I won’t go through them all here, but a good gallery of escape-time fractal art is at http://fractalarts.com/ASF/galleries.html.

Escape-time fractals don’t have to have the escape condition be when the point goes outside a circle; in fractals such as the Newton fractal, based on the equation x^3-1=0, the condition is when the point gets close enough to a root of the equation. Basically, what happens is that Newton’s method for finding the roots of an equation,

x -> x - f(x)/f'(x),

is iterated for f(x) = x^3-1, which gives the escape-time formula z -> z-(z^3-1)/(3z^2). Once the point gets close enough to one of the roots of x^3-1 (the three cube roots of unity: x=1, x=-(-1)^(1/3), and x=(-1)^(2/3)), it is colored according to the root it arrived at and the amount of time it took. This creates a Julia-like result:
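In Python, the whole coloring rule fits in a few lines (a sketch; the tolerance and iteration cap are arbitrary choices of mine):

import cmath

def newton_basin(z, max_iter=64, tol=1e-9):
    # Returns (which cube root of unity z converges to, how many steps
    # it took), or None if it hasn't converged after max_iter steps.
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for n in range(max_iter):
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, n   # color by root index i, shade by n
        if z == 0:            # derivative vanishes here; bail out
            return None
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return None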

Once again, this can be generalized to different functions and powers, such as f(x)=x^5-1:

It also turns out that most, if not all, self-similar fractals can be implemented as escape-time fractals. For example, the MilleniumFractal fractals page gives the formula for an escape-time version of the Sierpinski Gasket as:

For each point c:
z_0 = c

z_(n+1) = 2*z_n - i,  if Im(z_n) > 0.5
z_(n+1) = 2*z_n - 1,  if Re(z_n) > 0.5 and Im(z_n) ≤ 0.5
z_(n+1) = 2*z_n,      if Re(z_n) ≤ 0.5 and Im(z_n) ≤ 0.5

What happens is that, at each iteration, a point’s distance from the true gasket is doubled, so any point not exactly on the gasket is eventually “thrown out” of the set.
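As a Python sketch (using my reading of the formula above; the escape radius is an arbitrary choice of mine):

def sierpinski_escape_time(c, max_iter=64, radius=10.0):
    # Escape-time Sierpinski gasket: points on the gasket never escape.
    z = c
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        if z.imag > 0.5:
            z = 2 * z - 1j
        elif z.real > 0.5:
            z = 2 * z - 1
        else:
            z = 2 * z
    return None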

Coloring Methods

As interesting as the fractals themselves are the methods that can be used to visualize them. For example, instead of coloring the Mandelbrot set by the number of iterations it takes for a point to escape, we could color the points that escape according to iterations+c_real, and the inside points according to the magnitude of c, sqrt(c_real^2+c_imag^2), which produces the following effect:

 

The bands are because of the palette

Many types of visualization for fractals have been invented, such as “incoloring” and “outcoloring” methods. Besides the example above, one such method is biomorphs, invented by Clifford Pickover, which turns the fractals into bacteria-like shapes. The method originally came from an accidental bug in a fractal program, which is perhaps why Mad Teddy’s code might be easier to follow than my explanation!

Also, quite interesting results come from coloring the outside of the Mandelbrot Set in a different color depending on whether a point’s imaginary value is negative when it escapes:

Past that, there are more complicated colorings we can do, starting from the observation that there is action inside the set as well as outside it. If you repeat the Mandelbrot iteration on a single point inside the set over and over, the numbers will settle down toward one value or cycle among several; the sequence of points visited is called the “orbit”. A good applet for seeing this is at http://math.hws.edu/xJava/MB/ (under Tools):

Now, suppose that at each step of the orbit we check whether the point is within a certain area (the “trap”), and if it is, immediately stop the iteration and color the initial point according to where inside the trap it landed. For example, suppose we have a cross-shaped trap centered at 0+0i, colored with a gradient. Then we’d get pictures like this:

 

From fractaldomains.com
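A sketch of the idea in Python, for a cross-shaped trap at the origin (the arm width and the “color by distance to the nearer axis” rule are my own stand-ins for a real gradient):

def cross_trap_value(c, arm=0.05, max_iter=256):
    # Standard Mandelbrot iteration, but stop as soon as the orbit
    # lands within `arm` of either axis and report where it landed.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        d = min(abs(z.real), abs(z.imag))
        if d < arm:
            return d / arm   # in [0, 1): feed this to a gradient
    return None              # orbit never hit the trap: background color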

It turns out that if you take a stalk pattern like this and plot it over the entire Mandelbrot Set, stalks will appear inside the set as well as outside. These stalks are called Pickover stalks after Clifford Pickover, and often create nice spiraling patterns.

Other shapes for orbit traps can be made, with different results.  Circular orbit traps tend to show interesting detail in the Seahorse Valley regions:

 

Animations with orbit traps are especially interesting, because with animation you can not only zoom in, but you can also change the orbit trap as you’re zooming in!

A further explanation (and where a few of the images come from) is at http://www.fractaldomains.com/tutorial/orbit/index.html, and a large gallery of orbit traps is at http://softology.com.au/gallery/gallerymandelbrotorbittraps.htm!

Expanding on the idea of orbit traps, Melinda Green proposed the following idea in 1993: take a 2-dimensional array of integers, and perform the standard Mandelbrot set iteration for each point, recording the places the orbit visits. If the point turns out to escape (that is, it is not inside the Mandelbrot Set), take the list of points its orbit visited and add 1 to the corresponding cells of the array. After you’ve computed all the points, you wind up with an array of counts which, when scaled and displayed as pixels, creates what Lori Gardi calls the “Buddhabrot”:

Buddhabrots are much more computationally intensive than the standard Mandelbrot set, because you need to sample more than 1 point per pixel and iterate thousands of times per point to get good results; otherwise “noise” will appear around the main area. The current record for the largest rendering of a Buddhabrot is held by Johann Korndoerfer at 20,000×25,000 pixels, resulting in an 88 MB JPEG file! He has an interesting write-up of his record on his blog, including the large image and a Firefox-friendly 5000×6000-pixel image. The image used 1,000,000 to 5,000,000 iterations per point and took 16 hours using a custom Lisp program on an 8-core Xeon machine.
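Here’s the accumulation loop as a small Python sketch (random sampling over the square from -2-2i to 2+2i; the sizes and counts are toy values, nowhere near what a good render needs):

import random

def buddhabrot_counts(size=256, samples=200000, max_iter=1000):
    counts = [[0] * size for _ in range(size)]
    for _ in range(samples):
        c = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        z, orbit = 0j, []
        for _ in range(max_iter):
            z = z * z + c
            orbit.append(z)
            if abs(z) > 2:            # escaped, so c is outside the set:
                for p in orbit:       # deposit its whole orbit
                    col = int((p.real + 2) * size / 4)
                    row = int((p.imag + 2) * size / 4)
                    if 0 <= col < size and 0 <= row < size:
                        counts[row][col] += 1
                break
    return counts  # scale the counts to gray levels to get the image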

Back to Three Dimensions

At some point or another, people decided that fractals, as computationally intensive as they may be, were getting too easy for computers. On October 13, 2006, Marco Vernaglione set out the challenge of finding a 3D analog of the Mandelbrot Set, and on 8/11/2009, Daniel White of skytopia.com succeeded, discovering a 3-dimensional version of the Mandelbrot Set: the Mandelbulb.

 

By rephrasing the Mandelbrot set as an iteration in polar coordinates, White managed to generalize the iteration to 3D polar coordinates, getting the iteration:

r = sqrt(x*x + y*y + z*z )
theta = atan2(sqrt(x*x + y*y) , z)
phi = atan2(y,x)

newx = r^n * sin(theta*n) * cos(phi*n)
newy = r^n * sin(theta*n) * sin(phi*n)
newz = r^n * cos(theta*n)

where n is the power. White originally tried n=2, but with discouraging results. Paul Nylander, however, suggested setting n=8, which created the Mandelbulb as we know it.
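In Python, one step of the iteration, with the Mandelbrot-style “+ c” added back in (the formulas above describe just the z^n part), might look like this sketch:

from math import atan2, cos, sin, sqrt

def mandelbulb_step(x, y, z, cx, cy, cz, n=8):
    # White's spherical-coordinate iteration: convert to polar form,
    # raise r to the n-th power, multiply the angles by n, convert back.
    r = sqrt(x * x + y * y + z * z)
    theta = atan2(sqrt(x * x + y * y), z)
    phi = atan2(y, x)
    rn = r ** n
    return (rn * sin(theta * n) * cos(phi * n) + cx,
            rn * sin(theta * n) * sin(phi * n) + cy,
            rn * cos(theta * n) + cz)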

Using 3D graphics technology, we can zoom into the Mandelbulb and render scenes inside it, some of which can seem amazingly realistic, such as this one, which White calls the “Mandelbulb Spine”:

More renders are at Daniel White’s website, which I strongly encourage you to visit!

If it is a fractal, there is an animation of it. The same rule holds for the Mandelbulb, and a number of amazing zooms have been made. For example, there’s Daniel White’s zoom into the top part:

As you can see, it’s fairly hard to navigate in 3D using a 2D mouse.

Other sections of the Mandelbulb resemble the broccoli mentioned earlier, as in the end of this video:

As strange as the Mandelbulb may seem, it has some areas strikingly similar to the Mandelbrot Set. For example, here is a part of the Mandelbulb:

Here’s a Mandelbrot spiral:

After the Mandelbulb was discovered, other 3-dimensional fractals suddenly started to appear, many from FractalForums.com . A stunning example is the Mandelbox, which is like a much more complex version of the Mandelbulb:

On the interior, it can seem cavernous, and with the right coloring it can even seem like an ancient palace:

At last count, there are 354 versions of the Mandelbulb, such as polyhedral IFS and TGlad’s variations… This blog post, long as it may be, is simply too short to talk about all of them.

To 4D, and Beyond!

I’ve skipped ahead a bit by talking about the Mandelbulb and the Mandelbox, because in reality a 4-dimensional fractal, the Quaternion Julia fractal, was discovered first. In 1843, while walking across a bridge in Dublin, Sir William Rowan Hamilton discovered a way to represent a “4-dimensional” complex number, the quaternion, made up of three imaginary parts (i, j, and k) and a real part, governed by the formula:

i² = j² = k² = i j k = −1

It turns out that quaternions are really quite complex (no pun intended), in that they are not commutative under multiplication. That is, the order of multiplication matters; for example, i j = k, but j i = −k.

This makes for some very complex formulae for squaring, multiplication, and other functions (see Paul Bourke’s article on pretty much everything about quaternions). However, the formula for the Quaternion Julia fractal is the same as for the normal Julia set: z=z^2+c, where z is a quaternion and c is a constant quaternion. If we choose the right slice of the 4D object to display on the screen, we get very strange self-similar fractals:
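Since most languages don’t have quaternions built in, here’s a sketch of the multiplication rule and the resulting Julia test in Python, with quaternions as (w, x, y, z) tuples:

def quat_mul(a, b):
    # Hamilton product; note quat_mul(a, b) != quat_mul(b, a) in general.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_julia_escapes(z, c, max_iter=64):
    # Iterate z -> z^2 + c; report whether the orbit leaves radius 2.
    for _ in range(max_iter):
        zz = quat_mul(z, z)
        z = tuple(p + q for p, q in zip(zz, c))
        if sum(p * p for p in z) > 4:
            return True
    return False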

Videos of the quaternion Julia fractals changing are even harder to comprehend, lending a bit of truth to A. Square’s story:

A 4-dimensional Mandelbrot set can also be made, but so far as I know nobody’s done a good rendering of it yet.

First, suppose we go back to the original Mandelbrot Set. For every point in the Mandelbrot Set, we can generate a Julia set by using that point’s value of c as the Julia set’s constant. Now, suppose we take all of the Julia sets along one column of the Mandelbrot set and layer them on top of each other like pages in a stack, creating a 3-dimensional object. Next, suppose we do the same with all of the columns in the Mandelbrot Set, creating a whole row of 3-dimensional fractals. Lastly, we take all of those 3D fractals and layer them on top of each other in 4-dimensional space, and we have the 4-dimensional version of the Mandelbrot Set (from http://www.superliminal.com/fractals/surfaces/index.html):

 

Best detail I could find. If you have a better one, feel free to post it in the comments!

Of course, you could also use quaternions and the formula z=z^2+c to compute another 4D Mandelbrot, but it turns out that all it does is spin the set around:

 

From Paul Nylander

Now, if it turns out that 4 dimensions isn’t enough, we can always generalize fractals to higher and higher dimensions. We can simulate 5-dimensional cauliflower, 7-dimensional Koch snowflakes, or we can even generalize the Quaternion Julia, Mandelbrot, and Mandelbulb formulas to 8 dimensions or more.

But in the end, it all comes down to how fast we can draw. And whether by hand or by computer, fractals are still amazing.

 

PiCF version 1.0.0 is now out!

All major bugs have been fixed, the integrated help system is now functional, and it even comes with a tutorial! It’s described in more detail in my pi post.

Currently only for x64 Windows: download here (0.416 MB)