Creating Fake Landscapes

Suppose, just for a moment, that you’re writing a program that allows you to explore a planet, similar to the Earth, but completely made up. The main problem in writing such a program would be generating the terrain of the planet: all the mountains, trenches, and everything in between. Sure, you could spend years modeling the entire landscape, meter by meter, kilometer by kilometer, but it’s much easier just to write a different program to automatically generate the terrain of the planet. (The former method would in fact take about 16 million years: the Earth has roughly 5.1×10^14 square meters of surface, so at one square meter per second, without any breaks or sleep, you’d be modeling for 5.1×10^14 seconds.) It turns out that it’s possible to create very realistic landscapes, even by using methods which at first would seem to have no correspondence whatsoever with how terrain forms in real life.

Reversing the Game of Life for Fun and Profit

This post is mostly a transcript with notes of a talk I did at G4GX, the 10th Gathering for Gardner. As such, it differs from the normal format a bit.

(Videography courtesy of Peter Bickford; there was a cameraman recording all the talks there, but I haven’t heard from him recently.)

Transcript (sentences in brackets represent notes, i.e., things I wanted to say in the talk but didn’t have time for):


Sliding Block Puzzles, Part 3

This article may as well be part of a series: 1 2 (read this first)

A note on notation: We use the abbreviation “simple^3” in this post to refer to simple simple simple sliding block puzzles.

Warning: This article contains original research! (It’s also more papery than most of the other articles on this site.)

In the previous post I mentioned several methods that could be used to speed up the finding and exhaustive searching of sliding block puzzles, as well as a candidate for the ‘hardest’ one-piece-goal diagonal-traverse 4×4 sliding block puzzle. I am pleased to say that the various bugs have been worked out of the sliding block puzzle searcher, and that 132 has been confirmed to be the maximum number of moves for a simple simple simple 4×4 sliding block puzzle with no internal walls!

(that’s not a lot of qualifiers at all)

As the reader may expect, this blog post reports the main results from the 4×4 search. Additionally, I’ll go over precisely how the algorithms very briefly mentioned in the previous post work, along with some estimated running times for various methods. Following that, various metrics for determining the difficulty of an SBP will be discussed, and a new metric will be introduced which, while slow and tricky to implement, is expected to give results that actually correspond to the difficulty as measured by a human.

Search Results

Running SBPSearcher on all the 4×4s takes about 3 hours (processing 12,295,564 puzzles), a turnaround quick enough to be quite useful for fixing hidden bugs in the search algorithm. Here are the best simple^3 4×4 n-piece SBPs, where n goes from 1 to 15:

For those of you with a text-only web browser, your lucky numbers are: 1,4,9,19,36,51,62,89,132,81,64,73,61,25,21


And for comparison, the move counts vs. the previously reported move counts:

New:       1   4   9  19  36  51  62  89 132  81  64  73  61  25  21
Previous:  1   4   9  24  36  52  68  90 132  81  64  73  61  25  21

Notice that every entry in the top row of the table is less than or equal to its respective entry in the bottom row, some (such as in the case of p=7 or p=4) being much less. This is because the previous search (run, in fact, as a subfeature of the sliding block puzzle evolver) started with all possible positions and defined the goal piece to be the first piece encountered going left to right, top to bottom, across the board. As such, that search included both all the 4×4 simple^3 sliding block puzzles and a peculiar subclass of sliding block puzzles where the upper-left square is empty but the goal piece is often a single square away! This accounts for the 6-piece and 8-piece cases (in which the first move is to move the goal piece to the upper-left), but what about the other two cases?

For the 4-piece case, the originally reported puzzle (see here for the whole list) wasn’t simple^3, and it isn’t possible to convert it into a simple^3 SBP by shifting blocks in less than 5 moves. Interestingly, the new ‘best’ puzzle for 4 pieces is just James Stephens’ Simplicity with a few blocks shifted, a different goal piece, and a different goal position! (There could be copyright issues, so unless stated, the puzzles in this article are public domain, except for the one in the upper-right corner of this image.) Additionally, the 7-piece, 68-move puzzle in the previous article is impossible to solve! The upper-left triomino should be flipped vertically in place. I expect this was a typo, but the question still stands: why is there a 6-move difference?

As mentioned before, the actual statement of what constitutes a simple simple simple puzzle is this: “A sliding block puzzle where the piece to be moved to the goal is in the upper-left corner, and the goal is to move the goal piece to the lower-right corner of the board”. Unfortunately, there’s some ambiguity as to when a piece is in the lower-right corner of the board: is it when the lower-right cell is taken up by a cell of the goal piece, or is it when the goal piece is as far to the lower-right as it can be? If we take the latter to be the definition, then additional ambiguities pop up in certain positions, such as the following one, often encountered by SBPSearcher:

0000
0000
0111
0102

Which piece has just moved into the lower-right?

Because of problems like these, SBPSearcher uses the former definition, which means that puzzles where the goal piece takes the shape of an R aren’t processed. (In actuality, it’s SBPFinder that does this checking, when it checks if a puzzle is in the ‘justsolved’ state). If we say that the first definition is stricter than the second, then it could be said that SBPSearcher searched only through the “Strict simple simple simple 4×4 Sliding Block Puzzles”. While I don’t think that any of the results would change other than the p=7 case, it would probably be worth it to modify a version of SBPSearcher so that it works with non-strict simple simple simple sliding block puzzles.

A few last interesting things: The 13-piece case appears to be almost exactly the same as the Time puzzle, listed in Hordern’s book as D50-59! (See also its entry in Rob Stegmann’s collection) Additionally, the same split-a-piece-and-rearrange-the-pieces-gives-you-extra-moves effect is still present, if only because of the chosen metric.

The Algorithm, In Slightly More Detail Than Last Time

There are only two ‘neat’ algorithms in the entire SBP collection of programs, one each in SBPFinder and SBPSearcher. The first of them is used to find all possible sliding block puzzle ‘justsolved’ positions that fit in an NxM grid, and runs in at most O(2*3^(N+M-2)*4^((N-1)*(M-1))) time. (This is an empirical guess; due to the nature of the algorithm, calculating the exact running time is probably very hairy.)

First of all, the grid is numbered like this:

 0  1  3  6 10
 2  4  7 11 15
 5  8 12 16 19
 9 13 17 20 22
14 18 21 23 24

where the numbers increase going from top-right to lower-left along anti-diagonals, moving to the next diagonal every time the path hits the left or bottom edge. (Technical note: for square grids, the formula x+y<N ? TriangleNumber(x+y+1)-x-1 : N*M-TriangleNumber(N+M-x-y-1)+N-x-1, where x is the column and y the row, should generate this array)
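As a quick check, here’s the formula in C# (my own helper, not SBPFinder’s code); printing it for a 5×5 grid reproduces the table above:

using System;
using System.Linq;

class DiagonalNumbering {
    static int Tri(int k) => k * (k + 1) / 2;   // k-th triangle number

    // x = column, y = row, for an n x n grid
    static int Cell(int x, int y, int n) =>
        x + y < n ? Tri(x + y + 1) - x - 1
                  : n * n - Tri(2 * n - 1 - x - y) + n - x - 1;

    static void Main() {
        const int n = 5;
        for (int y = 0; y < n; y++)
            Console.WriteLine(string.Join(" ", Enumerable.Range(0, n)
                .Select(x => Cell(x, y, n).ToString().PadLeft(2))));
    }
}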

Then, the algorithm starts at cell #0, and does the following:

Given a partially filled board: (at cell 0 this board is all holes)

  1. Take the values of the cells from (cell_number-1) down to the number of the cell to the left of the current one, and put them in list A.
  2. Remove all duplicates from A.
  3. If the following values do not exist in A, add them: 0 (i.e., a hole) and the smallest value between 1 and 4 inclusive that does not exist in A (if there is no such value between 1 and 4, don’t add anything except the 0).
  4. Run fillboard (that is, the current function), giving it cell_number+1 and the board given to this level of the function with the value of cell_number changed to n, for each value n in A.

However, if the board is all filled (i.e., cell_number=25 for the 5×5 example), check that the board has holes and that it is justsolved; if so, standardize the piece numbering, make sure that you haven’t encountered the board before, and sort it into the appropriate piece-number “box”.
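In C#, the recursion looks roughly like this. It’s a sketch, not SBPFinder’s actual code: the justsolved test, numbering standardization, and left-neighbor lookup are placeholder stubs, and the grid is 3×3 so the stubbed version finishes quickly:

using System;
using System.Collections.Generic;

class FillBoardSketch {
    const int N = 3, M = 3;   // kept small so the stubbed sketch runs fast
    static readonly HashSet<string> Seen = new HashSet<string>();

    static void FillBoard(int[] board, int cell) {
        if (cell == N * M) {   // base case: the board is full
            if (HasHole(board) && IsJustSolved(board)) {
                int[] canon = StandardizeNumbering(board);
                if (Seen.Add(string.Join(",", canon)))
                    Store(canon);   // sort into its piece-count "box"
            }
            return;
        }
        // Steps 1-2: candidates are the values filled in since the cell to
        // the current cell's left in the diagonal numbering, deduplicated.
        var a = new HashSet<int>();
        for (int j = LeftNeighborNumber(cell); j < cell; j++) a.Add(board[j]);
        // Step 3: a hole (0) is always allowed, plus the smallest of 1..4 not in A.
        a.Add(0);
        for (int v = 1; v <= 4; v++) if (a.Add(v)) break;
        // Step 4: recurse with each candidate value placed in this cell.
        foreach (int v in a) { board[cell] = v; FillBoard(board, cell + 1); }
    }

    // Placeholder stubs (assumptions) so the sketch compiles:
    static bool HasHole(int[] b) => Array.IndexOf(b, 0) >= 0;
    static bool IsJustSolved(int[] b) => true;                      // real check omitted
    static int[] StandardizeNumbering(int[] b) => (int[])b.Clone(); // real step omitted
    static void Store(int[] b) { }
    static int LeftNeighborNumber(int c) => Math.Max(c - (N + M - 1), 0); // rough over-approximation

    static void Main() {
        FillBoard(new int[N * M], 0);
        Console.WriteLine(Seen.Count + " candidate boards (with stubbed checks)");
    }
}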

Once the fillboard algorithm finishes, you should have N*M-1 boxes with all the possible justsolved positions that can be made on an NxM grid! There are a number of other possible methods that do the same thing; all one of these basically needs to do is generate all possible ways that pieces fitting in an NxM grid can be placed in a grid of that size.

For example, you could potentially go through all possible 5-colorings of the grid (4 colors plus holes) and remove duplicates, but that would take O(5^(N*M)) time, which isn’t the most feasible option for even a 4×4 grid. You could also proceed in a way that would generate the results for the next number of pieces based on the results of the current number of pieces and all possible rotations and positions of a single piece in an NxM grid, by seeing which pieces can be added to the results of the current stage, but that would take O(single_piece_results*all_justsolved_results) time. While that number may seem small, taking into account the actual numbers for a 4×4 grid (single_piece_results=11505 and all_justsolved_results=12295564) reveals the expected running time to be about the same as the slow 5-coloring method. However, it may be possible to speed up this method with various tricks that reduce which pieces need to be checked as to whether they can be added. Lastly, one could go through all possibilities of edges separating pieces, and then figure out which shapes are holes. The time for this would be somewhere between O(2^(3NM-N-M)) and O(2^(2NM-N-M)), the first being clearly infeasible and the second being much too plausible for a 4×4 grid.

In practice, the fillboard algorithm needs to check about 1.5 times the estimated number of boards to make sure it hasn’t found them before, resulting in about half a billion hash table lookups for a 4×4 grid.

The second algorithm, which SBPSearcher is almost completely composed of, is much simpler! Starting from a list of all justsolved puzzles (which can be generated by the fillboard algorithm), the following is run for each board in the list:

  1. Run a diameter search from the current puzzle to find which other positions in the current puzzle’s graph have the goal piece in the same position;
  2. Remove the results from step 1 from the list;
  3. Run another diameter search from all the positions from step 1 (i.e., consider all positions from step 1 to be 0 moves away from the start and work from there), and return the last position found where the goal piece is in the upper-left.

Step 2 is really where the speedup happens: because each puzzle has a graph of positions that can be reached from it, and some of these positions are also in the big list of puzzles to be processed, you can find the puzzle furthest away from any of the goal positions by just searching away from all of them at once. Then, because the entire group has been solved, you don’t need to solve the group again for each of the other goal positions in it, and those can be removed from the list. For a 4×4 board, the search can be done in 3 hours, 27 minutes on one thread of a computer with a Core i7-2600 @ 3.4 GHz and a reasonable amount of memory. In total, the entire thing, finding puzzles and searching through all of the results, can be done in about 4 hours.
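In sketch form, steps 1-3 amount to a multi-source breadth-first search. The following C# is an illustration, not SBPSearcher’s actual code: positions are left as a generic type T, moves enumerates the positions one action away, and isStart tests whether the goal piece is in the upper-left:

using System;
using System.Collections.Generic;

static class DiameterSearch {
    // Returns the last start-type position reached, i.e. the one furthest
    // (in moves) from every goal position at once.
    public static T FurthestStart<T>(IEnumerable<T> goalPositions,
                                     Func<T, IEnumerable<T>> moves,
                                     Func<T, bool> isStart) {
        var seen = new HashSet<T>();
        var queue = new Queue<T>();
        foreach (var g in goalPositions)
            if (seen.Add(g)) queue.Enqueue(g);   // all goals start at distance 0
        T best = default(T);
        while (queue.Count > 0) {
            var b = queue.Dequeue();             // BFS dequeues in distance order,
            if (isStart(b)) best = b;            // so the last hit is the furthest
            foreach (var n in moves(b))
                if (seen.Add(n)) queue.Enqueue(n);
        }
        return best;
    }
}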

Of course, where there are algorithms, there are also problems that mess up the algorithms. For example, how would it be possible to modify SBPSearcher’s algorithm to do, say, simple simple puzzles? Or, is it possible to have the fillboard algorithm work with boards with internal walls or boards in weird shapes, or does it need to choose where the walls are? An interesting fact suggesting that the answer to the first question might be yes is that finding the pair of points furthest apart in a graph (which would be equivalent to finding the hardest compound SBP in a graph) requires only 2 diameter searches! Basically, you start with any point in the graph, then find the point furthest away from that, and let it be your new point. Then, find the point furthest away from your new point, and the two points, the one you’ve just found and the one before that, are the two points farthest away from each other. (See “Wennmacker’s Gadget”, page 98-99 and 7 in Ivan Moscovich’s “The Monty Hall Problem and Other Puzzles”.)

Metrics

a.k.a. redefining the problem

Through the last 3 posts on sliding block puzzles, I have usually used the “Moves” metric for defining how hard a puzzle is. Just to be clear, an action in the Moves metric is defined as sliding a single piece to another location by a sequence of steps to the left, right, up and down, making sure not to pass through any other pieces or invoke any other quantum phenomena along the way. (The jury is out as to whether sliding at the speed of light is allowed.) While the majority of solvers use the Moves metric (my SBPSolver, Jimslide, KlotskiSolver, etc.), there are many other metrics for approximating the difficulty of a sliding block puzzle, such as the Steps metric and the not-widely-used sliding-line metric. The Steps metric is defined as just that: an action (or ‘step’) is sliding a single piece a single unit up, down, left, or right. The sliding-line metric is similar: an action is sliding a single piece any distance in a straight line up, down, left or right. So far as I know, only Analogbit’s online solver and the earliest version of my SBPSolver used the Steps metric, and only Donald Knuth’s “SLIDING” program has support for the sliding-line metric. (It also has support for all the kinds of metrics presented in this post except for the BB metrics!)

Demonstration of various metrics

Additionally, each of the 3 metrics described above has another version with the same constraints, but which can move multiple pieces at a time in the same direction(s). For example, a ‘supermove’ version of the Steps metric would allow you to slide any set of pieces one square in any one direction. (As far as I know, only Donald Knuth’s SLIDING program and Soft Qui Peut’s SBPSolver have support for any of the supermove metrics.) In total, combining the supermove metrics with the normal metrics, there are 6 different metrics and thus 6 different ways to express the difficulty of a puzzle as a number. Note, however, that a difficulty in one metric can’t be converted into another, which means that for completeness when presenting results you need to solve each sliding block puzzle 6 different ways! Even worse, the solving paths in different metrics need not be the same!

For example, in the left part of the image above, where the goal is to get the red piece to the lower-right corner, the Moves metric would report 1, and the red piece would go around the left side of the large block in the center. However, the Steps metric would report 5, by moving the yellow block left and then the red block down 4 times. Also, in the right picture both the Moves and Steps metrics would report ∞, because the green block can’t be moved without intersecting with the blue, and the blue block can’t be moved without intersecting with the green, but any of the Supermove metrics would report a finite number by moving both blocks at once!

Various other metrics can be proposed: some with other restrictions (you may not move a 1×1 next to a triomino, etc.), and some which, like the supermove metrics and the second puzzle above, actually change the way the pieces move, until you eventually get to the point where it’s hard to call the puzzle a sliding block puzzle anymore. (For example, Dries de Clerq’s “Flying Block Puzzles” can be written as a metric: “An action is defined as a move or rotation of one piece to another location. Pieces may pass through each other while moving, but only one piece may be moved at a time”.)

Suppose, however, that for now we’re purists and only allow the step, sliding-line, moves, super-step, super-sliding-line, and super-moves metrics. It can be seen, quite obviously in fact, that these metrics don’t in all cases show the actual difficulty of the puzzle being considered. For example, take a very large (say 128×128) grid, and add a 126×126 wall in the center. Fill the moat that forms with 507 1×1 cells, all different pieces, and make the problem be to bring the block in the upper-left down to the lower-right. If my calculations are correct, the resulting problem should take 254*507+254 = 129,032 steps, sliding-line actions, and moves to solve, which would seem to indicate that this is a very hard puzzle indeed! However, any person who knows the first thing about sliding block puzzles should be able to solve it, assuming they can stay awake the full 17 hours it would take!

Because of this discouraging fact (6 competing standards, none of which is quite right), I would like to introduce a 7th, this one based on a theoretical simulation of a robot that knows the first thing about sliding block puzzles, but nothing else.

The BB Metric

a.k.a. attempting not to invoke the xkcd reference

Nick Baxter and I have been working on a metric which should better approximate the difficulty of getting from one point to another in a position graph. The basic idea is that the difficulty of getting from node A to node B in a graph is about the same as the average difficulty of getting from node A to node B over all spanning trees of the graph. However, finding the difficulty of getting from node A to node B in a tree is nontrivial, or at least so it would seem at first glance.

Suppose you’re at the entrance of a maze, and the maze goes on for far enough and is tall enough that you can only see the paths immediately leading off from where you are. If you know that the maze is a tree (i.e., it has no loops), then a reasonable method might be to choose a random pathway and traverse that part of the maze. If you return from that part to the original node, then that part doesn’t contain the goal node, and you can choose another random pathway to traverse, making sure of course not to go down the same paths you’ve gone down before. (Note that to be sure that the goal node isn’t in a part of the maze, you need to walk each of its paths twice: once down the path and then once back up.) For now, we take the difficulty of the maze to be the average number of paths you’ll need to walk on to get to the goal node or decide that the maze has no goal (counting paths you walk down and back up as +2 to the difficulty). Because, if the node you’re in has no descendant nodes which are the goal node, you’ll need to go down all of the paths leading from that node twice, the difficulty of the part of the maze tree branching off from a node A can be calculated as

sum(a_i + 2, i = 1 to n)    (eq. 1)

where n is the number of subnodes, and a_i is the difficulty of the ith subnode. Also, if the node A is on the solution path between the start and end nodes, then the difficulty of A can be calculated as

a_n + 1 + (1/2) sum(a_i + 2, i = 1 to n-1)    (eq. 2)

where a_n is assumed to be the subnode which leads to the goal. This basically states that, on average, you’ll have to go down half of the subpaths, plus the path leading to the goal. Because leaf nodes are assumed to have 0 difficulty, you can work up from the bottom of the maze, filling in difficulties of nodes as you go up the tree. After the difficulty of the root node has been calculated, the length of the path between the start and end nodes should be subtracted, to give labyrinths (mazes with only a single path) a BB difficulty of 0.

Perhaps surprisingly, it turns out that using this scheme, the difficulty of a tree with one goal node is always measured as V-1-m, where V is the number of nodes in the tree (and V-1 is the number of edges, but this is not true for graphs) and m is the number of steps needed to get from the start node to the end node in the tree! Because of this, the difficulty of getting from one point to another in a graph under the BB metric is just the number of vertices, minus one, minus the average path length between the start and end nodes in the graph.
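As a sanity check, here’s a toy C# implementation of eqs. 1 and 2 on a small tree. It’s a sketch (it assumes the goal sits at a leaf, and the tree is hard-coded), but it illustrates the identity:

using System;
using System.Collections.Generic;
using System.Linq;

class BBMetricDemo {
    static List<int>[] adj;
    static int goal;

    // Returns (difficulty of the subtree hanging from node, whether it contains the goal).
    static (double d, bool hasGoal) Difficulty(int node, int parent) {
        if (node == goal) return (0, true);       // reaching the goal ends the walk
        double others = 0, goalPath = -1;
        foreach (int c in adj[node].Where(c => c != parent)) {
            var (d, has) = Difficulty(c, node);
            if (has) goalPath = d;
            else others += d + 2;                 // eq. 1 term: explored down and back up
        }
        return goalPath >= 0
            ? (goalPath + 1 + others / 2, true)   // eq. 2: on the solution path
            : (others, false);
    }

    static void Main() {
        // A 6-node tree: 0-1, 1-2, 1-3, 0-4, 4-5; start at 0, goal at leaf 5.
        int v = 6;
        adj = Enumerable.Range(0, v).Select(_ => new List<int>()).ToArray();
        void Edge(int a, int b) { adj[a].Add(b); adj[b].Add(a); }
        Edge(0, 1); Edge(1, 2); Edge(1, 3); Edge(0, 4); Edge(4, 5);
        goal = 5;
        int m = 2;                                // steps from 0 to 5 in this tree
        double bb = Difficulty(0, -1).d - m;      // subtract the solution path length
        Console.WriteLine($"BB = {bb}, V-1-m = {v - 1 - m}");  // both print 3
    }
}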

A few things to note (and a disclaimer): First of all, the actual graph of which positions can be reached in 1 action from each of the other positions depends on the type of move chosen, so the BB metric doesn’t really remedy the problem of the 6 competing standards! Secondly, the problem of computing the average path length between two points in a graph is really, really hard to do quickly, especially because an algorithm which could also give you the maximum path length between two points in polynomial time would allow you to test if a graph has a Hamiltonian Path in polynomial time, and since the Hamiltonian Path problem is NP-Complete, you could then also do the Traveling Salesman Problem, Knapsack Problem, Graph Coloring Problem, etc. in polynomial time! Lastly, I haven’t tested this metric on any actual puzzles yet, and I’m also not certain that nobody else has come up with the same difficulty metric. If anybody knows, please tell me!

One last note: Humans don’t actually walk down mazes by choosing random paths- usually it’s possible to see if a path dead-ends, and often people choose the path leading closest to the finish first, as well as a whole host of other techniques that people use when trying to escape from a maze. Walter D. Pullen, author of the excellent maze-making program Daedalus, has a big long list of things that make a maze difficult here. (Many of these factors could be implemented by just adding weights to eqns. 1 and 2 above)

Open Problems

a.k.a. things for an idling programmer to do

  • What are the hardest simple simple simple 3×4 sliding block puzzles in different metrics? 2×8? 4×5? (Many, many popular sliding block puzzles fit in a 4×5 grid)
  • How much do the hardest strict simple^3 puzzles differ from the hardest non-strict simple^3 SBPs?
  • How hard is it to search through all simple simple 4×4 SBPs? What about simple SBPs?
  • (Robert Smith) Is there any importance to the dual of the graph induced by an SBP?
  • Why hasn’t anybody found the hardest 15 puzzle position yet? (According to Karlemo and Östergård, only 1.3 TB would be required, which many large external hard drives today can hold! Unfortunately, there would be a lot of reading and writing to the hard disk, which would slow down the computation a bit.) (Or have they?)
  • Why 132?
  • What’s the hardest 2-piece simple sliding block puzzle in a square grid? Ziegler-Hunts & Ziegler-Hunts have shown a lower bound of 4n-16 moves for an n×n grid, n>6. How to do so isn’t too difficult, and is left as a puzzle for the reader.
  • Is there a better metric for difficulty than the BB metric?
  • Is there a better way to find different sliding block puzzle positions? (i.e., improve the fillboard algorithm?)
  • Is it possible to tell if a SBP is solvable without searching through all possible positions? (This question was proposed in Martin Gardner’s article on Sliding Block Puzzles in the February 1964 issue of Scientific American)
  • (Robert Smith) How do solutions of SBPs vary when we make an atomic change to the puzzle?
  • Are 3-dimensional sliding block puzzles interesting?
  • Would it be worthwhile to create an online database of sliding block puzzles based on the OEIS and as a sort of spiritual successor to Edward Hordern’s Sliding Piece Puzzles?

Sources

Ed Pegg, “Math Games: sliding-block Puzzles”, http://www.maa.org/editorial/mathgames/mathgames_12_13_04.html

James Stephens, “Sliding Block Puzzles”, http://puzzlebeast.com/slidingblock/index.html (see also the entire website)

Rob Stegmann, “Rob’s Puzzle Page: Sliding Puzzles”, http://www.robspuzzlepage.com/sliding.htm

Dries De Clerq, “Sliding Block Puzzles” http://puzzles.net23.net/

Neil Bickford, “SbpUtilities”, http://github.com/Nbickford/SbpUtilities

Jim Leonard, “JimSlide”, http://xuth.net/jimslide/

The Mysterious Tim of Analogbit, “Sliding Block Puzzle Solver”, http://analogbit.com/software/puzzletools

Walter D. Pullen, “Think Labyrinth!”, http://www.astrolog.org/labyrnth.htm

Donald Knuth, “SLIDING”, http://www-cs-staff.stanford.edu/~uno/programs/sliding.w

Martin Gardner, “The hypnotic fascination of sliding-block puzzles”, Scientific American, 210:122-130, 1964.

L.E. Hordern, “Sliding Piece Puzzles”, Oxford University Press, 1986

Ivan Moscovich, “The Monty Hall Problem and Other Puzzles”

R.W. Karlemo, R. J. Östergård, “On Sliding Block Puzzles”, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.7558

Robert Hearn, “Games, Puzzles, and Computation”, http://groups.csail.mit.edu/mac/users/bob/hearn-thesis-final.pdf

Ruben Grønning Spaans, “Improving sliding-block puzzle solving using meta-level reasoning”, http://daim.idi.ntnu.no/masteroppgave?id=5516

John Tromp and Rudi Cilibrasi, “Limits on Rush Hour Logic Complexity”, arxiv.org/pdf/cs/0502068

David Singmaster et al., “Sliding Block Circular”, www.g4g4.com/pMyCD5/PUZZLES/SLIDING/SLIDE1.DOC

Thinkfun & Mark Engelberg, “The Inside Story of How We Created 2500 Great Rush Hour Challenges”, http://www.thinkfun.com/microsite/rushhour/creating2500challenges

Ghaith Tarawneh, “Rush Hour [Game AI]”, http://black-extruder.net/blog/rush-hour-game-ai.htm

Any answers to questions or rebukes of the data? Post a comment or email the author at (rot13) grpuvr314@tznvy.pbz

What I’ve Been Working on Lately

Readers of this blog may notice that I haven’t been updating for the last 4 months. The purpose of this “filler” post is for me to say why.

First of all, I could easily blame the season. Summer, as is well known, is a time to “kick back and relax”, as well as to forget about important things you should be doing and read webcomics instead.

I could also blame the wonderful 3D printer I’ve bought, which has quickly filled up my desk, and emptied my wallet, with dozens of tiny plastic models and puzzles.

However plausible that may seem, I would more truthfully blame the projects I’ve been working on, especially my “work” in the field of sliding block puzzles.

As you may know from a post I made about a year ago, I’m quite interested in really hard sliding block puzzles such as the Panex Puzzle (30,000? moves) or Sunshine (729? moves). James Stephens, of puzzlebeast.com, is certainly a pioneer in making really hard sliding block puzzles by computer. Using his “Puzzlebeast” software, he’s made at least 23 progressively harder sliding block puzzles, ranging from 18 to 148 moves! What’s more, he restricts many of these puzzles to only a few pieces, and in at least 11 of them the goals are “simple simple”, that is, move a single piece to a single corner! Even “Simplicity”, which fits on a 4×4 grid, uses only 4 pieces, and takes only 18 moves to solve, is really quite hard for a human! Oskar van Deventer, maker of many mechanical monstrosities, has designed a 3D-printable version of the Simplicity puzzle, calling it the “Hardest sliding piece puzzle on a 4×4 grid”.

Judging from Stephens’ description of Puzzlebeast, the software uses a genetic algorithm to create its puzzles. So far as I can tell, it starts with a random set of puzzles, picks the hardest of that set, and then for the next generation creates random “mutations” of the best puzzles of the last generation. For example, it may add a piece, move a piece, modify a piece, or remove a piece. Repeat this process over and over again, and eventually you’ll have a hard sliding block puzzle. Because it seemed to work very well for him, I eventually tried to make a sliding block puzzle evolver myself.

And so the SBP project began: the earliest version of my SBP evolver was coded in Mathematica, and was very manual. I would start with 5 randomly generated 4×5 rectangular puzzles, without any restrictions on the number of pieces; I then interpreted each puzzle as a “simple simple simple sliding block puzzle” (see footnote 1) and fed it into Analogbit’s online sliding block puzzle solver, which counts moves in steps, meaning that each “step” is sliding one piece one grid space in the puzzle. Once that was finished, I took the best two puzzles, converted them to 1-d arrays, “interleaved” them, and then introduced random mutations into their cells (that is, each cell had about a 1/5 chance of having 1 added to it or subtracted from it). Since that probably isn’t too clear, here’s an example:

Suppose the best two puzzles from the last generation were

 0,19, 1, 2      0,19, 0, 3
 2, 1, 1, 1      2,19, 1, 1
 0, 2, 0, 2 and  1, 3, 2, 0
 1, 1, 0, 0      0, 1, 1, 0
 1, 1, 0, 1      0, 1,19, 2

Then the “interleaving” of the two would be

0,19,1,2,2,1,1,1,0,2,0,2,1,1,0,0,1,1,0,1 +

0,19,0,3,2,19,1,1,1,3,2,0,0,1,1,0,0,1,19,2=

0, 19, 1, 3, 2, 19, 1, 1, 0, 3, 0, 0, 1, 1, 0, 0, 1, 1, 0, 2

and the “mutants” would be generated from there, and the process would repeat.
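In code, the interleave-and-mutate step looks something like the following C# sketch. It’s a reconstruction, not the original Mathematica (in particular, clamping mutated cells at 0 is my own assumption); the interleaving alternates cells from the two parents and reproduces the board above:

using System;
using System.Linq;

class EvolverStep {
    // Take cells alternately from the two parent boards.
    static int[] Interleave(int[] a, int[] b) =>
        a.Select((v, i) => i % 2 == 0 ? v : b[i]).ToArray();

    // Give each cell a ~1/5 chance of having 1 added or subtracted.
    static int[] Mutate(int[] cells, Random rng) =>
        cells.Select(v => rng.NextDouble() < 0.2
                          ? Math.Max(0, v + (rng.Next(2) == 0 ? 1 : -1)) // clamp at 0 (a hole): assumption
                          : v).ToArray();

    static void Main() {
        int[] p1 = { 0,19,1,2, 2,1,1,1, 0,2,0,2, 1,1,0,0, 1,1,0,1 };
        int[] p2 = { 0,19,0,3, 2,19,1,1, 1,3,2,0, 0,1,1,0, 0,1,19,2 };
        Console.WriteLine(string.Join(",", Interleave(p1, p2)));  // matches the interleaving above
        Console.WriteLine(string.Join(",", Mutate(Interleave(p1, p2), new Random())));
    }
}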

Notes: 1. “Simple simple simple sliding block puzzle” is my own term, based on Nick Baxter’s definitions at the Sliding Block Puzzle Page. For me, it means that each piece is different from another only by its shape, and that the goal is to get the top-left-most piece down to the lower-right. The original definition was somewhat ambiguous; as used here, it means that if there is no piece with a square occupying the upper-left corner, it is not a valid simple simple simple sliding block puzzle. Additionally, many of the puzzles in this first section are copied straight from the Mathematica notebook, and so there is a distinct lack of proper number parsing. Parsing manually is simple: if two numbered pieces have the same number and are orthogonally adjacent, they both belong to one piece; otherwise, they belong to different pieces. (Eventually I got a parser working to make these puzzles conform to standards, so the reading should get easier.) Lastly, steps and moves are different: one involves moving one piece one space, and the other involves moving one piece any number of spaces. Edward Hordern’s book, Sliding Piece Puzzles, does a good job of clearing this up.

Anyways, the very first incarnation of the SBP Evolver produced quite fantastic results! In about 10 generations, with populations varying from 4 to 10, the puzzles had progressed from 4 impossible puzzles and one 8-stepper to 3 impossible puzzles and 7 possible ones, one a 58-stepper! Now, of course, this is no big result, as puzzles such as Bob Henderson and Gil Dogon’s “Quzzle-Killer” take 121 steps to solve, so this early experiment was really just a proof of concept.

Eventually, I got tired of feeding all these puzzles through Analogbit, so I decided to write a C# program to do it for me, except instead of using Analogbit, it would use Jim Leonard’s very speedy Jimslide software. Furthermore, it would have much larger populations (perhaps 100 or so) and do many more generations. Some time later, after writing many, many lines of code to process the puzzles for Jimslide’s use, and then processing the output, I finally got it working! Almost immediately, I set it going on a huge number of 4×5 puzzles (10 million) and let it run overnight. It actually took around 4 days, but the best it found was a 176-move puzzle!

1203-4253-4673-8879-ABCC

The 176-mover (by no means the best simple simple)

The SBP Evolver also displayed some behavior which I conjecture is common to most poorly-written genetic evolution programs: Once the program found a really good optimization, if there weren’t any tiny optimizations which would improve the puzzle, it would tend to stay with that puzzle for a while unless, by pure chance, the next generation happened not to contain that puzzle. As such, the 4 days might actually not have been long enough to find the best puzzle. Additionally, Bob Henderson’s Gauntlet #2, another simple simple sliding block puzzle, trumps the 176-mover with 235 moves!

As far as I can tell from his description on Andrea Gilbert’s clickmazes site, Bob Henderson uses a very different method of generating sliding block puzzles. He inputs a collection of shapes, and then his packing program generates all possible ways that those blocks could be placed in a sliding puzzle. He then feeds those through his sliding block puzzle solving program, which returns the best results. I expect it produces results much faster, albeit only with the blocks the designer chooses.

Having been thus discouraged, I set out to go through the much easier 4×4 puzzles instead. This time, after about a day, it generated a 125-move simple simple puzzle! However, it turned out that the puzzle could be backtracked to make a 132-move puzzle, presented below. Interestingly, when I ran the same computation again, it generated another descendant of the 132! Now, of course, the question was: Is this the best 4×4 simple simple simple SBP?

1123-4522-4678-0690

Red to lower-right.

The problem could be easily solved by just generating all possible 4×4 positions, interpreting a simple simple simple SBP out of each, and then solving every single one. But how would you do the former? After all, there can be 16 possible block types in a 4×4 (including holes) and each space could be any of the 16, so you’d have 16^16 = 18,446,744,073,709,551,616 possible combinations! My answer to this was to invoke the 4-color theorem, using 0 for the holes, and 1, 2, and 3 as colors for the other pieces. After the 4-coloring of the board, a parser would assign each block a unique number, and then remove any duplicates. (This was in fact WRONG: the holes could be separate, and so the 4-color theorem applied only to the blocks. I eventually had to use 5 colors for the whole thing.) After around a day or so, the puzzle-finder would process 4,294,967,296 boards, and it returned only 52,825,604 different boards!

Two other methods might have worked as well or better: one would be to generate all 1-piece boards, then generate all 2-piece boards from all nonoverlapping combinations of the 1-piece boards, and so forth. There’s also a method based on dynamic programming which I currently use, which (while it is too long to write out here) is included in the SBPFinder source code, available at the bottom of the post.

After the 52,825,604 different boards were separated into files by the number of pieces, the host software would load each file into a hash table, then solve the first puzzle. After the puzzle was solved, the software would process the output and remove the puzzles mentioned in the solution from the hash table. It would then solve the new first puzzle, and so on. While this algorithm isn’t very fast at all (it took 48 days), it at least had an advantage: by separating the puzzles into files labeled with the number of pieces, it was possible to return the “hardest” puzzle for an arbitrary number of pieces in a 4×4 grid! Below is a table of the hardest simple simple simple SBPs in a 4×4 grid, from 2 to 15 pieces. Keep in mind, though, that since the computation missed out on a few million puzzles, all the below are unverified, and the ones which are very dubious have been removed. Also, because the SBP Evolver determined which piece was the goal piece by going line by line, some of the puzzles below are merely simple simple SBPs, and the ones which are just simple SBPs have been removed. (Confused? Check http://www.johnrausch.com/SlidingBlockPuzzles/4x5.htm)

Note that these results are not to be trusted; for one, bugs have been found in both the problem finder and the puzzle-to-Jimslide converter itself, and so the results certainly aren’t rigorous. Also, the SBP Evolver (which also runs the puzzle-to-Jimslide converter and searcher) was originally designed to treat any puzzle as a simple simple simple, by choosing the goal piece as the first piece encountered going line by line through the board. Lastly, there’s a curious phenomenon around 12 pieces: Breaking a 1×2 into 2 1×1 pieces and shuffling the remainder about creates 9 more moves!

Since then, I’ve been trying to do a verification of the 4×4 computation, specifically by creating another program to rerun the entire search process, return the best results, and then check to see if the results are the same. The catch: I need it to be a few times faster. The optimizations I’m working on include:

•Using one of the alternative methods to find potential SBPs faster,

•Only storing the “justsolved” puzzles, that is, puzzles in which the goal piece is already in the lower-right and can move out of its current position. (Based on an idea from “Rush Hour Complexity”, by John Tromp and Rudi Cilibrasi. It turns out that about 1/4 of 4×4 SBP positions are justsolved, a rather large value, mostly because in most positions there is a piece in the lower-right corner, and the only real limitation is that it must be able to move), and

•Using a custom-written sliding block puzzle solver to find the hardest puzzle in the entire group of positions the justsolved puzzle is linked to. (This can be done in 2 diameter searches: one to find the other justsolved puzzles in the group, and the other to find the starting position furthest away from all the justsolved positions. My custom solver is about 3× slower than Jimslide, but it makes up for it by solving the entire group of puzzles and removing the other justsolved puzzles from the list. There can be a stunning number of other justsolved positions reachable from a puzzle: some have been found with over 14,000!)

However, I’ve hit upon a gypsy moth in my solver causing it not to search puzzles completely! Preliminary tests of the program, though, have revealed the expected running time to be about a day, so stay posted!

Anyways, that’s my explanation as to what’s been going on, and I apologize for my relative silence.

Fractals

Fractals are a relatively new mathematical concept: shapes that have detail at all levels. (That is, you can keep zooming into the shape and always find new patterns.) Fractals have been around in nature for a few million years, but they have only been named and studied for less than 200 years, and less than 50 years if you don’t count the times when mathematicians were calling them “monsters”.

The simplest type of fractal is one where you take a shape, and then turn it into a number of smaller copies of itself. Probably the most famous example of this technique is the Sierpinski Gasket, created by taking one triangle and turning it into 3 smaller copies of itself:

Iterations of the Sierpinski Gasket

As you can see, at each succeeding iteration the new triangles are themselves replaced, and so on. The same fractal can be made by progressively taking triangles out of triangles, and both ways can easily be done using some triangular graph paper.

Nearly as well known as the Sierpinski Gasket is the Sierpinski Carpet. To make this fractal, you take a square, divide it into 9, remove its center square, and repeat. Naturally, the same can be done by adding: take 8 copies of the square and arrange them in a 3×3 grid, leaving the center empty:

Iterations of the Sierpinski carpet

As well as creating 2D self-similar fractals, it’s possible to create 1D self-similar fractals, which are lines bending into two dimensions. An example of this would be the Koch Curve, discovered in 1904 by Niels Fabian Helge von Koch. To make it, start with a line, divide it into 3, and replace the middle section with 2 line segments. This can be shown better by the “generator” for the Koch curve, which shows the before and after:

If you repeat these steps, you’ll generate the following picture:

This fractal was originally called a “monster” due to the fact that it is continuous everywhere (there are no holes), yet it is impossible to take a slope at any single point. It is, in effect, a shape made entirely of corners, which mathematicians at the time regarded as ridiculous. A similar fractal discovered earlier, in 1890, by the Italian mathematician Giuseppe Peano, the Peano curve, manages to fill space:

Mathematicians regarded this as even crazier than the later Koch curve: is it a line, or is it a square? It’s actually a schematic for converting one dimension into two! Even crazier than that was the Hilbert curve, discovered by David Hilbert in 1891:

It is made by taking the previous curve, adding 3 copies of it, and connecting the copies together.

Some of these 1-dimensional curves can be made 2-dimensional by taking multiple copies of them and putting them together. For example, there’s the Koch Snowflake, which comes from 3 copies of the Koch Curve:

Surprisingly enough, the total area of the Koch Snowflake is not some unwieldy infinite sum, but rather exactly 8/5 the area of the original triangle! Some 1-dimensional curves, such as the Peano curve, don’t need to be joined to seem 2-dimensional. However, the Peano curve creates what seems to be a square, and a square is certainly not a fractal. The Flowsnake, discovered by Bill Gosper and later renamed the “Peano-Gosper curve”, achieves the goal of having a bounding area that is both fractal and tileable!

3-Dimensional “Simple” Fractals

Up to this point we have been talking about 1- and 2-dimensional fractals, but in an effort not to make fractals too easy for computers, we now turn to 3-dimensional self-similar fractals. To start off, the Sierpinski Carpet can be turned into a 3-dimensional version simply by replacing the squares with cubes. This requires 20 times as many cubes for each step, but if you happen to have that much wood and glue around the house, you can easily make it by replacing each cube with 20 cubes, in the fashion shown below. This creates an object called the Menger Sponge, but you aren’t likely to see it on the sponge market anytime soon:

The sponge pictured above is a level 4 sponge, which would take 160,000 cubes to create. 8,000 is a much more manageable number, and so Dr. Jeannine Mosely decided to create a Level 3 Menger Sponge- out of business cards. There would be 6 cards per cube, which would then be linked, and finally paneled for a grand total of 66,048 business cards, which Dr. Mosely managed to create quite a while later:

As you can see, it’s large enough to crawl into, but as fun as it may seem, Dr. Mosely says that a Level 4 sponge made out of business cards would simply not be possible to make without structural support.

The Sierpinski Gasket also has a 3-dimensional analog: The Sierpinski Tetrahedron. To make it, you take the previous level, make 3 more copies of it, and join them by the corners.

George Hart has made a 3D model of it, and even has a good description of how to make it in his Math Monday post. With all this, you might think that there would be an interesting 3D version of the Koch Snowflake. However, when doing it like you normally would (tetrahedra on triangle) you get something quite unexpected…

Now join 4 of those together, and you get a cube. From a fractal.

All this time we have been referring to fractals as “1-dimensional”, “2D”, and “in 3 Dimensions” when in fact, as we have seen, they clearly aren’t. The Peano 1D fractal may as well be a square, the Sierpinski 2D fractal may as well be a series of lines, and the Hilbert curve is somewhere in between. To make deciding easier, Felix Hausdorff and Abram Samoilovitch Besicovitch invented the concept of fractal dimension, which can be not only an integer but also any number in between. To compute the fractal dimension, we use the formula D = Log(N)/Log(l), where N is the number of pieces that replace the original shape, and l is one over the scale factor. (The base of the logarithm does not matter.) For example, in the Sierpinski Gasket, N=3 and l=2, which means that D = Log(3)/Log(2), or about 1.584962501. This means that the Sierpinski Gasket is somewhat less than a square, and quite a bit more than a line. Similarly, for the “3D” Menger Sponge, the fractal dimension is Log(20)/Log(3) = 2.726833028. Finally, the fractal dimension of the Peano curve is Log(9)/Log(3) = exactly 2, which means that the Peano curve, at the ∞th iteration, may as well be a square. A large list of fractals and their fractal dimensions can be found at Wikipedia.
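In C#, the calculation is a one-liner, checked here against the fractals mentioned so far (the N and l values come from the constructions above):

using System;

class FractalDimension {
    // D = Log(N)/Log(l): N self-copies, each 1/l the size of the original
    static double D(int copies, double oneOverScale) =>
        Math.Log(copies) / Math.Log(oneOverScale);

    static void Main() {
        Console.WriteLine(D(3, 2));    // Sierpinski Gasket: ~1.584962501
        Console.WriteLine(D(4, 3));    // Koch Curve:        ~1.261859507
        Console.WriteLine(D(20, 3));   // Menger Sponge:     ~2.726833028
        Console.WriteLine(D(9, 3));    // Peano Curve:        2 exactly
    }
}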

Fractals Imitating Reality, or Vice Versa?

Self-similar fractals are not just an abstraction, though. Many plants, such as cauliflower and broccoli, show self-similar behavior. On cauliflower, it can be seen in the lobes of the surface; broccoli is a bit more chaotic, but still shows the same behavior:

Trees can also be simulated reasonably well using fractals:

A tree fractal

You can see the first branches by the dots

Escape-Time Fractals

Self-similar fractals get a bit boring after a while, so let’s explore another kind of fractal: escape-time fractals. Escape-time fractals take place on the complex plane, which is an extension of the real line. Basically, it’s a plane of numbers of the form x+i*y, where i is the “imaginary number” sqrt(-1). Complex numbers (a+i*b) can have all the operations of real numbers done to them, such as addition ((a+i*b)+(c+i*d) = (a+c)+i*(b+d)) and multiplication ((a+i*b)*(c+i*d) = (ac-bd)+i*(bc+ad)), as well as division, square roots, exponentiation, and every other function that can be applied to the real numbers. Now, consider a function repeatedly applied to an initial complex number until the point escapes a circle of radius r, and color the initial point according to the number of iterations it takes to escape from the circle. If the point never escapes, or hasn’t escaped after some maximum number of iterations, then color the point black.

Pierre Fatou and Gaston Julia first investigated iterations of this type in the 1910s, specifically iterations of the type z -> z^2+c, where c is the initial point, and z is the point that changes. However, they simply noted the chaos of the system: Julia studied variations where c was a single number and z was anything, and Fatou studied the function where c was a single number and the initial value of z was 0. It wasn’t until 1979 that Benoit Mandelbrot expanded on Julia’s work using computer analysis.

Mandelbrot decided to use a computer to plot Julia sets using the color-by-point method, and was stunned by the results. Julia sets create self-similar fractals, but are much more interesting as they use color and are much more varied, as the following video of a set of Julia sets shows:

The next logical step after studying these would be to let c be equal to the initial value of z, and so he created what is now known as the Mandelbrot Set.


The Mandelbrot Set is unlike any of the fractals that we’ve come across so far in that it has no exact self-similarity. Although there may be shapes farther in that look like the Mandelbrot Set, they aren’t quite the whole. There are many areas of the Mandelbrot Set, such as the antennae-like left regions explored in the video “Trip to E214” (a 10^214 zoom), so deep that the smallest particles postulated by physics would be nearly a googol times larger than the universe:

Or into the “Seahorse Valley” (I am not making these names up), where there are intricate spiral structures:

Or into a whole host of other places, by using fractal software such as the ones at the end of this post.

An interesting thing about the Mandelbrot Set is that, the farther you zoom into an area, the more it seems to look like the corresponding Julia set for that point, such as in this video, also into the Seahorse Valley area of the Mandelbrot Set:

What’s also interesting about the Mandelbrot Set is that no matter how far you zoom in, there always appears to be more intricate structures, which has led to the rise of groups specializing in computer zooms really far into the set. An example of this is the Trip to E214, and I believe the record for high definition is 10^275:

and for low definition there’s 10^999:

New structures can even pop up deep into the set, such as a long string of Xs:

(For those who don’t have the patience to watch the above video, the point can be seen at http://stardust4ever.deviantart.com/art/XX-Reactor-Core-Deep-Zoom-131573460)

By now you should have noticed a really interesting optical illusion: when you look away from a video, the space around it seems to be shrinking in! It’s also a sign that you shouldn’t watch all of the videos at once.

The fractal dimension of the Mandelbrot Set can also be computed, but it’s quite complicated. In fact, it was not until 1991 that Mitsuhiro Shishikura proved that the Hausdorff dimension of the boundary of the Mandelbrot Set equals… 2. The area of the Mandelbrot Set, however, is not so simple. Although nobody has figured out a way to calculate it precisely (the best formula I know of (i.e., the only one) is given in equations 7-10 on the Mathworld page), it is possible to get an estimate of it by counting pixels from -2-2i to 2+2i and finding what percent of them are in the set. The current best known value for the area is 1.50659177 ± 0.00000008, given by Robert Munafo in 2003 on his page “Area of the Mandelbrot Set“. Cyril Soler, a researcher at the Institut National de Recherche en Informatique et en Automatique, conjectures that the value is exactly sqrt(6*pi-1)-e, but whether he is right or wrong is not known. It is also possible to calculate exact mathematical formulae for some of the subregions of the Mandelbrot Set, such as the large cardioid-shaped blob, which can be expressed in parametric form as c = e^(i*t)/2 - e^(2*i*t)/4, for t from 0 to 2*pi.

The Mandelbrot Set is also connected, which means that if you take any point inside the set, you can get to any other point inside the set by following some series of pathways.

Lastly, for programmers: you can easily make your own Mandelbrot Set generator even if your programming language does not support complex numbers, by iterating z_realtemp -> z_real*z_real - z_imag*z_imag + c_real, z_imag -> 2*z_real*z_imag + c_imag, and z_real -> z_realtemp. A good pseudocode example is at http://en.wikipedia.org/wiki/Mandelbrot_set#Computer_drawings.
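For instance, here’s that real-arithmetic iteration wrapped in a tiny C# program that prints a crude ASCII Mandelbrot Set (the viewing window and characters are arbitrary choices of mine):

using System;

class AsciiMandelbrot {
    static bool InSet(double cRe, double cIm, int maxIter) {
        double zRe = 0, zIm = 0;
        for (int i = 0; i < maxIter; i++) {
            double zReTemp = zRe * zRe - zIm * zIm + cRe;  // the update from above
            zIm = 2 * zRe * zIm + cIm;
            zRe = zReTemp;
            if (zRe * zRe + zIm * zIm > 4) return false;   // escaped the radius-2 circle
        }
        return true;   // didn't escape; treat as inside the set
    }

    static void Main() {
        for (double y = -1.2; y <= 1.2; y += 0.1) {
            for (double x = -2.2; x <= 0.8; x += 0.05)
                Console.Write(InSet(x, y, 100) ? '#' : ' ');
            Console.WriteLine();
        }
    }
}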

Naturally, the Mandelbrot Set is not the only escape-time fractal there is. First of all, there are the Mandelbrot Generalizations, z->z^p+c: (0<p<20 for the video)

[http://www.youtube.com/watch?v=n-zmLPuQg6w]

There’s also the Phoenix Julia set, which relies not only on the previous point but on the point before that: z_(n+1) = z_n^2 + Re(c) + Im(c)*z_(n-1), where c is constant:

A good online program for exploring it on your own is at http://www.jamesh.id.au/fractals/mandel/Phoenix.html

There are an infinitude of others, so I won’t go through them all here, but a good gallery of escape-time fractal art is at http://fractalarts.com/ASF/galleries.html.

Escape-time fractals don’t have to have the escape condition be when the point goes outside a circle; in fractals such as the Newton fractal, based on the equation x^3-1 = 0, the condition is when the point gets close enough to a root of the equation. Basically, Newton’s method for finding the roots of an equation,

x -> x - f(x)/f'(x),

is iterated for f(x) = x^3-1, which creates the escape-time formula z -> z - (z^3-1)/(3z^2). Once the point gets close enough to one of the roots of x^3-1 (the three cube roots of unity: x = 1, x = -(-1)^(1/3), and x = (-1)^(2/3)), it is colored according to the root it arrived at and the amount of time it took. This creates a Julia-like result:
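In code, the x^3-1 iteration might look like the following C# sketch, using .NET’s built-in complex numbers (the tolerance and root-indexing scheme here are my own choices):

using System;
using System.Numerics;

class NewtonFractal {
    static readonly Complex[] Roots = {            // the three cube roots of unity
        Complex.One,
        Complex.FromPolarCoordinates(1, 2 * Math.PI / 3),
        Complex.FromPolarCoordinates(1, -2 * Math.PI / 3)
    };

    // Returns (index of the root reached, iterations taken), or (-1, maxIter).
    static (int root, int iters) Iterate(Complex z, int maxIter) {
        for (int i = 0; i < maxIter; i++) {
            z -= (z * z * z - 1) / (3 * z * z);    // z -> z - f(z)/f'(z)
            for (int r = 0; r < Roots.Length; r++)
                if ((z - Roots[r]).Magnitude < 1e-6)
                    return (r, i);                 // close enough: color by r and i
        }
        return (-1, maxIter);
    }

    static void Main() {
        var result = Iterate(new Complex(-0.5, 0.7), 50);
        Console.WriteLine($"root {result.root} after {result.iters} iterations");
    }
}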

Once again, this can be generalized to different functions and powers, such as f(x) = x^5-1:

It also turns out that most, if not all, self-similar fractals can be implemented as escape-time fractals. For example, the MilleniumFractal fractals page lists the formula for an escape-time version of the Sierpinski Gasket as:

For each point c:
z_0 = c

z_(n+1) = 2*z_n - i,  if Im(z_n) > 0.5
z_(n+1) = 2*z_n - 1,  if Re(z_n) > 0.5 and Im(z_n) ≤ 0.5
z_(n+1) = 2*z_n,      if Re(z_n) ≤ 0.5 and Im(z_n) ≤ 0.5

What happens is that for every point not exactly on the gasket, its deviation from the gasket is doubled each iteration, so it eventually gets “thrown out” of the set.
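Translated into a quick C# sketch (my adaptation; the character ramp and viewing window are arbitrary), coloring each point by how long it survives:

using System;
using System.Numerics;

class EscapeTimeSierpinski {
    // Applies the three-case rule above; returns the iteration at which the
    // point escapes, or maxIter if it survives (i.e. it's close to the gasket).
    static int EscapeTime(Complex z, int maxIter) {
        for (int i = 0; i < maxIter; i++) {
            if (z.Magnitude > 2) return i;                  // thrown out of the set
            if (z.Imaginary > 0.5) z = 2 * z - Complex.ImaginaryOne;
            else if (z.Real > 0.5) z = 2 * z - 1;
            else z = 2 * z;
        }
        return maxIter;
    }

    static void Main() {
        const string shade = " .:-=+*#%@";                  // darker = survived longer
        for (double y = 1.0; y >= 0; y -= 0.05) {
            for (double x = 0; x <= 1.0; x += 0.025)
                Console.Write(shade[EscapeTime(new Complex(x, y), shade.Length - 1)]);
            Console.WriteLine();
        }
    }
}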

Coloring Methods

As interesting as the fractals themselves are the methods that can be used to visualize them. For example, instead of coloring the Mandelbrot Set by the number of iterations it takes for a point to escape, we could color the points that escape according to iterations+c_real, and the inside according to the magnitude of c = sqrt(c_real^2+c_imag^2), which would produce the following effect:

The bands are because of the palette

Many types of visualization for fractals have been discovered, such as “incoloring” and “outcoloring” methods. As well as the above example, one such visualization method is Biomorphs, invented by Clifford Pickover, which makes the fractals into bacteria-like shapes. The method was originally based on an accidental bug made while programming a fractal program, which is perhaps why Mad Teddy’s code might be easier to use than my explanation!

Also, quite interesting results come from coloring the outside of the Mandelbrot Set a different color depending on whether the imaginary values of the points become negative after they escape:

Past that, there are more complicated colorings we can do, such as noticing that there is action inside the set as well as outside the set. Basically, if you iterate the Mandelbrot Set iteration on a single point over and over, the numbers will appear to converge to one number or another, showing the “orbit”. A good applet for seeing this is at http://math.hws.edu/xJava/MB/ (under Tools):

Now, suppose that at each point the point reaches, we check to see if it is within a certain area, and if so, immediately stop the iteration and color the initial point according to the place inside the trap it landed. For example, suppose we have a cross-shaped trap centered at 0+0i, colored with a gradient. Then we’d get pictures like this:

From fractaldomains.com

It turns out that if you take a stalk pattern like this and plot it over the entire Mandelbrot Set, stalks will appear inside the set as well as outside. These stalks are called Pickover stalks after Clifford Pickover, and often create nice spiraling patterns.

Other shapes for orbit traps can be made, with different results.  Circular orbit traps tend to show interesting detail in the Seahorse Valley regions:

Animations with orbit traps are especially interesting, because with animation you can not only zoom in, but you can also change the orbit trap as you’re zooming in!

A further explanation (and where a few of the images come from) is at http://www.fractaldomains.com/tutorial/orbit/index.html, and a large gallery of orbit traps is at http://softology.com.au/gallery/gallerymandelbrotorbittraps.htm!

Expanding on the idea of orbit traps, Melinda Green in 1993 proposed the following idea: take a 2-dimensional array of integers, and then perform the standard Mandelbrot Set iteration for each point, recording the places the point visits. If the point is inside the Mandelbrot Set, take the list of points it visits and add 1 to the cells of the array corresponding to those points. After you’ve computed all the points, you wind up with an array of pixels which, when scaled and displayed, creates what Lori Gardi calls the “Buddhabrot”:

Buddhabrots are much more computationally intensive than the standard Mandelbrot Set, because you need to sample more than 1 point per pixel and iterate thousands of times for each point to get good results; otherwise, “noise” will appear around the main area. The current record for the largest rendering of a Buddhabrot is held by Johann Korndoerfer at 20,000×25,000 pixels, resulting in a JPG file of 88 MB! He has an interesting write-up of his record at his blog, including the large image and a Firefox-friendly 5000×6000-pixel image. The image used 1,000,000 to 5,000,000 iterations per point, and took 16 hours to render using a custom Lisp program on an 8-core Xeon machine.

Back to Three Dimensions

At some point or another, people decided that fractals, as computationally intensive as they may be, were getting too easy for computers. On October 13, 2006, Marco Vernaglione set out the challenge of finding a 3D analog of the Mandelbrot Set, and on 8/11/2009, Daniel White of skytopia.com succeeded, discovering a 3-dimensional version of the Mandelbrot Set: the Mandelbulb.

By rephrasing the Mandelbrot set iteration in polar coordinates, White managed to generalize it to 3D spherical coordinates, getting the iteration:

r = sqrt(x*x + y*y + z*z )
theta = atan2(sqrt(x*x + y*y) , z)
phi = atan2(y,x)

newx = r^n * sin(theta*n) * cos(phi*n)
newy = r^n * sin(theta*n) * sin(phi*n)
newz = r^n * cos(theta*n)

where n is the power. White originally tried n=2, but with discouraging results. Paul Nylander, however, suggested setting n=8, which created the Mandelbulb as we know it.
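In code, the inside/outside test that a Mandelbulb renderer builds on might look like this Python sketch; the bailout radius of 2 and the iteration cap are conventions borrowed from the 2D Mandelbrot set, and the +c at each step is the usual Mandelbrot-style addition of the starting point:

from math import sqrt, atan2, sin, cos

def mandelbulb_escapes(cx, cy, cz, n=8, maxiter=100, bailout=2.0):
    # True if (cx, cy, cz) escapes under the power-n "triplex" iteration,
    # i.e. the point lies outside the Mandelbulb.
    x = y = z = 0.0
    for _ in range(maxiter):
        r = sqrt(x*x + y*y + z*z)
        if r > bailout:
            return True
        theta = atan2(sqrt(x*x + y*y), z)
        phi = atan2(y, x)
        rn = r ** n
        x = rn * sin(theta*n) * cos(phi*n) + cx
        y = rn * sin(theta*n) * sin(phi*n) + cy
        z = rn * cos(theta*n) + cz
    return False

Turning that test into pictures like the ones below takes a ray marcher and some lighting, which is where most of the real work lies.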

Using 3D graphics technology, we can zoom into the Mandelbulb and render scenes inside it, some of which can seem amazingly realistic, such as this one, which White calls the "Mandelbulb Spine":

More renders are at Daniel White’s website, which I strongly encourage you to visit!

If it is a fractal, there is an animation of it. The same rule holds for the Mandelbulb, and a number of amazing zooms have been made. For example, there’s Daniel White’s zoom into the top part:

As you can see, it’s fairly hard to navigate in 3D using a 2D mouse.

Other sections of the Mandelbulb resemble the broccoli mentioned earlier, as in the end of this video:

As strange as the Mandelbulb may seem, it has some areas strikingly similar to the Mandelbrot Set. For example, here is a part of the Mandelbulb:

Here’s a Mandelbrot spiral:

After the Mandelbulb was discovered, other 3-dimensional fractals suddenly started to appear, many from FractalForums.com. A stunning example is the Mandelbox, which is like a much more complex version of the Mandelbulb:

On the interior, it can seem cavernous, and with the right coloring it can even seem like an ancient palace:

At last count, there are 354 versions of the Mandelbulb, such as polyhedral IFS, TGlad’s variations… This blog post, long as it may be, is simply too short to talk about all of them.

To 4D, and Beyond!

I’ve skipped ahead a bit by talking about the Mandelbulb and the Mandelbox, because in reality a 4-dimensional fractal, the Quaternion Julia fractal, was discovered first. In 1843, while walking across a bridge in Dublin, Sir William Rowan Hamilton discovered a way to represent a "4-dimensional" complex number, made up of a real part and three imaginary parts, i, j, and k, governed by the formula:

i² = j² = k² = i j k = −1

It turns out that quaternions are really quite complex (no pun intended), in that they are not commutative under multiplication. That is, xy and yx are generally different: for example, ij = k, but ji = −k.
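Here's a tiny Python demonstration; qmul is the standard Hamilton product, written out in full:

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1):  ij = k
print(qmul(j, i))   # (0, 0, 0, -1): ji = -k, so order matters!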

This makes for some fairly involved formulae for squaring, multiplication, and other functions (see Paul Bourke's article on pretty much everything about quaternions). The formula for the Quaternion Julia fractal, though, is the same as for the normal Julia: z=z^2+c, where z is a quaternion and c is another constant quaternion. In this case, if we choose the right 3-dimensional slice of the 4D object to display on the screen, we get very strange self-similar fractals:

Videos of quaternion Julia fractals changing over time are even harder to comprehend, lending a bit of truth to A. Square's story in Flatland:

A 4-dimensional Mandelbrot set can also be made, but so far as I know nobody’s done a good rendering of it yet.

To see how it would be built, go back to the original Mandelbrot Set. Every point of the plane generates a Julia set: simply take that point's coordinate as the constant c in the Julia iteration. Now, take all of the Julia sets along one column of the Mandelbrot set and layer them on top of each other like pages in a stack, creating a 3-dimensional object. Do that with every column of the Mandelbrot Set, creating a whole family of 3-dimensional fractals, and lastly layer those on top of each other in 4-dimensional space: the result is the 4-dimensional version of the Mandelbrot Set (from http://www.superliminal.com/fractals/surfaces/index.html):

Best detail I could find. If you have a better one, feel free to post it in the comments!

Of course, you could also use quaternions and the formula z=z^2+c to compute another 4D Mandelbrot, but it turns out that all it does is spin the set around:

From Paul Nylander

Now, if it turns out that 4 dimensions isn’t enough, we can always generalize fractals to higher and higher dimensions. We can simulate 5-dimensional cauliflower, 7-dimensional Koch snowflakes, or we can even generalize the Quaternion Julia, Mandelbrot, and Mandelbulb  formulas to 8 dimensions or more.

But in the end, it all comes down to how fast we can draw. Whether by hand or by computer, fractals are still amazing.

Pi

Pi is one of the greatest numbers of all time: it has been known for thousands of years, and over that time it has gained quite a bit of popularity, in the form of celebrations such as Pi Day and others, all from a number that comes from the simplest smooth object: a circle. Suppose you have a circle 1 inch across, and you take a measuring tape and measure around its edge. You'll find that it comes out to 3 inches and a bit, and if you widen the circle to a foot, you might get 3.14 if you look carefully enough. On the more extreme scale, you could go out to a crop circle, measure it, and perhaps get 3.1415926. Now, suppose you have a perfect circle and an infinitely precise ruler (good for lengths shorter than an atom), and do the same thing once again. You would get the number 3.141592653589793238462643383…, which is written as the Greek letter pi.

One of the first mentions of pi is in the Bible, where 1 Kings 7:23-26 states:

And he [Hiram] made a molten sea, ten cubits from the one rim to the other it was round all about, and…a line of thirty cubits did compass it round about….And it was an hand breadth thick….”
This gives pi=3: a workable approximation, but a terrible one nonetheless. A much better approximation was made by Archimedes, who computed pi by drawing polygons with large numbers of sides inside and outside a circle, trapping the circle's area (pi*r^2) between two bounds, like this:

5, 6, and 8 edges

Using this method, he drew two 96-sided polygons and got 3 10/71 < pi < 3 1/7, an approximation accurate to 2 decimal places: 3.14… Ptolemy later updated this to 3.141…, and Tsu Ch'ung Chi updated that to 355/113, correct to 6 places. Later, in the 1600s, Gottfried Leibniz and James Gregory found an infinite sum for pi: pi=4*(1-1/3+1/5-1/7+…). The proof of this requires calculus, but takes up less than a page. The Leibniz/Gregory formula is rarely used because it needs exponentially many terms to produce more digits (roughly 10^d terms for d correct digits), which would bog down even the fastest computers. A slightly better formula, but a much more amazing one, was found by Francois Viete back in 1593, using only the number 2!

A quite beautiful formula for pi was found by John Wallis, in the form of the infinite product pi/2 = (2/1)*(2/3)*(4/3)*(4/5)*(6/5)*(6/7)*(8/7)*(8/9)*…

Notice how the numerators and the denominators appear to “carry over” to the next fraction!
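Both Viete's and Wallis's formulas make good two-minute programs; this Python sketch shows how quickly Viete's nested square roots converge and how slowly Wallis's product does:

from math import sqrt

# Viete: pi = 2 * (2/sqrt(2)) * (2/sqrt(2+sqrt(2))) * ...
prod, s = 1.0, 0.0
for _ in range(25):
    s = sqrt(2 + s)
    prod *= 2 / s
print(2 * prod)   # 3.14159265358979... (machine precision in ~25 factors)

# Wallis: pi/2 = (2*2)/(1*3) * (4*4)/(3*5) * (6*6)/(5*7) * ...
w = 1.0
for k in range(1, 100001):
    w *= (2*k) * (2*k) / ((2*k - 1) * (2*k + 1))
print(2 * w)      # 3.14158... (only about 5 digits after 100,000 factors)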

Half a century later, a much better formula was found by John Machin in 1706:

Pi/4=4*Arccot(5)-Arccot(239)=4*Arctan(1/5)-Arctan(1/239)

This formula can be computed rapidly using the series Arccot(x)=1/x-1/(3x^3)+1/(5x^5)-1/(7x^7)+… Formulas of this type, combinations of arctans of fractions, are now called "Machin-like formulae". The simplest of these is Pi/4=Arctan(1), followed by

and

The arctans with bigger denominators produce more digits per series term, so the efficiency of a Machin-like formula is limited by the arctan with the smallest denominator. For example, the 2002 record for decimal places of pi was set by Yasumasa Kanada on a supercomputer using Kikuko Takano's

and F. C. W. Störmer‘s

Even more complicated Machin-like formulae exist, such as Hwang Chien-Lih’s 2002

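Machin-like formulae are easy to turn into digit grinders using the arccot series above and plain integer ("fixed point") arithmetic. Here is a sketch of the original Machin formula in Python; the ten guard digits are an arbitrary safety margin:

def arccot(x, unity):
    # arccot(x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., scaled up by 'unity'
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def machin_pi_digits(digits):
    unity = 10 ** (digits + 10)    # work with 10 extra guard digits
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi // 10 ** 10          # drop the guard digits

print(machin_pi_digits(50))   # 31415926535897932384626433832795...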
However, in the computer age, neither the length nor the elegance of a formula counts: what matters is the rate at which it converges. Srinivasa Ramanujan, Indian mathematician and nemesis of Bill Gosper ("Every time I find an identity, he's found it before me!"), created a number of formulae for pi, including the following:

where

denotes f(a)+f(a+1)+f(a+2)+…+f(b). Note not only the factorials (n!=1*2*3*4*5*…*n) but also the large terms both on the outside and on the inside, especially the factorial raised to the 4th power and the 396^(4k). These make the sum converge exponentially quickly, adding roughly 8 correct digits with every term, as opposed to exponentially slowly as in the Gregory-Leibniz formula; that makes it one of the fastest methods known for computing pi. An even faster series, which has been used to break the pi record many times, is the formula found by the Chudnovsky brothers in 1987:

This rather monstrous formula gives about 14 digits per term, and was used most recently by Shigeru Kondo and Alexander Yee to calculate 5 trillion digits of pi, billions of times more than enough to estimate the area of your wading pool to the atom.
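For modest numbers of digits, a series like the Chudnovskys' can be summed directly with Python's decimal module. This is only a sketch using the published constants of the series; real record attempts use binary splitting and far more careful precision management:

from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(digits):
    getcontext().prec = digits + 10
    s = Decimal(0)
    for k in range(digits // 14 + 2):    # each term adds ~14 digits
        num = Decimal(factorial(6*k)) * (13591409 + 545140134*k)
        den = (Decimal(factorial(3*k)) * Decimal(factorial(k))**3
               * Decimal(640320)**(3*k))
        s += (-1)**k * num / den
    return Decimal(426880) * Decimal(10005).sqrt() / s

print(chudnovsky_pi(100))   # 3.14159265358979323846...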

There are even iterative algorithms that multiply the number of correct digits with every pass, with the drawback that each pass must be carried out at full precision, so every iteration is costlier than the last. One of these, the Brent-Salamin algorithm, uses only simple arithmetic and square roots and would take about 35 iterations to break the record. First, start with a_0=1, b_0=1/sqrt(2), t_0=1/4, and p_0=1. Then iterate: a_(n+1)=(a_n+b_n)/2, b_(n+1)=sqrt(a_n*b_n), t_(n+1)=t_n-p_n*(a_n-a_(n+1))^2, and p_(n+1)=2*p_n. When you've iterated enough, the estimate for pi is given by (a_n+b_n)^2/(4*t_n). The best of these iterative formulas that I know of is Borwein and Borwein's, which converges like 9^n (effectively, it requires about 11 iterations to beat the current record):

Start with

and then iterate

Then the estimate for pi is given by 1/a_n .
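Here's the Brent-Salamin iteration from above, sketched with Python's decimal module; the ten extra digits of working precision are a crude guard, and a serious implementation would manage precision much more carefully:

from decimal import Decimal, getcontext

def brent_salamin(digits, iterations=25):
    getcontext().prec = digits + 10
    a, b = Decimal(1), 1 / Decimal(2).sqrt()
    t, p = Decimal("0.25"), Decimal(1)
    for _ in range(iterations):
        a, old_a = (a + b) / 2, a    # a_(n+1) = (a_n + b_n)/2
        b = (old_a * b).sqrt()       # b_(n+1) = sqrt(a_n * b_n)
        t -= p * (old_a - a) ** 2    # t_(n+1) = t_n - p_n*(a_n - a_(n+1))^2
        p *= 2                       # p_(n+1) = 2*p_n
    return (a + b) ** 2 / (4 * t)

print(brent_salamin(100))   # 3.14159265358979323846...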

A fairly significant formula, found in 1995 by Simon Plouffe, is the Bailey-Borwein-Plouffe (BBP) formula, which can be used to compute any digit of the hexadecimal representation of pi without needing to know the previous digits (and hexadecimal digits convert directly into binary bits). Written as a plain sum, it is:

pi = sum from k=0 to infinity of (1/16^k) * ( 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) )
This formula was used by PiHex, a now-completed distributed computing project, to determine that the one-quadrillionth binary bit of pi is 0. Yahoo later used the same method to find that the two-quadrillionth bit is also 0.
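The digit-extraction trick works by multiplying the series by 16^(n-1) and keeping only fractional parts, which turns the huge powers of 16 into cheap modular exponentiations. Here's a rough Python sketch, reliable only for modest n since it leans on floating point for the fractional parts:

def bbp_term(j, n):
    # fractional part of sum over k of 16^(n-1-k) / (8k+j)
    d, frac = n - 1, 0.0
    for k in range(d + 1):             # non-negative powers of 16
        frac += pow(16, d - k, 8*k + j) / (8*k + j)
        frac -= int(frac)              # keep only the fractional part
    for k in range(d + 1, d + 15):     # a few rapidly shrinking tail terms
        frac += 16.0 ** (d - k) / (8*k + j)
    return frac - int(frac)

def pi_hex_digit(n):
    # n-th hexadecimal digit of pi after the point (n = 1, 2, 3, ...)
    x = (4*bbp_term(1, n) - 2*bbp_term(4, n)
         - bbp_term(5, n) - bbp_term(6, n)) % 1.0
    return "0123456789ABCDEF"[int(16 * x)]

print("".join(pi_hex_digit(n) for n in range(1, 13)))   # 243F6A8885A3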

Of course, the reason for finding decimal digits of pi is not only to show how great your new supercomputer is, but also to attempt to find a pattern. Finding one in base 10 is probably unlikely, and there are always an infinite number of other bases to test, including non-integer bases (e.g. 7/5, sqrt(2), 6*e/19…), which makes an exhaustive search practically impossible. Even if base 10 or base 11 or base 16 did hold a pattern, we might have to look any number of places to find it, as in Carl Sagan's novel Contact, where (spoiler) after a few trillion digits of pi in base 11, one of the main characters finds a field of 0s and 1s whose length is the product of two primes. Plotting the 0s and 1s as black and white dots on her computer screen, she finds: a picture of a circle! This is actually possible (though very unlikely): almost all numbers are "normal" (a theorem which can be found in Hardy and Wright), meaning any sequence of digits you can think of appears somewhere in their expansions, and pi is conjectured, but not proven, to be one of them. In fact, there's a website (http://www.dr-mikes-maths.com/pisearch.html) which will search for names in pi expressed in base 26! (end spoiler)

However, there's a way to express pi so that it doesn't depend on the base: continued fractions! Continued fractions are "infinite fractions" of the form a0 + 1/(a1 + 1/(a2 + 1/(a3 + 1/(a4 + …))))

and are usually abbreviated as [a0,a1,a2,a3,a4,a5,…] or as [a0;a1,a2,a3,a4,a5,…], with all the a_n positive integers. Many numbers, such as integers and fractions, have finite continued fractions: for example, 1=[1], and 355/113=[3,7,15,1]. (Of course, if 355/113 were expressed in decimal, you'd have to use an infinite number of digits to pin down the exact fraction.) A significant advantage that continued fractions have over decimal notation is that irrational numbers can often be expressed as repeating continued fractions. For example,

sqrt(2)=1.4142135623730950488016887242097… but in continued fraction notation

sqrt(2)=[1;2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,…]

Much simpler. In fact, you can go to your friends, claim you know more digits of the square root of 2 than they do, and simply expand the continued fraction far enough to beat them no matter how many decimal digits they know! Possibly the most elegant of these repeating continued fractions is the one for the Golden Ratio, (1+sqrt(5))/2, which is just [1;1,1,1,1,…]:

Also, some transcendental numbers can be expressed as continued fractions with a simple pattern. For example, Euler's Number, e, is equal to lim(n->infinity) (1+1/n)^n and is used throughout exponentiation and calculus. In continued fraction form, it is equal to [2;1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,…]! Decimal is not as elegant, e being about 2.71828182845904523536…

However, despite any hope, pi is not as pretty in continued fraction form, though the expansion is at least independent of base: [3;7,15,1,292 (ack!),1,1,1,2,1,3,1,14,2,1,1,2,…]. There have been only a few attempts at the continued fraction of pi. Tsu Ch'ung Chi's 355/113=[3,7,15,1] was the first nontrivial one, and Euclid's GCD algorithm, run on a sufficiently accurate fraction for pi, produces the continued fraction's terms as its quotients (even though the algorithm itself just throws them away). The first major record that I know of was set by Bill Gosper on August 19, 1977, when he computed 204,103 terms using his own algorithm in Macsyma, an early computer algebra system. He beat his own record in 1985 with a whopping 17,001,303 terms, again using his algorithm. Later, in 1999, Hans Havermann beat Gosper's record by using Mathematica to compute 20,000,000 terms, and in March 2002 he raised that to 180,000,000 terms, the previous record.
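The term-extraction loop itself is tiny: take the integer part, flip the remainder over, and repeat, which is just Euclid's algorithm wearing a different hat. Here's a sketch using Python's Fraction; a serious computation needs an enormously accurate input fraction, and Gosper's algorithm cleverly avoids ever forming one:

from fractions import Fraction

def cf_terms(x):
    # Continued fraction of a positive rational:
    # a = floor(x), then recurse on 1/(x - a)
    terms = []
    while True:
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
        if x == 0:
            return terms
        x = 1 / x

print(cf_terms(Fraction(355, 113)))   # [3, 7, 16], the same as [3,7,15,1]
print(cf_terms(Fraction(314159265358979323846, 10**20)))  # [3, 7, 15, 1, 292, ...]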

Now might be a good time to tell why I haven’t been blogging recently.

Over the past few months, I have been working on a C# program, PiCF (not released yet; current alpha source code here), which can calculate the continued fraction of any number, not just pi, using Gosper's algorithm. On October 17th, I calculated approximately 458,000,000 terms of the continued fraction of pi in about 3 hours on a 64-bit machine running Windows on a Core 2 Duo @ 3.00 GHz. This was later verified using Mathematica (which took up much more memory than the original calculation did!). The program has a command-line interface (with menus!) and uses Emil Stefanov's wrapper of GNU MP for the BigInteger multiplications. The maximum term is still 878,783,625, originally found by Bill Gosper during the 17M calculation. Other stats: the minimum term is one (naturally), the terms take up a 1.4 GB file (download here if you dare), and I was very nervous.

Pi has, over the years, gained a huge following: March 14th is Pi Day (from the decimal expansion 3.14), on which MIT ships their acceptance and rejection letters and on which people make pie, and 1:59:26 of that day is Pi Second; July 22nd is Pi Approximation Day (from 22/7); and Kate Bush even wrote a song about pi. Many jokes about pi have surfaced on the Internet, including this one:


This may be because, over thousands of years, pi has drifted so far from its original purpose: measuring circles. To start out, almost every round object has a volume and surface area that involve pi. The volume of a cone is (1/3)*pi*r^2*h, where r is the radius of the base and h is the height. The volume of a torus is (pi^2/4)*(b+a)*(b-a)^2, where a is the inner radius and b is the outer radius.

What about the volume of a crescent? Well, that’s quite a different story…

Arc formula

From Murderous Maths: The Perfect Sausage by Kjartan Poskitt