Archive for the ‘ Uncategorized ’ Category

The Minsky Circle Algorithm

Our tale starts with the PDP-1, a \$120,000 room-sized system which, when released in 1960, was state-of-the-art computer technology: 4096 18-bit words of memory, a 200 kHz clock speed, and only a Macro assembler and a DDT debugger for software. What it did have, though, was a paper tape reader and a Type 30 precision CRT display. As such, many of the first programs for the PDP-1 were "display hacks": programs of only a few lines of code which, when run, create intricate patterns on the screen. A prime example is the Munching Squares program by Jackson Wright, described in the MIT A.I. Lab's HAKMEM:

```
foo,    lat
        adm bar
        rcl 9s
        xor bar
        dpy
        jmp foo
bar,    .
```

This creates a sequence of images: in frame n, every point on the screen lights up where the bitwise XOR of its x and y coordinates is less than n. If that sounds a bit complicated, it's easier to understand when you see the animation:

Gif from Mathworld.

(A very good emulator of the original Munching Squares program is located at http://www.dpbsmith.com/munch.html)
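For readers who would rather experiment in a modern language, here is a minimal Python sketch of the same rule (the function name and grid size are mine, not HAKMEM's):

```python
def munching_frame(t, size=64):
    """Return the set of lit (x, y) points for frame t of Munching Squares:
    a point lights up when x XOR y is less than the frame number."""
    return {(x, y) for x in range(size) for y in range(size) if (x ^ y) < t}

# Frame 1 lights only the diagonal, since x ^ y == 0 exactly when x == y.
assert munching_frame(1) == {(x, x) for x in range(64)}
```

Stepping t upward and plotting each frame reproduces the "munching" effect.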

Marvin Minsky, then a member of the Research Laboratory of Electronics (and later head of the MIT AI Lab), wanted to write a program that would draw curves on a square grid in as few lines as possible. While attempting to get the system to draw a spiral, he stumbled across a most curious thing: a two-statement program that drew stable circles! Decoded from machine language, it reads:

```
loop: y = y - 1/16 * x
      x = x + 1/16 * y
```

Note the lack of temporary variables; in fact, if the new x is computed from a saved copy of the old y, the points spiral out of the circle! The program does not draw a perfect circle, however, but rather a very round ellipse, which becomes rounder as 1/16 is replaced by values closer and closer to 0.

From the Computer History Museum
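The two-line loop is easy to try in a modern language; here is a rough Python sketch (the original was PDP-1 machine code, and the function name is mine):

```python
def minsky_points(x, y, steps, d=1/16):
    """Iterate Minsky's two-line recurrence, collecting the visited points.
    The updated y is reused when computing the new x -- exactly the
    'missing temporary variable' that keeps the orbit from spiraling."""
    pts = []
    for _ in range(steps):
        y -= d * x
        x += d * y  # note: uses the *new* y
        pts.append((x, y))
    return pts

# Starting at (100, 0), the orbit stays near radius 100 instead of
# spiraling outward: a very round ellipse.
radii = [(px * px + py * py) ** 0.5
         for px, py in minsky_points(100.0, 0.0, 1000)]
assert all(90 < r < 110 for r in radii)
```

Replacing `x += d * y` with a version that uses the old y makes the radii grow without bound, which is easy to verify by rerunning the loop.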

You can generalize the Minsky circle algorithm by replacing the two occurrences of 1/16 by δ and ε respectively, to get circles that take longer or shorter to generate:

```
x = x - δ * y
y = y + ε * x
```

It turns out that this recurrence can even be solved in closed form! Using a substitution and a "guess-and-by-gosh" method, the authors of the book Minskys and Trinskys (from which most of the content of this article was drawn; as of now privately published by the authors, Corey Ziegler Hunts, Julian Ziegler Hunts, R.W. Gosper and Jack Holloway) were able to prove that, for the nth step:

Xn = X0 cos(nω) + (X0/2 - Y0/ε) sin(nω)/d

Yn = Y0 cos(nω) + (X0/δ - Y0/2) sin(nω)/d

where d = sqrt(1/(δε) - 1/4) and ω = 2 arcsin(sqrt(δε)/2), which is in fact an ellipse! Note that if δε > 4 or δε < 0, ω becomes imaginary and the ellipse becomes a hyperbola. But as with anything so seemingly simple, there is a catch that makes the subject much more complex. In this case, it was the nearly imperceptible (but strictly periodic) wobble of the ellipse.

“The Worst Circles I Ever Drew”

In machine arithmetic, integers, no matter what happens to them, are always integers. While this may seem a triviality, machines with only integer arithmetic effectively apply the floor function to every operation they do. For example, if you divide 1 by 2, in many programming languages the answer will be 0, because the bit routines for dividing integers always return integers: they work to no more precision than they are given. To understand what's going on, we can pretend that the computer works out the exact number, then takes the largest integer less than or equal to the answer. No problem, right? So inserting floors into the Minsky recursion, like this:

```
x = x - floor(δ * y)
y = y + floor(ε * x)
```

shouldn't hurt the overall mechanics, right? Wrong. When x or y is small enough, the floor rounds the result of the multiplication all the way down to 0, losing precision. While this may not seem like a big problem, when we chart the plots for strange values of delta or epsilon we get something that is definitely not an ellipse. Corey Ziegler Hunts and Julian Ziegler Hunts discovered this behavior (found earlier by Steve Witham) and began to dig deeper. It turns out that if you calculate the periods (how long it takes before the points return to X0, Y0) of the Minsky recurrence starting at different X0, Y0, δ and ε, the periods can vary wildly. For example, when δ and ε both equal 1, all X and Y (other than 0,0) loop with period 6. On the other hand, the X0, Y0, δ and ε corresponding to {-19/24, -1015/121, 7381/5040, 5040/7381} has a period of no less than 2,759,393! (Even with the floor function, the algorithm is exactly reversible, so it must return to (X0,Y0) or never repeat any point, unless the machine integers overflow.)

This was actually the Zieglers' first "rugplot"
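The floored recurrence and its periods are easy to explore; here is a minimal Python sketch (the function name and step limit are mine):

```python
from math import floor
from fractions import Fraction

def minsky_period(x0, y0, d, e, limit=1_000_000):
    """Follow the floored Minsky recurrence until it returns to its start,
    or give up after `limit` steps. The map is exactly reversible, so an
    orbit either cycles back to (x0, y0) or never repeats any point."""
    x, y = x0, y0
    for n in range(1, limit + 1):
        x = x - floor(d * y)
        y = y + floor(e * x)
        if (x, y) == (x0, y0):
            return n
    return None

# With d = e = 1, every start other than (0,0) loops with period 6:
assert minsky_period(1, 0, 1, 1) == 6
assert minsky_period(2, 3, 1, 1) == 6

# The precision loss in action: with d = e = 1/16, the point (1, 0) is
# frozen solid -- both floors round to 0, making it a period-1 fixed point.
assert minsky_period(1, 0, Fraction(1, 16), Fraction(1, 16)) == 1
```

Using `Fraction` for δ and ε keeps the arithmetic exact, which matters when hunting for the enormous periods mentioned above.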

Another one, x=1, y=0, δ=9/17 and ε=15/2, has been followed for no less than 104 trillion steps in both directions (the Minsky recurrence can be modified to run backward) without looping! A logical question to ask: what do these periods look like when plotted? Since there are 4 variables (x, y, δ, and ε), there are 6 different ways to plot the periods: x and y, x and δ, x and ε, y and δ, y and ε, and lastly δ and ε. The Zieglers started with the x-y plots. Choose a constant δ and ε, such as 1 and 0.467911, plot the periods mapped to colors for integer x and y, and you get what may as well be a Persian rug! Okay, but maybe that's just a special case of the Minsky recurrence, and if we try something like δ=1 and ε=0.91939, we'll just see a blob? Wrong.

Blob it may be, but an intricate one nonetheless.

You can try other Minsky X-Y plots at http://openprocessing.org/visuals/?visualID=19109 , or click on the picture below:

Click to go to the applet (openprocessing.org)

Coloring methods also affect Minsky plots. For example, the Minsky plot δ=100, ε=1/99, when rendered with a simple linear coloring from Dr.Fibdork's version (programmers, see comments), looks like this: However, the authors of the book use a different method to bring out some of the finer details, which shows this: At some point, though, we have to stop messing with numbers and get down to math. If we return to the original Minsky solutions:

Xn = X0 cos(nω) + (X0/2 - Y0/ε) sin(nω)/d

Yn = Y0 cos(nω) + (X0/δ - Y0/2) sin(nω)/d

where d = sqrt(1/(δε) - 1/4) and ω = 2 arcsin(sqrt(δε)/2), we notice that this ellipse returns to its original point when nω mod 2π = 0: when n=0, nω=0, and the cos and sin functions have period 2π. Now, let us define the theoretical period (denoted P) of a Minsky recurrence as the first n greater than 0 for which nω mod 2π = 0, which is the same as saying "the n for which nω = 2π". (Note that n can be a non-integer.) P can be trivially solved for, and expanding ω we get P = π/arcsin(sqrt(δε)/2). We can use this relation in two directions: the one just mentioned, or, solving for δε, as δε = 4 sin(π/P)^2, which recovers the product of δ and ε from the period. Earlier on, readers may have looked at the figures for δ and ε and seen something suspicious about them. As it turns out, the first "rugplot" had a period of 9, and the second had a period of 2π. In general, when P>4, we can generate very symmetrical plots by setting δ to 1 and ε to 4 sin(π/P)^2. For P<5, this method generates very monotonous images (both delta and epsilon are integers), but when P=5 we get what Holloway terms "Rastermen": Koch-snowflake-like fractals with five arms repeated infinitely, with sizes according to the Fibonacci numbers!

A larger one is available at http://gosper.org/p5d1.png
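The relation between P and δε is easy to check numerically; a small sketch (the function names are mine, not the book's):

```python
from math import asin, sin, sqrt, pi

def theoretical_period(d, e):
    """P = pi / arcsin(sqrt(d*e)/2), valid for 0 < d*e < 4."""
    return pi / asin(sqrt(d * e) / 2)

def eps_for_period(P, d=1):
    """Invert the relation: pick e so that d*e = 4*sin(pi/P)**2."""
    return 4 * sin(pi / P) ** 2 / d

# The first "rugplot" used e = 0.467911, which is exactly the period-9 value:
assert abs(eps_for_period(9) - 0.467911) < 1e-6
# The two functions really are inverses of each other:
assert abs(theoretical_period(1, eps_for_period(7)) - 7) < 1e-9
```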

It's even possible to make animations of Minsky plots, such as along the line δ=3/2+t/200, P 40/139. It turns out that Minsky space is rather epileptic:

Different configurations arise with different functions of time for δ and ε, such as when δ=t and ε=1/(t-1), and δε approaches 1:

The horizontalness of the boundaries at the end is due to the fact that the slope of the major axes of the ellipses in a Minsky x-y plot is approximately ε/δ: in the original Minsky solutions, the amplitude of Xn (roughly the "run" in rise/run) is larger when ε is larger, and the amplitude of Yn (the "rise") reacts the same way to δ.

Minskyspace

At this point you may be asking what the other 5 arrangements of Minsky plots look like. I don't have space in this post to cover them all, but I can describe one of the most interesting, in a sense the opposite of the x-y plot: the δ-ε plot. Recall from earlier that the simple Minsky solutions become imaginary if δε>4. It turns out that, for the floored recurrence, this is generally not the case. Suppose we use a simple Minsky period-plotting method to draw the periods with x=1 and y=0, where -4<δ<4 and -4<ε<4:

Rather large (4001x4001), so viewing in hi-res is recommended. 1px (in reality infinitesimally thin) lines are due to scan granularity.

The gray areas are where there was no Minsky loop with a period less than 5000, and the white areas are period 1. As you can see, although the outline of the colored region resembles the function y=4/x, in many places small regions of periodicity extend out into the "dangerous" area, such as in the upper right corner. (I should note here that it has been proven that these regions of constant period are always rectangles.) Furthermore, the authors of Minskys and Trinskys have discovered that some areas inside the two "islands" are nonperiodic, such as x=1, y=1/2, δ=9 and ε=1/3. (Plot) Even more, any Minsky point of the form x=1, y=1/2, δ=3^(n+1), ε=3^(-n), where n is an integer greater than 0, is unbounded. Not much is known about these δ-ε plots: we can prove that the shapes are always rectangles, and we can find a few general periods, but otherwise it's a bit like trying to find the value of an arbitrary point on the Mandelbrot set. Which brings me to my next point: unlike the Minsky x-y plots, you can take a δ-ε plot and zoom into it! The video below shows a sequence of period-5 x-y plots with δ ranging from 99/100 to close to 1:

Generated by Julian Ziegler Hunts.

Some of the areas can seem fractal, and it's certainly possible to find patterns that seem to converge to a series of infinitely thin rectangles, such as the top-right edge of the x=1, y=1 δ-ε space (δ*ε≤4): Other areas, such as this one near x=0, y=8, δ=173/80, ε=137/96, display more localized behavior: However, in many of the places where the differences in size between rectangles go to 0, the periods appear to approach ∞, such as when you approach x=1, y=0, δ=1, ε=1 from the upper-left-hand side. These places, called "accumulation points", seem to be ubiquitous throughout any δ-ε plot. As a great example, take the neighborhood of x0=0, y0=8, ε=10/7, δ=37/17 (zoomed in and recolored from the image above), which Bill Gosper has termed the "Neighborhood of a Monster" because of the "monsters" (points with gigantic periods) that occur there. In this case, even though the center appears at first to be devoid of accumulation points, there are some particularly nasty periods, right in between two large blocks!

Double-click to see in high resolution

There's tons more to study, of course, but in reality all of the pictures we see of Minsky plots, whether x-y, δ-ε, or some combination of the two, are slices of a 4-dimensional, jagged, infinite object called Minskyspace. The 4 dimensions come from the four parameters, and while slices of this object have not yet been rendered in more than 2 dimensions, we can tell a few things about it:

• It goes on forever in the x, y, δ, and ε directions. However, in the δ and ε directions it takes on a rather hyperbolic shape, due to the original, non-rounded Minsky circle algorithm.
• Nearly half of it seems to be missing, where δ*ε is less than 0.
• Certain "congruence regions" (areas in Minskyspace where the periods are the same and which produce identical orbits modulo translation) are shaped like polyhedra instead of infinitely thin slices when sliced through with a 3-dimensional plane! (Some of the faces are curved, though.)
• At irrational δ*ε there can be infinitely fine structure around that area, but a conjecture is that no irrational δ*ε has infinite period.
• All accumulation points, infinitely near their centers, have unlimited period.
• Conjecture: all congruence regions are Cartesian products of two polygons with at most 6 hyperbolically curved sides.

That's a bit of what we know for sure. There are tons of conjectures, and I've hosted a version of the Minsky "to do" list, "Problems posed and solved", at neilbickford.com/Minsky/problems.txt.

In summary: Although some algorithms may seem simple, there can be great areas of mathematics behind them. We still don’t know all about the shapes that the Minsky circle algorithm creates, and there are still many problems waiting to be uncovered and solved. Even if you don’t want to tackle recursive formulae, just discovering areas in Minskyspace is a fun and entertaining pastime.

PiCF version 1.0.0 is now out!

All major bugs have been fixed, the integrated help system is now functional, and it even comes with a tutorial! It's described in more detail in my pi post.

Currently only for x64 Windows: download here (0.416 MB)

Pi

Pi is one of the greatest numbers of all time: it has been known for thousands of years, and over that time it has gained quite a bit of popularity in the form of celebrations such as Pi Day, all from a number that comes from the simplest smooth object: a circle. Suppose you have a circle 1 inch across, and you take a measuring tape and measure around its edge. You'll find that it comes out to 3 inches and a bit, and if you scale the inch up to a foot, you might measure 3.14 if you look carefully enough. On the more extreme scale, you could go out to a crop circle, measure it, and perhaps get 3.1415926. Now, suppose you have a perfect circle and an infinitely precise ruler (for lengths shorter than an atom), and do the same thing once again. You would get the number 3.141592653589793238462643383…, which is expressed as the Greek symbol π.

One of the first mentions of pi is in the Bible, where 1 Kings 7:23-26 states:

And he [Hiram] made a molten sea, ten cubits from the one rim to the other it was round all about, and…a line of thirty cubits did compass it round about….And it was an hand breadth thick….”
This states that pi=3, an approximation to be sure, but a terrible one nonetheless. A much better approximation was made by Archimedes, who developed a method for computing pi using polygons with large numbers of sides, bounding the circumference of a circle between the perimeters of inscribed and circumscribed polygons, like this:

5,6, and 8 edges

Using this method with two 96-sided polygons, he got 3 10/71 < pi < 3 1/7, an approximation accurate to 2 decimal places: 3.14… Ptolemy later refined this to 3.141…, and Tsu Ch'ung Chi improved it to 355/113, correct to 6 places. Later, in the 1600s, Gottfried Leibniz and James Gregory found an infinite sum for pi: pi = 4*(1 - 1/3 + 1/5 - 1/7 + …). The proof requires calculus, but takes up less than a page. The Leibniz-Gregory formula is rarely used, because it takes exponentially many terms to produce additional digits, which would slow down even the fastest computers. A slightly better, but much more amazing, formula was found by Francois Viete in 1593, using only the number 2!
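In modern notation, Archimedes' doubling procedure amounts to a pair of mean iterations on polygon perimeters. Here is a sketch (the harmonic/geometric-mean form and hexagon starting values are the standard modern reconstruction, not Archimedes' own hand-worked square roots):

```python
from math import sqrt

# Perimeters of polygons circumscribed (a) and inscribed (b) about a
# circle of diameter 1, starting from regular hexagons.
a, b, sides = 2 * sqrt(3), 3.0, 6
while sides < 96:
    a = 2 * a * b / (a + b)  # circumscribed 2n-gon: harmonic mean
    b = sqrt(a * b)          # inscribed 2n-gon: geometric mean with new a
    sides *= 2

# At 96 sides we recover Archimedes' bounds: 3 10/71 < pi < 3 1/7.
assert 3 + 10 / 71 < b < a < 3 + 1 / 7
```

Each doubling roughly quadruples the accuracy of the bounds, which is why four doublings from a hexagon were enough for Archimedes.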

A quite beautiful formula for pi was found by John Wallis, in the form of

Notice how the numerators and the denominators appear to “carry over” to the next fraction!

Shortly thereafter, a much better formula was found by John Machin in 1706:

Pi/4=4*Arccot(5)-Arccot(239)=4*Arctan(1/5)-Arctan(1/239)

This formula can be computed rapidly using the series Arccot(x)=1/x-1/(3x^3)+1/(5x^5)-1/(7x^7)… Formulas of this type, sums of arctans of fractions, are now called "Machin-like formulae". The simplest of these is Pi/4=Arctan(1), followed by

and

The arctans with bigger denominators produce more digits per series term, so the efficiency of a Machin-like formula is limited by the arctan with the smallest denominator. For example, the 2002 record for decimal places of pi was set by Yasumasa Kanada on a supercomputer using Kikuo Takano's

and F. C. M. Størmer's

Even more complicated Machin-like formulae exist, such as Hwang Chien-Lih’s 2002

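As a quick numerical check of how well these identities work, here is Machin's original formula evaluated with the arccot series (a Python sketch; 30 terms is far more than double precision needs):

```python
from math import pi

def arccot_series(x, terms=30):
    """arccot(x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., fast for x > 1."""
    total, sign = 0.0, 1.0
    for k in range(terms):
        total += sign / ((2 * k + 1) * x ** (2 * k + 1))
        sign = -sign
    return total

# Machin, 1706: pi/4 = 4*arccot(5) - arccot(239)
machin_pi = 4 * (4 * arccot_series(5) - arccot_series(239))
assert abs(machin_pi - pi) < 1e-12
```

Compare `4 * arccot_series(1)` (the Gregory-Leibniz series): after the same 30 terms it is still off in the second decimal place, showing why the smallest denominator dominates.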
However, in the computer age, the length or elegance of a formula doesn't count: what matters is the rate at which it converges. Srinivasa Ramanujan, Indian mathematician and nemesis of Bill Gosper ("Every time I find an identity, he's found it before me!"), created a number of formulae for pi, including the following:

where

denotes f(a)+f(a+1)+f(a+2)+…+f(b). Note not only the factorials (n! = 1*2*3*…*n) but also the large terms both on the outside and on the inside, especially the factorial to the 4th power and the 396^(4k), which together make the sum converge exponentially rapidly (a fixed number of new digits per term), as opposed to exponentially slowly as in the Gregory-Leibniz formula. This makes it one of the fastest methods known for computing pi. An even faster algorithm, which has been used to break the pi record many times, is the formula found by the Chudnovsky brothers in 1987:

This rather monstrous formula gives about 14 digits per term, and was used most recently by Shigeru Kondo and Alexander Yee to calculate 5 trillion digits of pi, billions of times more than enough to estimate the area of your wading pool to the atom. There are even formulae that give an exponential number of digits per iteration, with the drawback that each iteration is correspondingly harder. One of these, the Brent-Salamin algorithm, uses only simple arithmetic and would take about 35 iterations to break the record:

First, start with a_0=1, b_0=1/sqrt(2), t_0=1/4, and p_0=1. Then iterate: a_(n+1)=(a_n+b_n)/2, b_(n+1)=sqrt(a_n*b_n), t_(n+1)=t_n-p_n*(a_n-a_(n+1))^2, and p_(n+1)=2*p_n. When you've iterated enough, the estimate for pi is given by (a_n+b_n)^2/(4*t_n). The best of these iterative formulas that I know of is Borwein and Borwein's, which converges like 9^n (effectively, it requires about 11 iterations to beat the current record):

Start with

and then iterate

Then the estimate for pi is given by 1/a_n .
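The Brent-Salamin iteration described above fits in a few lines of Python; in ordinary double precision it bottoms out at machine accuracy after just three or four passes:

```python
from math import sqrt, pi

a, b, t, p = 1.0, 1 / sqrt(2), 0.25, 1.0
for _ in range(4):
    a_next = (a + b) / 2         # arithmetic mean
    b = sqrt(a * b)              # geometric mean
    t -= p * (a - a_next) ** 2   # note the difference (a_n - a_(n+1)), squared
    a = a_next
    p *= 2

estimate = (a + b) ** 2 / (4 * t)
assert abs(estimate - pi) < 1e-9
```

The number of correct digits roughly doubles on every pass, which is what "exponential digits per iteration" means in practice; record attempts run the same loop with arbitrary-precision arithmetic.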

A fairly significant formula, found in 1995 by Simon Plouffe, is the Bailey-Borwein-Plouffe (BBP) formula, which can be used to compute any digit in the hexadecimal representation of pi without needing to know the previous digits; the hexadecimal digits then convert directly into binary bits. In digit-finding form, it is:

This formula was used by PiHex, a now-completed distributed computing project, to determine that the 1 quadrillionth bit of pi is 0. Yahoo later used the same formula to find that the 2 quadrillionth bit of pi is also 0.
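Summed directly, the BBP series converges at about 1.2 hexadecimal digits per term; the digit-extraction trick replaces this straightforward sum with modular arithmetic, but the sum itself is a one-liner:

```python
from math import pi

# Direct evaluation of the BBP series: 12 terms already give full
# double-precision accuracy.
bbp = sum(
    (4 / (8 * k + 1) - 2 / (8 * k + 4) - 1 / (8 * k + 5) - 1 / (8 * k + 6))
    / 16 ** k
    for k in range(12)
)
assert abs(bbp - pi) < 1e-13
```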

Of course, the reason for finding decimal digits of pi is not only to show how great your new supercomputer is, but also to attempt to find a pattern. In base 10 this is probably hopeless, and there are an infinite number of other bases to test, including non-integer bases (e.g. 7/5, sqrt(2), 6*e/19…), which makes an exhaustive search practically impossible; even if base 10 or base 11 or base 16 had a pattern, we might have to look any number of places to find it, as in Carl Sagan's novel Contact, where (spoiler) after a few trillion digits in base 11, one of the main characters finds a field of 0s and 1s whose length is the product of two primes. Plotting the 0s and 1s as black and white dots on her computer screen, she finds a picture of a circle! Something like this is actually possible (though very unlikely): it would follow if pi is "normal", a property which Hardy and Wright showed holds for almost all numbers, but which remains unproven for pi. In fact, there's a website (http://www.dr-mikes-maths.com/pisearch.html) which will search for names in pi expressed in base 26! (end spoiler)

However, there's a way to express pi that doesn't depend on the base: continued fractions! Continued fractions are "infinite fractions" of the form

and are usually written [a0,a1,a2,a3,a4,a5,...] or [a0;a1,a2,a3,a4,a5,...], with all the an positive integers. Many numbers, such as integers and fractions, have finite continued fractions: for example, 1=[1], and 355/113=[3,7,15,1]. (If 355/113 were expressed in decimal, you'd need an infinite number of repeating digits to capture the exact fraction.) A significant advantage that continued fractions have over decimal notation is that irrational numbers can often be expressed as repeating continued fractions. For example,

sqrt(2)=1.4142135623730950488016887242097… but in continued fraction notation

sqrt(2)=[1;2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,...]

Much simpler. In fact, you can go to your friends, claim you know more digits of the square root of 2 than they do, and simply expand the continued fraction far enough to beat them, no matter how many decimal digits they know! Possibly the most elegant of these repeating continued fractions is the one for the golden ratio, (1+sqrt(5))/2:

Also, some transcendental numbers have simple patterns as continued fractions. For example, Euler's number e, equal to lim(n->infinity) (1+1/n)^n and ubiquitous in exponentiation and calculus, has the continued fraction [2,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,...]! Decimal is not as elegant, e being about 2.71828182845904523536…
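The expansions quoted above come from a simple floor-and-invert loop. (Gosper's term-by-term algorithm, mentioned below, is far more clever; this is just the textbook method, exact when run on Fractions.)

```python
from fractions import Fraction

def cf_terms(x, n):
    """First n continued-fraction terms of x by repeated floor-and-invert."""
    terms = []
    for _ in range(n):
        a = int(x // 1)  # integer part
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

# [3; 7, 16] is the same number as [3, 7, 15, 1]: a trailing ...15, 1
# collapses to ...16.
assert cf_terms(Fraction(355, 113), 10) == [3, 7, 16]

# A Pell-equation convergent of sqrt(2) shows the run of 2s:
assert cf_terms(Fraction(665857, 470832), 8) == [1, 2, 2, 2, 2, 2, 2, 2]
```

Feeding in a floating-point number works too, but the terms go wrong once the 53 bits of precision are exhausted, which is exactly why serious term records need exact arithmetic.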

However, despite any hope, pi is not as pretty in continued fraction form, though it is at least independent of base: [3,7,15,1,292 (ack!),1,1,12,1,3,1,14,2,1,1,2,...]. There have been only a few attempts at the continued fraction of pi. Tsu Ch'ung Chi's 355/113=[3,7,15,1] was the first nontrivial one, and Euclid's GCD algorithm can in principle generate the continued fraction of a number, though it throws the terms away. The first major record that I know of was set by Bill Gosper on August 19, 1977, when he computed 204,103 terms using his own algorithm in Macsyma, an early computer algebra system. He beat his own record in 1985 with a whopping 17,001,303 terms, again using his algorithm. Later, in 1999, Hans Havermann beat Gosper's record by using Mathematica to compute 20,000,000 terms, and in March 2002 he extended this to 180,000,000 terms, the previous record.

Now might be a good time to tell why I haven’t been blogging recently.

Over the past few months, I have been working on a C# program, PiCF (not released yet; current alpha source code here), which can calculate the continued fraction of any number, not just pi, using Gosper's algorithm. On October 17th, I calculated approximately 458,000,000 terms of pi in about 3 hours on a 64-bit machine running Windows on a Core 2 Duo @ 3.00 GHz. This was later verified using Mathematica (which took up much more memory than the calculation did!). The program has a command-line interface (with menus!) and uses Emil Stefanov's wrapper of GNU MP for the BigInteger multiplications. The maximum term is still 878783625, originally found by Bill Gosper during the 17M calculation. Other stats: the minimum term is one (naturally), the terms take up a 1.4 GB file (download here if you dare), and I was very nervous.

Pi has, over the years, gained a huge following: March 14th is Pi Day (from the decimal expansion), on which MIT ships its acceptance and rejection letters and on which people make pie, and 1:59:26 of that day is Pi Second; July 22nd (22/7) is Pi Approximation Day; and Kate Bush even wrote a song about pi. Many jokes about pi have surfaced on the Internet, including that:

This may be because, over thousands of years, pi has become so far removed from its original purpose of measuring circles. To start, almost every round object has a volume and surface area involving pi. The volume of a cone is (1/3)*pi*r^2*h, where r is the radius of the base and h is the height. The volume of a torus is (pi^2)/4*(b+a)*(b-a)^2, where a is the inner radius and b is the outer radius.
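That torus formula looks unusual, but it's the textbook V = 2π²Rr² in disguise, with R = (a+b)/2 the distance from the center of the hole to the center of the tube and r = (b-a)/2 the tube radius; a quick check:

```python
from math import pi, isclose

def torus_volume(a, b):
    """Volume of a torus with inner radius a and outer radius b."""
    return pi ** 2 / 4 * (b + a) * (b - a) ** 2

# Same as the textbook 2*pi^2*R*r^2 with R = (a+b)/2 and r = (b-a)/2:
R, r = (2 + 5) / 2, (5 - 2) / 2
assert isclose(torus_volume(2, 5), 2 * pi ** 2 * R * r ** 2)
```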

What about the volume of a crescent? Well, that’s quite a different story…

From Murderous Maths: The Perfect Sausage by Kjartan Poskitt

Perpetual Motion Machines

Perpetual motion machines are machines that produce an unlimited supply of energy, or simply never stop. Inventors have tried for centuries to build a working perpetual motion machine, and all have failed, though sometimes producing novel ideas along the way. It can be shown using the First (conservation of energy) and Second (entropy can never decrease) Laws of Thermodynamics that any perpetual motion machine is impossible, even though people still try in vain to find one.

Most perpetual motion machines are based on a simple device, the overbalanced wheel, invented during or before the Middle Ages. The idea is that weighted rods on hinges stay close to the wheel while going up and stick out while going down, so that the right side always carries more leverage and the left side less, and the wheel, once started, should never stop.

Villard's Wheel

The problem with this idea is that there are more rods on the left side, so the torques balance out, and friction eventually brings the wheel to a stop. Many other types of overbalanced wheels have been tried, such as rolling balls inside a wheel, rolling balls on tracks outside a wheel, more hinges, and complex linkages which don't really do anything.

Many attempts have also been made using buoyancy, such as a chain of ping-pong balls entering a cylinder of water at the bottom, each ball supposedly providing more than enough lift to push the next ball into the chamber, and so on. Keeping the water from spilling out of the chamber as the balls enter is, of course, another problem!

Some perpetual motion machines are so crazy that they appear to be jokes. For example, there's Zimara's windmill, a machine that works roughly as follows: the windmill turns, which squeezes the bellows, which blows air through the pipe, powering the windmill, which squeezes the bellows… and so on.

Zimara's windmill

Of course, on each pass some of the air misses the windmill, so energy escapes from the system, which eventually stops.

Another great example of a "joke" perpetual motion machine is F.G. Woodward's wheel. The wheel, supposedly unbalanced over the top roller, will try to rotate counter-clockwise and descend, except that the bottom roller keeps it from descending, so it should rotate forever.

Any checking with a physics engine, doing force and torque analysis, or just rotating the entire thing will show that it will eventually stop due to friction, like all of the other machines.

Despite the number of proofs of impossibility and failed attempts, a quick look at the internet easily shows that people are still trying to find the right configuration of springs and gears for an impossibility.

Sliding Block Puzzles

Note: Software to simulate these sliding block puzzles will be given and reviewed at the end of the post.

Recently I've gotten interested in sliding block puzzles (or klotski), which are sequential movement puzzles where you have to move a certain block to another position, often by moving other blocks out of the way.

I mentioned the 15 Puzzle in an earlier post; it is one of the most famous puzzles of all. In the late 1800s it sparked a craze which apparently "drove hundreds of people mad": you would scramble the blocks and then try to restore them to their initial positions. It's moderately hard, but easy once you know the one algorithm needed to solve it (unlike the Rubik's Cube, where you may need to learn many algorithms). What really sparked the craze, though, was that Sam Loyd, the famous puzzlist, offered a \$1,000 reward for simply interchanging the positions of the 14 and the 15 while keeping all the other pieces the same. At this point, if you have a 15 puzzle, or can make one, you should definitely try this problem yourself. Software simulating the 15 puzzle is everywhere, and shouldn't be too hard to find. An online version is here.

The 15 Puzzle with the 14 and the 15 interchanged.

Loyd's money was safe, though, because the puzzle is impossible: mathematicians have proved that exactly half of all arrangements are reachable from any given starting position, and the 14-15 swap lies in the unreachable half.
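The impossibility proof boils down to a parity argument: every move of the blank is a transposition, and any tour that returns the blank to its corner uses an even number of moves, so only even permutations of the tiles are reachable. A sketch (the helper names are mine):

```python
def permutation_parity(tiles):
    """Return 0 for an even permutation, 1 for odd, by counting inversions."""
    inversions = sum(
        1
        for i in range(len(tiles))
        for j in range(i + 1, len(tiles))
        if tiles[i] > tiles[j]
    )
    return inversions % 2

solved = list(range(1, 16))      # tiles 1..15, blank in the corner
loyd = solved[:-2] + [15, 14]    # Loyd's challenge: 14 and 15 swapped

# A single swap is an odd permutation, so Loyd's prize money was safe.
assert permutation_parity(solved) == 0
assert permutation_parity(loyd) == 1
```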

Soon after that came sliding block puzzles with differently shaped pieces. Some good examples are Dad's Puzzle, Get My Goat, and the Tit-Bits Teaser #4. These puzzles can be easy, or they can be extremely hard, requiring hundreds of moves for the shortest solution. There are even 3-dimensional sliding puzzles, called burr puzzles, which are sometimes easier than sliding block puzzles; but some, like Gordian's Knot (by ThinkFun), are harder, having many more pitfalls and requiring 63 moves to complete.

Sliding block puzzles are still popular today, with many recent products. The most well-known is Rush Hour by ThinkFun, in which cars slide only in straight lines on a rectangular grid, and the object is to drive the red car out through an opening. The Rush Hour set comes with 40 challenges, as well as the opportunity to make your own. ThinkFun also makes Gordian's Knot, an insanely hard burr puzzle, and a version of the 15 puzzle.

One of the puzzles of Rush Hour

The hardest recently made sliding block puzzle, though, is the Panex Puzzle. It's like the Tower of Hanoi, except that the pieces can go no lower than their original starting positions, and the object is to exchange the two towers. It's so hard that the best known solutions require approximately 30,000 moves, and the true minimum is still unknown.

There are many software programs for simulating, solving, and making sliding block puzzles, so I will list the best ones I have found. Sliding block puzzle solvers have to work by exhaustive search, as no efficient (polynomial-time) algorithm for solving sliding block puzzles has been found.

Taniguchi's Sliding Block Puzzle Solver- A good program for quickly solving sliding block puzzles with pieces of any shape, usually solving even the hardest ones in a matter of seconds; it supports constraints and comes with a number of built-in puzzles. However, the designer and the solver are separate programs, and there is no program for playing the sliding block puzzles that you make. You can download it here.

Klotski (as done by a person named Phil)- A program mainly for creating and playing sliding block puzzles, which it does rather well; it has a great collection of puzzles to start out with, including the evil, evil Sunshine two-part puzzle, which I have not even come close to solving. The method of making new puzzles lacks a good GUI, though (you have to edit a file), and the hosting servers are often down. Download it here.

AAAUUUGGGHHH!!!

SBPSolver- A program which designs, plays, and solves sliding block puzzles. Almost perfect, except that some of it ("Oui" or "Non"?) is in French and the pieces can only be rectangular. It comes with a huge library of puzzles, though, and solves puzzles insanely quickly. Download it here.

Online Collections

John Rausch's Sliding Block Page- A humongous collection of sliding block puzzles, old and new, with the shortest known solutions to them. Some of them are easier than they look (Junk's Hanoi, even though it requires 161 moves, is super-duper easy), and some of them are much, much harder.

New and Old Sliding Block Puzzles- A small collection of some of the hardest sliding block puzzles there are, from Dad's Puzzler to the Devil's Nightcap, which requires well over 600 moves. It could use a few more puzzles, though. (NOTE: The site, as of 11/26/2010, appears to be somewhat hazardous to visit (malware). All of the puzzles, though, are either on one of the above links or on Dries's site, puzzles.net23.net, under Bob Henderson.)

I end this blog post with a small competition: who can make the puzzle requiring the most moves, following these three rules?

1. Empty spaces must take up 1/5 of the total area of the puzzle, not including walls.

2. You may use any constraints you like, as long as you follow these rules.

3. The puzzle must fit inside a 20×20 grid.

You can email your submissions to me at techie314 [at+at-at] gmail [dot*dash/dash] com

I will reveal the winner and runners-up in a future blog post, the winner being the one whose puzzle requires the most moves.

As an example, here’s my terrible puzzle, which takes three moves:

Super-easy

I hope that you can come up with a better one.

The Rubik’s Cube

One of the presents I got for Christmas was the 3x3x3 Rubik’s Cube, possibly the most famous sequential movement puzzle ever, next to the 15 puzzle. It’s remarkably hard, and the best I can do without a guide is to solve either the top face of cubies, a 2x2x2 section of it, or any number of corner cubies. With the guide, I have only managed to solve it a few times, certainly fewer than the number of times I’ve read a long scientific book.

There are any number of methods for solving the Rubik’s Cube, some of which can solve any cube in at most 30 moves (the true worst case is thought to be around 20), and one of the easiest to understand is Dedmore’s method, shown here. Computers are much better at solving the Cube. Some examples are Rob’s Rubix Repair, a service that solves Rubik’s cubes in near-minimum moves depending on how long you are willing to wait; a Rubik’s Cube-solving Lego robot:

A program called ACube solving a 1000*1000*1000 Cube:

And MagicCube4D and 5D, programs for making and solving various 4-dimensional and 5-dimensional Rubik’s Cubes:

Of course, people aren’t too bad at solving various Rubik’s Cubes either, as shown by Guinness World Records (about 10 seconds for the 3x3x3) and the cubes that are available. Companies have made cubes up to 11x11x11, and there are videos, such as the following one, of people solving a 20x20x20 cube.

People have even solved the 4-dimensional and 5-dimensional cubes, as shown by the MagicCube5D Hall of Insanity, where even my computer lags on solving some of them.

I, of course, still can’t solve the 3x3x3.

Even More Animations

Ever since my last post, I’ve been making more and more animations, which, in their uncompressed form, have been putting a strain on my hard drive. However, they have produced some amazing results, and at some point I should upgrade to the newer version of my simulation software so that I can make animations of Wada basins and the newly discovered Mandelbulb, a 3-dimensional version of the Mandelbrot Set discovered by Daniel White.

The first one this time around is an animation of video feedback, the sort of thing you get when you point a video camera at the TV screen showing its own picture:

I also made one showing fractal patterns with gravity sources:

A repeatable one “ping-ponging” the Mandelbrot Set:

and one showing a part of the Mandelbrot Set as the number of iterations is changed.

Of course, I can also make animations which are rendered in 3D, such as the following one showing DLA, or diffusion-limited aggregation, in which a bunch of particles move around randomly until one touches a stationary point, at which point it also becomes stationary:
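The DLA process itself fits in a few lines. Here's a 2D grid sketch of my own (not the code my simulation software actually runs): walkers start at the edge, wander randomly, and freeze as soon as they touch the growing cluster.

```javascript
// Diffusion-limited aggregation on a size-by-size grid. The seed is the
// center cell; each walker is launched from a random edge cell and does a
// random walk until a neighboring cell is part of the cluster.
function dla(size, particles) {
  const key = (x, y) => x * size + y;
  const stuck = new Set([key(size >> 1, size >> 1)]); // seed at center
  const dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  const touchesCluster = (x, y) =>
    dirs.some(([dx, dy]) => stuck.has(key(x + dx, y + dy)));
  for (let i = 0; i < particles; i++) {
    // Random cell on a random edge of the grid.
    let x = Math.floor(Math.random() * size);
    let y = Math.random() < 0.5 ? 0 : size - 1;
    if (Math.random() < 0.5) [x, y] = [y, x];
    // Wander (clamped to the grid) until adjacent to the cluster, then stick.
    while (!touchesCluster(x, y)) {
      const [dx, dy] = dirs[Math.floor(Math.random() * 4)];
      x = Math.min(size - 1, Math.max(0, x + dx));
      y = Math.min(size - 1, Math.max(0, y + dy));
    }
    stuck.add(key(x, y));
  }
  return stuck; // set of cluster cells
}
```

The branchy, coral-like shapes come from the fact that a wandering particle is far more likely to hit the tips of the cluster than to diffuse into its crevices.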

And lastly, a flyover of the Mandelbrot Set, where iterations are mapped to height:

Some more software by the same author can make timelapses via webcam, which can be pretty interesting to watch.

The first one of these is a timelapse of Mount Fuji in Japan, which shows some very interesting details if you look at it closely, such as people getting out of a bus:

And lastly, the most interesting, a timelapse of the buildings in Los Angeles:

I might take a break from making all these animations, and focus on something else for a while, such as puzzles and recreational mathematics.

Of course, I’d make animations of those.

Websites and the Konami Code

UPDATE: The Konami code on neilbickford.com no longer points to Kelly’s website, but to something else… Kelly’s website can still be accessed from neilbickford.com/kelly.htm

Just recently, my friend Mitchell Riley showed me his new site, Interesting Numbers. It’s still in development, as some screens currently bring up errors (not 404s, though).

The idea behind Interesting Numbers is that people can submit facts about numbers, eventually creating a huge database of number facts. The front page shows the first number that has no facts entered about it yet (which will come in handy once the screens are full of numbers), the most recently entered numbers, and the newest facts.

I was able to test his website for bugs, and relatively few showed up under the Over-Customized-Browser Test, which consists of me tweaking the browser: turning Java and JavaScript on and off, changing the window resolution and size, the character encoding, and even more… The only problem was that with JavaScript off, the “Jump to…” drop-down didn’t work.

Then I tried entering random words in the number field. It didn’t accept them, which was good. Then I tried a fraction that I knew was important, 866/455, the average distance between points in a Sierpinski gasket.

Uh oh.

Decimals also didn’t work.

I notified Mitchell about this, and so there might be datatype errors while he’s fixing the bug, which should be quick.

Recently, I came across a JavaScript routine for detecting when the user types in a secret code. For example, you could make it so that when the user types “EULER LIKES PIE”, a window pops up showing Euler eating pie. The code’s called Konami-JS, and you can guess what the default code is.
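The core idea is tiny. Here's a sketch of such a detector in the spirit of Konami-JS (my own code, not its actual API): feed it key names one at a time, and it fires a callback once the whole code has been typed in order.

```javascript
// Returns a key handler that tracks progress through `code` (an array of
// key names) and calls `onMatch` each time the full sequence is typed.
function codeListener(code, onMatch) {
  let progress = 0; // how many keys of the code have been matched so far
  return function handleKey(keyName) {
    if (keyName === code[progress]) progress++;
    // A wrong key resets progress -- but it might itself start a new attempt.
    else progress = (keyName === code[0]) ? 1 : 0;
    if (progress === code.length) { progress = 0; onMatch(); }
  };
}
```

In a browser you'd wire it up with something like `document.addEventListener("keydown", e => handler(e.key))`, and redirect or pop up the easter egg inside the callback.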

Of course, I had to try it out, so now there are 3 easter eggs on my website: one is the simple “Press this!” button, which brings up digits of pi; the second is that typing the Konami Code (up, up, down, down, left, right, left, right, b, a, enter) redirects you to my little sister’s home page; and the third is a secret.

Of course, you could look at the source.

The Computer History Museum

A few weeks ago I invited Bill Gosper to meet at the Computer History Museum in Mountain View, at about noon, to talk about computers. As it turned out, the timing was perfect, and the museum was great.

At the entrance of the Computer History Museum there’s Charles Babbage’s Difference Engine No. 2. It’s absolutely humongous, filling up an entire room, and is insanely complex, even having a printing mechanism attached. All these gears and cogs, though, serve only one purpose: evaluating polynomials up to the seventh degree.

This is about 8 feet tall

Later in the day, we were able to see it in action, and it turns out that it has built-in n-core processing: those rows of digits can evaluate more than one polynomial at a time.
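The trick that lets gears evaluate polynomials is the method of finite differences: a degree-n polynomial has constant n-th differences, so once the initial column of differences is set up, every subsequent value needs only additions, which gears do well. A sketch (the function names are mine, and the engine of course works in decimal digit wheels, not JavaScript numbers):

```javascript
// Build the starting column [f(0), Δf(0), Δ²f(0), ...] from the values
// f(0), f(1), ..., f(degree) by repeatedly taking differences.
function differenceTable(f, degree) {
  let row = [];
  for (let i = 0; i <= degree; i++) row.push(f(i));
  const diffs = [];
  while (row.length) {
    diffs.push(row[0]);
    row = row.slice(1).map((v, i) => v - row[i]); // next-order differences
  }
  return diffs;
}

// Each "turn of the crank" adds each difference into the one above it,
// producing f(0), f(1), f(2), ... using nothing but addition.
function* engine(diffs) {
  const col = diffs.slice();
  while (true) {
    yield col[0];
    for (let i = 0; i < col.length - 1; i++) col[i] += col[i + 1];
  }
}
```

For example, for f(x) = x³ − 2x + 5 the third differences are the constant 6, and cranking the engine reproduces f(0), f(1), f(2), … exactly.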

Oh, and it’s being put into the living room of Nathan Myhrvold (a former CTO of Microsoft).

Soon, we went into the next room, dedicated to machines that have played chess through the ages. It had cartoons, samples of code from the programs, and the Deep Blue computer that beat Garry Kasparov.

We then went into the biggest room, Visual Storage, which was this humongous collection of computers, like a portion of the ENIAC, the PDP-1 (which used the Minsky circle algorithm that Bill Gosper and Julian Ziegler Hunts are trying to prove things about), the Utah Teapot, and even the legendary… Speak n’ Spell.
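For reference, the Minsky circle iteration described at the top of this post really is just two statements; here's a sketch in JavaScript:

```javascript
// One step of the Minsky circle algorithm. Note that the x update uses the
// *already updated* y -- there is no temporary variable, and that is exactly
// what keeps the orbit on a stable (very round) ellipse.
function minskyStep(p, delta, eps) {
  p.y -= delta * p.x;
  p.x += eps * p.y; // y here is the value just computed above
}
```

With delta = eps = 1/16, a point started at (1, 0) circles forever without spiraling in or out, which is the curious stability Gosper and Ziegler Hunts are investigating.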

After that, we went to a pizza place across the street, where I showed Gosper some of my animations that I had put on my flash drive, and (almost) had a chocolate Italian soda. (Not carbonated chocolate milk, that tastes terrible).

We went back to the Computer History Museum just in time for a guided tour of Visual Storage, with Bill Gosper chiming in at points where he had actually used that computer, such as the PDP-1. Apparently they have a warehouse where 90% of their computers are kept, including the full ENIAC, the early army radar system, and other things that would make it a computer geek’s dream.

Overall, we had a great time, and Gosper liked the animations I showed him.

Processing

Update: the executable for Windows is now on my website and is available from the downloads page. The program even has its own subpage!

Recently I went on another search for free programs, which, as always, ended at the compiler for a computer language. Last time it was Java and Context Free; this time it’s Processing.

Processing is a minimalistic computer language commonly used to make really cool simulations and art, and it has a whole bunch of functions for math and cellular automata. It comes with a humongous number of examples (Penrose tiles, sine-wave spheres, and so on), including a bunch of tutorials aimed at the non-programmer, which makes it really easy for someone who hasn’t programmed much to get started.

The very first program I made with it was a line that starts from the upper left-hand corner and follows your mouse. Eventually I went through one of the tutorials that had me make a mistake and then undo it; the “mistake” was that the circles following the mouse wouldn’t disappear, so there would be a trail of circles following the cursor.

This immediately made me think of a paint program, and so I tried to create a simple one in Processing. It’s a fun experience to make a program, but the minimalism of Processing (here’s every single function in it), plus the fact that I wasn’t using any pre-made libraries, made some parts very difficult. For example, there’s the problem of telling if the mouse has clicked a button: I had to create a special function that, given the coordinates of the button, tells you whether the mouse is clicking it or not. So far I’ve managed to get up to version 0.1 Alpha in 3 days, and the program’s very easy to use and makes some quite good stuff.
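That button function boils down to a point-in-rectangle test. Something along these lines (a JavaScript sketch of the idea; my actual Processing code differs, and in Processing the cursor position comes from the built-in mouseX and mouseY variables):

```javascript
// Given the cursor position, a button's top-left corner (bx, by), and its
// width and height, report whether the cursor is inside the button.
function overButton(mx, my, bx, by, bw, bh) {
  return mx >= bx && mx < bx + bw &&
         my >= by && my < by + bh;
}
```

In the paint program, the mouse-pressed handler just calls this once per button and runs whichever button's action matched, falling through to "paint a stroke" when none did.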

Sketches made in the program:

A simple waterfall

Cubes

The program’s open source, and the source code and .exe files for Windows and Linux will appear on my website (neilbickford.com) as soon as I can get them there.

Processing’s really fun to play around with, so I encourage anybody to download a version of it.