## The Minsky Circle Algorithm

Our tale starts with the PDP-1, a \$120,000 room-sized system that was state-of-the-art computer technology when it was released in 1960: 4096 18-bit words of memory, a 200 kHz clock speed, and, for software, only a Macro assembler and a DDT debugger. What it did have was a paper tape reader and a Type 30 precision CRT display. As such, many of the first programs for the PDP-1 were “display hacks”: programs of only a few lines of code which, when run, create intricate patterns on the screen. A prime example of this is the Munching Squares program by Jackson Wright, described in the MIT A.I. Lab’s HAKMEM:

```
foo,    lat
        rcl 9s
        xor bar
        dpy
        jmp foo
bar,    .
```

This creates a sequence of images corresponding to where the bitwise XOR of the x and y coordinates of every point on the screen is less than the frame number. If that sounds a bit complicated, it’s easier to understand when you see the animation:

Gif from Mathworld.

(A very good emulator of the original Munching Squares program is located at http://www.dpbsmith.com/munch.html)

Marvin Minsky, then a member of the Research Laboratory of Electronics (and later the head of the MIT AI Lab), had the goal of developing a program to draw curves on a square grid in as few lines as possible. While attempting to get the system to draw a spiral, he stumbled across a most curious thing: a two-statement program which would draw stable circles! Decoded from machine language, it reads:

loop: y = y – 1/16 * x

x = x + 1/16 * y

Note the lack of temporary variables; in fact, with a temporary y variable, the points spiral out of the circle! However, the program does not draw a perfect circle, but rather a very round ellipse, which becomes rounder as 1/16 is replaced by values closer and closer to 0.

From the Computer History Museum
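In a modern language the two statements can be transcribed almost verbatim. The Python sketch below (my own transcription for illustration, not code from the book) iterates the loop and confirms that the points stay near a circle:

```python
def minsky_points(x, y, steps, eps=1.0 / 16):
    """Iterate Minsky's two-statement loop, collecting each point."""
    pts = []
    for _ in range(steps):
        y = y - eps * x  # first statement
        x = x + eps * y  # second statement uses the NEW y: no temporary!
        pts.append((x, y))
    return pts

# Starting at (100, 0), every point stays near a circle of radius 100.
pts = minsky_points(100.0, 0.0, 500)
radii = [(px * px + py * py) ** 0.5 for px, py in pts]
print(round(min(radii), 1), round(max(radii), 1))
```

If the second statement is changed to use a saved copy of the old y, the determinant of the update matrix exceeds 1 and the points spiral outward, just as described above.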

You can generalize the Minsky circle algorithm by replacing the first 1/16 by δ and the second by ε, to get circles that take longer or shorter to generate:

x = x – δ * y

y = y + ε * x

It turns out that this recurrence can even be solved in closed form! Using a substitution and a “guess-and-by-gosh” method, the authors of the book “Minsky’s and Trinsky’s” (from which most of the content of this article was taken; as of now privately published by the authors Corey Ziegler Hunts, Julian Ziegler Hunts, R.W. Gosper and Jack Holloway) were able to prove that, for the nth step:

Xn = X0 cos(nω) + (X0/2 - Y0/ε) sin(nω)/d

Yn = Y0 cos(nω) + (X0/δ - Y0/2) sin(nω)/d

where d = sqrt(1/(δε) - 1/4) and ω = 2 arcsin(sqrt(δε)/2), which is indeed an ellipse! Note how if δε > 4 or δε < 0, ω becomes imaginary and the ellipse becomes a hyperbola. However, as with anything so seemingly simple, there is something that makes the subject much more complex. In this case, it was the nearly imperceptible (but strictly periodic) wobble of the ellipse.
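The closed form is easy to check numerically against the iteration. This Python snippet (with arbitrarily chosen δ, ε and starting point, assuming 0 < δε < 4) verifies that the two agree:

```python
from math import asin, cos, sin, sqrt

delta, eps = 0.5, 0.3   # arbitrary sample values with 0 < delta*eps < 4
x0, y0 = 1.0, 0.25
d = sqrt(1.0 / (delta * eps) - 0.25)
w = 2.0 * asin(sqrt(delta * eps) / 2.0)

def closed_form(n):
    """The closed-form solution for (Xn, Yn) given above."""
    return (x0 * cos(n * w) + (x0 / 2.0 - y0 / eps) * sin(n * w) / d,
            y0 * cos(n * w) + (x0 / delta - y0 / 2.0) * sin(n * w) / d)

# Run the recurrence and compare against the formula at every step.
x, y = x0, y0
for n in range(1, 101):
    x = x - delta * y
    y = y + eps * x
    cx, cy = closed_form(n)
    assert abs(x - cx) < 1e-9 and abs(y - cy) < 1e-9
print("closed form matches the iteration for 100 steps")
```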

## “The Worst Circles I Ever Drew”

In machine arithmetic, integers, no matter what happens to them, are always integers. While this may seem a triviality, machines with only integer arithmetic effectively apply the floor function to each operation they do. For example, if you divide 1 by 2, in many programming languages the answer will be 0, because the bit routines for dividing integers always return integers: they work to no more than the precision given. To understand what’s going on, we can just pretend that the computer works out the exact number, then finds the largest integer less than or equal to the answer. No problem, right? Inserting floors into the Minsky recursion, like this:

x = x – floor(δ * y)

y = y + floor(ε * x)

shouldn’t hurt the overall mechanics, right? Wrong. When x or y is small enough, the floor actually makes the result of the multiplication 0, losing precision. While this may not seem like a big problem, when we chart the plots for strange values of δ or ε we get something which is definitely not an ellipse. Corey Ziegler Hunts and Julian Ziegler Hunts discovered this behavior (found earlier by Steve Witham) and began to dig deeper. It turns out that if you calculate the periods (how long it takes before the points return to X0 and Y0) of the Minsky recurrence starting at different X0, Y0, δ and ε, the periods can vary wildly. For example, when δ and ε both equal 1, all X and Y (other than 0,0) loop with period 6. On the other hand, the X0, Y0, δ and ε corresponding to {-19/24, -1015/121, 7381/5040, 5040/7381} give a period of no less than 2,759,393! (Even with the floor function, the algorithm is exactly reversible, so it must return to (X0, Y0) or never repeat any point, unless the machine integers overflow.)

This was actually the Zieglers’ first “Rugplot”.
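A brute-force period finder for the floored recurrence takes only a few lines of Python (my own sketch: the standard `fractions` module keeps δ and ε exact so the floors are computed correctly, and the step cap is a safeguard, not part of the original method):

```python
from fractions import Fraction
from math import floor

def minsky_period(x0, y0, delta, eps, cap=10**6):
    """Number of steps before the floored Minsky recurrence returns to
    (x0, y0), or None if no loop is found within `cap` steps."""
    x, y = x0, y0
    for n in range(1, cap + 1):
        x = x - floor(delta * y)
        y = y + floor(eps * x)
        if (x, y) == (x0, y0):
            return n
    return None

# With delta = eps = 1, any nonzero integer start loops in 6 steps:
print(minsky_period(1, 0, Fraction(1), Fraction(1)))  # → 6
```

Note that because x0 and y0 are integers and each update adds an integer, x and y stay integers forever; only the products δ·y and ε·x are ever fractional.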

Another one, x=1, y=0, δ=9/17 and ε=15/2, has been followed for no less than 104 trillion steps in both directions (the Minsky recurrence can be modified to run backward) without looping! A logical question to ask: what do these periods look like when plotted? Well, since there are four variables (x, y, δ, and ε), there are six different ways to plot the periods in pairs: x and y, x and δ, x and ε, y and δ, y and ε, and lastly δ and ε. The Zieglers started with the x-y plots. Now, choose a constant δ and ε, such as 1 and 0.467911, plot the periods mapped to colors for integer x and y, and you get what might as well be a Persian rug! Okay, but maybe that’s just a special case of the Minsky recurrence, and if we try something like δ=1 and ε=0.91939, we’ll just see a blob? Wrong.

Blob it may be, but an intricate one nonetheless.

You can try other Minsky x-y plots at http://openprocessing.org/visuals/?visualID=19109, or click on the picture below:

Click to go to the applet (openprocessing.org)

Coloring methods also have effects on Minsky plots. For example, take the plot δ=100, ε=1/99, which, when rendered with a simple linear coloring from Dr. Fibdork’s version (programmers, see comments), looks like this: However, the authors of the book use a different method to bring out some of the finer details, which shows this: At some point, though, we have to stop messing with numbers and get down to math. If we return to the original Minsky solutions:

Xn = X0 cos(nω) + (X0/2 - Y0/ε) sin(nω)/d

Yn = Y0 cos(nω) + (X0/δ - Y0/2) sin(nω)/d

where d = sqrt(1/(δε) - 1/4) and ω = 2 arcsin(sqrt(δε)/2), we notice that this ellipse returns to its original point when nω mod 2π = 0, because when n=0, nω=0, and because the cos and sin functions have period 2π. Now, let us define the theoretical period (denoted P) of a Minsky recurrence as the first n greater than 0 for which nω mod 2π = 0, which is the same as saying “the n for which nω = 2π”. (Note that n can be a non-integer.) This n can be trivially solved for, and expanding ω we get P = π/arcsin(sqrt(δε)/2). We can write the relation in two ways: the one just given, or, solving for δε, as δε = 4 (sin(π/P))^2, which we can use to get the product of δ and ε from the period.

Earlier on, readers may have looked at the figures for δ and ε and seen something suspicious about them. As it turns out, the first “Rugplot” had a period of 9, and the second had a period of 2π. In general, when P > 4, we can generate very symmetrical plots by setting δ to 1 and ε to 4 (sin(π/P))^2. For P < 5, this method generates very monotonous images (both δ and ε are integers), but when P = 5 we get what Holloway terms “Rastermen”: Koch snowflake-like fractals, but with only five arms, repeated infinitely at sizes following the Fibonacci numbers!

A larger one is available at http://gosper.org/p5d1.png
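Both directions of the period relation are one-liners, sketched here in Python with the sample values from the text:

```python
from math import asin, pi, sin, sqrt

def theoretical_period(delta, eps):
    """P = pi / arcsin(sqrt(delta*eps)/2), valid for 0 < delta*eps <= 4."""
    return pi / asin(sqrt(delta * eps) / 2.0)

def product_for_period(P):
    """The inverse relation: delta*eps = 4*(sin(pi/P))**2."""
    return 4.0 * sin(pi / P) ** 2

print(theoretical_period(1.0, 1.0))      # the delta = eps = 1, period-6 case
print(product_for_period(9.0))           # epsilon for delta = 1: about 0.467911
print(theoretical_period(1.0, 0.91939))  # close to 2*pi
```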

It’s even possible to make animations of Minsky plots, such as along the line δ=3/2+t/200, P 40/139. It turns out that Minsky space is rather epileptic:

Different configurations arise with different functions of time for δ and ε, such as when δ=t and ε=1/(t-1), and δε approaches 1:

The horizontal boundaries at the end are due to the fact that the slope of the major axes of the ellipses in a Minsky x-y plot is approximately ε/δ: in the original Minsky solutions, the amplitude of Xn (roughly the “run” in rise/run) is larger when ε is larger, and the amplitude of Yn (the “rise”) reacts the same way to δ.

## Minskyspace

At this point you may be asking what the other five arrangements of Minsky plots look like. I don’t have the space in this post to talk about them all, but I can describe one of the most interesting Minsky plot methods, in a sense the opposite of the x-y plot: the δ-ε plot. Recall from earlier that the simple Minsky solutions become imaginary if δε > 4. With the floor function in place, it turns out that this is generally not the case. Suppose we use a simple Minsky period-plotting method to draw the periods with x=1 and y=0, where -4 < δ < 4 and -4 < ε < 4:

Rather large (4001x4001), so viewing in hi-res is recommended. 1px (in reality infinitesimally thin) lines are due to scan granularity.

The gray areas are where there was no Minsky loop with a period less than 5000, and the white areas are period 1. As you can see, although the outline of the colored region resembles the curve y=4/x, in many places there are regions of periodicity extending out into the “dangerous” area, such as the really small ones in the upper right corner. (I should note here that it has been proven that these regions of constant period are always rectangles.) Furthermore, the authors of Minsky’s and Trinsky’s have discovered that some areas inside the two “islands” are nonperiodic, such as x=1, y=1/2, δ=9 and ε=1/3. (Plot) Even more, any Minsky point of the form x=1, y=1/2, δ=3^(n+1), ε=3^(-n), where n is an integer greater than 0, is unbounded. Not much is known about these δ-ε plots: we can prove that the shapes are always rectangles, and we can find a few general periods, but beyond that it’s a bit like trying to find the value of an arbitrary point on the Mandelbrot set. Which brings me to my next point: unlike the Minsky x-y plots, you can take a δ and ε and zoom into it! The video below shows a sequence of period-5 x-y plots with δ ranging from 99/100 to close to 1:

Generated by Julian Ziegler Hunts.
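A coarse, low-resolution version of such a δ-ε sweep can be sketched in Python (the grid resolution, step cap, and escape bound here are all my own choices; “gray” below means no loop found, as in the description above):

```python
from fractions import Fraction
from math import floor

def minsky_period(x0, y0, delta, eps, cap=1000, bail=10**9):
    """Period of the floored recurrence, or None if no loop is found
    within `cap` steps (or the orbit escapes past `bail`)."""
    x, y = x0, y0
    for n in range(1, cap + 1):
        x = x - floor(delta * y)
        y = y + floor(eps * x)
        if (x, y) == (x0, y0):
            return n
        if abs(x) > bail or abs(y) > bail:
            return None  # clearly escaping; stop early
    return None

# Sweep a coarse grid over -4 < delta, eps < 4 with x0 = 1, y0 = 0.
res = 4  # grid points per unit (the original image is 4001x4001)
counts = {}
for i in range(-4 * res + 1, 4 * res):
    for j in range(-4 * res + 1, 4 * res):
        p = minsky_period(1, 0, Fraction(i, res), Fraction(j, res))
        counts[p] = counts.get(p, 0) + 1

print(counts.get(None, 0), "gray points;", counts.get(1, 0), "white (period-1) points")
```

The bail-out is justified by the reversibility argument: an integer orbit that stays bounded must eventually loop, so an orbit that never loops has to wander off to infinity.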

Some of the areas can seem fractal, and it’s certainly possible to find patterns that seem to converge to a series of infinitely thin rectangles, such as the top-right edge of the x=1, y=1 δ-ε space (δε ≤ 4). Other areas, such as the one near x=0, y=8, δ=173/80, ε=137/96, display more localized behavior. However, in many of the places where the differences in size between rectangles go to 0, the periods appear to approach ∞, such as when you approach x=1, y=0, δ=1, ε=1 from the upper-left-hand side. These places, called “accumulation points”, seem to be ubiquitous throughout any δ-ε plot. As a great example of this, take the neighborhood of x0=0, y0=8, ε=10/7, δ=37/17 (zoomed in and recolored from the image above), which Bill Gosper has termed the “Neighborhood of a Monster” because of the “monsters” (points with gigantic periods) that occur there. In this case, even though the center appears at first to be devoid of accumulation points, there are some particularly nasty periods, right in between two large blocks!

Double-click to see in high resolution

There’s tons more to study, of course, but in reality all of the pictures we see of Minsky plots, whether x-y, δ-ε, or some combination of the two, are slices of a 4-dimensional, jagged, infinite object called Minskyspace. The 4 dimensions come from the four parameters, and while slices of this object have not yet been rendered in a dimension higher than 2, we can tell a few things about it:

• It goes forever in the x,y,δ, and ε directions. However, in the δ and ε directions, it takes on a rather hyperbolic shape, due to the original, non-rounded Minsky circle algorithm.
• Nearly half of it seems to be missing, due to δ*ε being less than 0.
• Certain “congruence regions” (areas in Minskyspace where the periods are the same and the orbits are identical modulo translation) are shaped like polyhedra instead of infinitely thin slices when sliced through with a 3-dimensional plane! (Some of the faces are curved, though.)
• At irrational δ*ε , there can be infinitely fine structure around that area, but a conjecture is that there is no irrational δ*ε which has infinite period.
• All accumulation points, infinitely near their centers, have unlimited period.
• Conjecture: All congruence regions are Cartesian products of two polygons with at most 6 hyperbolically curved sides.
That’s a bit of what we know for sure. There are tons of conjectures, and I’ve hosted a version of the Minsky “to do” list, “Problems posed and solved”, at neilbickford.com/Minsky/problems.txt.

In summary: Although some algorithms may seem simple, there can be great areas of mathematics behind them. We still don’t know all about the shapes that the Minsky circle algorithm creates, and there are still many problems waiting to be uncovered and solved. Even if you don’t want to tackle recursive formulae, just discovering areas in Minskyspace is a fun and entertaining pastime.

1. As a note on the coloring methods above, Dr. Fibdork uses the Processing code
```
double LogMax = log(maximum);
for (int i = 0; i < xSize; i++) {
  for (int j = 0; j < ySize; j++) {
    result = (int)(16777216 * (log(periods[i][j]) / LogMax)); // Log transformation
    fill(result % 256, (result / 256) % 256, result / 65536); // Choose the color
    rect(i, j, 1, 1); // Draw a pixel with that color
  }
}
```

where maximum is the maximum value encountered, periods is a matrix containing the periods of the points, and xSize and ySize are the sizes of the window.
This coloring is pretty fast, but leaves out many small features. However, Corey and Julian use the Mathematica code
```
(* Returns a function for converting Minsky periods into colors.
   maxperiod is the largest period in the list of periods the function
   is to be applied to. *)
MinskyColorAlgorithm[maxperiod_] /; And[IntegerQ[maxperiod], maxperiod > 0] :=
  Block[{d, L},
    FromDigits[
        Partition[
          Rest[IntegerDigits[
            8^d + Floor[(8^d - 1)*(1 - Log[Min[maxperiod, #]]/L)], 2]], 3],
        2]/2.^d & /. {L -> Log[N[Max[maxperiod, 2]]],
      d -> Max[6, Ceiling[Log[8, maxperiod]]]}
  ]
```
They then apply this function to the array of periods, and then plot the colors they get. I don’t know if either is the best coloring method possible, so I would encourage others to come up with their own.
–Neil Bickford

2. That is a fascinating post. Yesterday at U of Arkansas, I gave a talk encouraging people to use computers to look at obscure problems. The computers of today are a million times as powerful as computers from a few decades ago. Thus, if a problem vulnerable to computer study hasn’t been looked at recently, it basically hasn’t been looked at.
Stage 2 in a modern computer analysis is to generate some big numbers or number sequences that come out of the computer searches, and to post results online. Okay, that is definitely done.
These are some beautiful results here.

• Marc LeBrun
• April 8th, 2011

Wonderful write up! (Just one minor request: please put a pause button on the munching squares animation!) Thanks!

• Ralph
• May 18th, 2011

Several decades ago I spent a good bit of time performing computer experiments with 2D cellular automata. I went on a long journey of testing various grid geometries, systems of phased updates and rules modeled on chemical reactions.

Also, coincidentally, I also spent a good bit of time working on very close relatives of the Minsky algorithm you’ve discussed above, though in the context of physics simulations rather than a study of the detailed patterns one can observe.

There is no end to the experimental fun of that kind of activity. But does it all lead anywhere? The combination of empirical observations interspersed with some mathematical deductions now makes me a little bit jittery, because it seems to spread out, both geometrically and intellectually, in all directions without any horizon or sight of land.

In the not so distant past, number theorists spent disturbingly long weeks and months creating lists of primes, catalogs of polynomials and so on. Presumably they didn’t embark on such time-consuming studies without some idea about where they hoped to arrive. Now that we have these marvelous computers, the possibilities are (countably) infinite, but the goals are off somewhere in the mist.

This is decidedly not meant as a critique. It is just my idea of occasionally putting up a periscope to look around and see where we might be heading. Any comments would be appreciated.

• Elto Desukane
• January 24th, 2012

The inexhaustibility of mathematics going off in all directions can be exhilarating but also disheartening. I was reading about ways to add/multiply 1s to build up any integer. So many variations can be done, using other digits, operations, +/-, ^, mod, binary, bases, etc…, in how many ways, min number of symbols, of 1s, of parentheses, …