# Announcing a new blogging website, Page Fault

After nearly five years (and nearly 10 years since I started blogging), I’m starting a new blog, Page Fault, at https://www.neilbickford.com/blog/. The first new article is a tutorial that shows you how to build a real-time denoiser in JSFX, REAPER’s audio programming language. It works well in practice, and it’s been used in production for several videos at the UCLA Game Music Ensemble.

You’ll also notice that the last two posts have been copied over to Page Fault. Since there are many websites that link to https://nbickford.wordpress.com, this blog will also remain online to avoid breaking those hyperlinks. Please note that some articles on this website are down for maintenance at the moment.

# Hyperbolic Self-Similarity

This texture shrinks on one axis while stretching on the other – and yet somehow loops every three seconds.

Here’s the full code of a (very) heavily modified version, which fits in one and a half tweets (you can also view it on Shadertoy):

```glsl
// 18 layers: each layer stretches the texture by p on one axis and shrinks it
// by 1/p on the other; fract(iDate.w/3.) makes the whole thing loop every 3 s.
#define p exp2(fract(iDate.w/3.)+float(i))
void mainImage( out vec4 fragColor, in vec2 fragCoord ){
    fragColor = vec4(0.); // 'out' variables start undefined, so zero it first
    for(int i=-9;i<9;i++){
        fragColor += min(p,1./p)*texture2D(iChannel0,fragCoord*.004*vec2(1./p,p))/3.;
    }
}
```

# Images from “Marble Runs and Turing Machines”

I just returned from this year’s G4G11, where I presented my work on the computability of marble runs (with some success: while I did finish my slides, I went almost two and a half minutes over time despite editing and practicing my talk. PROTIP: Don’t go on a two-day vacation four days before a conference whose third major theme was sleep deprivation!) I promised I would post the full construction for a marble-run computer here, which not only shows that the double-switch marble run halting problem is PSPACE-complete, but also should enable simple proofs of computational universality for many other systems. But enough with the jargon; this is going to be more of a slideshow-style post, though I’ll soon post a more in-depth explanation of my talk. (The original slides are also available here, including notes on the script.)

But first, some acknowledgements. The very first image, an example of a marble machine, was taken directly from denha’s fantastic video, “Marble machine chronicle”. While denha’s YouTube channel focuses primarily on marble machines, the electronics videos are certainly interesting as well, so definitely check it out. I used Blender for most of the non-schematic animations, so thanks to the people behind the animation engine and Cycles Render. And finally, the proof would undoubtedly have been much more difficult without the ideas of Demaine, Hearn, and Demaine (choose your own ordering), not to mention the many other people who’ve done work on computational complexity theory and everything that led up to the field. (I’m sure some would prefer I name-checked everybody involved, but then I’d have to include my kindergarten teacher and the crew behind the Broome Bridge and, well, this would be a lot longer.)

So, without further ado, here are the various images and videos from my talk, presented in substantially more than 6 minutes.

This is, as mentioned above, denha’s “Marble machine chronicle”, for those who have never heard of marble runs under that particular name before.

I made (with assistance from Peter Bickford) a short video demonstrating a few recurring elements in marble machines, specifically the ones I would be analyzing later in the talk. The original marble machine, from the Tech Museum of Innovation, also contains a few other elements (such as loop-de-loops, chimes, and randomized switches), some of which do nothing to the computational ability of the machine, while others actually change the problem a bit. Additionally, I consider problems with exactly one marble or pool ball, although Hilarie Orman suggested that it might be possible to simplify the construction using two or more pool balls.

This is a decoy that looks like a duck, used for a quick gag and as a counterexample to the Duck test. This was actually the shortest clip and the longest render of the entire project; the good news was, it rendered at 24 times the speed of a Pixar film. The bad news was that it rendered at only 24 times the speed of a Pixar film.

Next, a short slide made to demonstrate the proof that single-switch marble machines cannot emulate a computer (unless NP=PSPACE, which is really unlikely). Up top, we have the problem we’re trying to solve; on the lower left is a randomly generated marble run with its edges annotated, and on the right is a system of equations representing said marble run. (Click through to see the rest of the images.)

# PiCF version 1.0.0 is now out!

All major bugs have been fixed, the integrated help system is now functional, and it even comes with a tutorial! It’s described in more detail in my pi post.

# Pi

Pi is one of the greatest numbers of all time. It has been known for thousands of years, and over that time it has gained quite a bit of popularity in the form of celebrations such as Pi Day, all for a number that comes from the simplest smooth object: a circle. Suppose you have a circle 1 inch wide, and you take a measuring tape and measure around its edge. You’ll find that it comes out to 3 inches and a bit, and if you scale the inch up to a foot, you might get 3.14 if you look carefully enough. On a more extreme scale, you could go out to a crop circle, measure it, and perhaps get 3.1415926. Now, suppose you have a perfect circle and an infinitely precise ruler (for lengths shorter than an atom), and do the same thing once again. You would get the number 3.141592653589793238462643383…, which is written as the Greek letter pi.

One of the first mentions of pi is in the Bible, where 1 Kings 7:23–26 states:

> And he [Hiram] made a molten sea, ten cubits from the one rim to the other; it was round all about, and…a line of thirty cubits did compass it round about…. And it was an hand breadth thick….
This states that pi=3, a definite approximation, but a terrible one nonetheless. A slightly better approximation was made by Archimedes, who developed a method for computing pi by bounding the circumference of a circle between the perimeters of inscribed and circumscribed polygons with large numbers of sides. (Illustrated with polygons of 5, 6, and 8 edges.)

Using this method with two 96-sided polygons, he got 3+10/71 < pi < 3+1/7, an approximation accurate to 2 decimal places: 3.14… Ptolemy later improved this to 3.141…, and Tsu Ch’ung Chi improved it again to 355/113, correct to 6 places. Later on, in the 1600s, Gottfried Leibniz and James Gregory independently found an infinite sum for pi: pi = 4*(1 - 1/3 + 1/5 - 1/7 + …). The proof of this requires calculus, but takes up less than a page. The Gregory-Leibniz formula is rarely used because the number of terms needed grows exponentially with the number of digits, which would bog down even the fastest computers. A formula that is only slightly better, but much more amazing, had already been found by Francois Viete in 1593, using only the number 2: 2/pi = (sqrt(2)/2) * (sqrt(2+sqrt(2))/2) * (sqrt(2+sqrt(2+sqrt(2)))/2) * …. A quite beautiful formula for pi was found by John Wallis, in the form pi/2 = (2/1)*(2/3) * (4/3)*(4/5) * (6/5)*(6/7) * …. Notice how the numerators and the denominators appear to “carry over” to the next fraction!
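To see just how slowly the Gregory-Leibniz series converges, here’s a quick Python sketch (my own toy, of course, not anyone’s record-setting code); the error after n terms is roughly 1/n, so each extra correct digit costs about ten times as many terms:

```python
# Error of the Gregory-Leibniz partial sums pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...)
import math

def leibniz_pi(n_terms):
    """Sum the first n_terms of the Gregory-Leibniz series for pi."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 1000, 100000):
    # Each 100x increase in terms buys only about two more digits.
    print(n, abs(math.pi - leibniz_pi(n)))
```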

Shortly thereafter, a much better formula was found by John Machin in 1706:

Pi/4=4*Arccot(5)-Arccot(239)=4*Arctan(1/5)-Arctan(1/239)

This formula can be computed rapidly using the series Arccot(x) = 1/x - 1/(3x^3) + 1/(5x^5) - 1/(7x^7) - …. Formulas of this type, sums of arctans of fractions, are now called “Machin-like formulae”. The simplest of these is Pi/4 = Arctan(1). Arctans with bigger denominators produce more digits per series term, so the efficiency of a Machin-like formula is limited by the arctan with the smallest denominator. For example, the 2002 pi decimal-places record was set by Yasumasa Kanada on a supercomputer using Kikuo Takano’s formula, Pi/4 = 12*Arctan(1/49) + 32*Arctan(1/57) - 5*Arctan(1/239) + 12*Arctan(1/110443), and F. C. W. Störmer’s, Pi/4 = 44*Arctan(1/57) + 7*Arctan(1/239) - 12*Arctan(1/682) + 24*Arctan(1/12943). Even more complicated Machin-like formulae exist, such as one found by Hwang Chien-Lih in 2002.

However, in the computer age, neither the length nor the elegance of a formula counts: it’s the rate at which the formula converges. Srinivasa Ramanujan, Indian mathematician and nemesis of Bill Gosper (“Every time I find an identity, he’s found it before me!”), created a number of formulae for pi, including the following: 1/pi = (2*sqrt(2)/9801) * sum from k=0 to infinity of (4k)!*(1103 + 26390k) / ((k!)^4 * 396^(4k)). Note not only the factorials (n! = 1*2*3*4*5*…*n) but also the large terms both on the outside and on the inside, especially the factorial to the 4th power and the 396^(4k), which make the sum converge exponentially rapidly (roughly 8 digits per term), as opposed to exponentially slowly as in the Gregory-Leibniz formula. This makes it one of the fastest series known for computing pi.

An even faster formula, which has been used to break the pi record many times, was found by the Chudnovsky brothers in 1987: 1/pi = 12 * sum from k=0 to infinity of (-1)^k * (6k)! * (13591409 + 545140134k) / ((3k)! * (k!)^3 * 640320^(3k+3/2)). This rather monstrous formula gives about 14 digits per term, and was used most recently by Shigeru Kondo and Alexander Yee to calculate 5 trillion digits of pi, billions of times more than enough to estimate the area of your wading pool to the atom. There are even methods that give an exponential number of digits per iteration, with the drawback that each iteration requires full-precision arithmetic and so gets increasingly expensive.
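Machin-like formulae are easy to program. Here’s a sketch of Machin’s own formula using the arccot series and plain Python integers in fixed-point form (a toy of mine, far simpler than real record-setting code):

```python
# Pi via Machin's formula pi/4 = 4*arccot(5) - arccot(239), with each value
# stored as an integer scaled by `unity` so we can keep as many digits as we like.
def arccot(x, unity):
    """Return arccot(x) * unity via the series 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x           # next power of 1/x^2
        total += sign * term // n
        n += 2
        sign = -sign
    return total

def machin_pi(digits):
    unity = 10 ** (digits + 10)  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi // 10 ** 10        # drop the guard digits

print(machin_pi(30))   # 3141592653589793238462643383279
```

The returned value is pi times 10^digits, truncated; the guard digits keep the per-term floor-division errors out of the digits you asked for.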
One of these, the Brent-Salamin algorithm, uses only simple arithmetic and would take only about 35 iterations to break the record:

First, start with a_0 = 1, b_0 = 1/sqrt(2), t_0 = 1/4, and p_0 = 1. Then iterate: a_(n+1) = (a_n + b_n)/2, b_(n+1) = sqrt(a_n*b_n), t_(n+1) = t_n - p_n*(a_n - a_(n+1))^2, and p_(n+1) = 2*p_n. When you’ve iterated enough, the estimate for pi is given by (a_n + b_n)^2 / (4*t_n). The best of these iterative formulas that I know of is Borwein and Borwein’s, which converges like 9^n (effectively, it requires only about 11 iterations to beat the current record); its iteration is more involved, and its final estimate for pi is given by 1/a_n.
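The Brent-Salamin iteration is short enough to sketch with Python’s decimal module (note the update t_(n+1) = t_n - p_n*(a_n - a_(n+1))^2, which is the standard form; each pass roughly doubles the number of correct digits):

```python
# Brent-Salamin (Gauss-Legendre) iteration for pi at arbitrary precision.
from decimal import Decimal, getcontext

def brent_salamin(digits, iterations=8):
    getcontext().prec = digits + 10          # working precision with guard digits
    a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
    t, p = Decimal(1) / 4, Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2                 # arithmetic mean
        b = (a * b).sqrt()                   # geometric mean
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(str(brent_salamin(50))[:52])   # "3." followed by the first 50 decimals
```

Eight iterations already give far more than 50 digits; the limit here is the working precision, not the iteration count.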

A fairly significant formula, found in 1995 by Simon Plouffe, is the Bailey-Borwein-Plouffe formula, pi = sum from k=0 to infinity of (1/16^k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)), which can be used to compute any digit in the hexadecimal representation of pi without needing to know the previous digits, and therefore any binary bit as well. This formula was used by PiHex, a now-completed distributed computing project, to determine that the 1 quadrillionth bit of pi is 0. Yahoo later used the same method to find that the 2 quadrillionth bit of pi is also 0.
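Here’s a sketch of the digit-extraction trick: to get hex digit n+1, compute the fractional part of 16^n * pi, using modular exponentiation so that the huge integer part is never formed. (With plain double precision this is only good for modest n; PiHex and friends used more careful arithmetic.)

```python
# BBP digit extraction: hex digits of pi without the preceding digits.
def bbp_series(j, n):
    """Fractional part of the sum over k of 16^(n-k) / (8k + j)."""
    total = 0.0
    for k in range(n + 1):                    # 16^(n-k) is an integer here,
        total += pow(16, n - k, 8 * k + j) / (8 * k + j)   # so reduce it mod 8k+j
    k = n + 1
    while True:                               # rapidly shrinking tail terms
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        total += term
        k += 1
    return total % 1.0

def pi_hex_digit(n):
    """Hex digit n+1 of pi after the point (n = 0 gives '2')."""
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    return "%x" % int(x * 16)

print("".join(pi_hex_digit(i) for i in range(10)))   # 243f6a8885
```

In hex, pi = 3.243F6A8885A3…, so the printed digits match.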

Of course, the reason for finding decimal digits of pi is not only to show how great your new supercomputer is, but also to attempt to find a pattern. Finding one in base 10 is probably unlikely, and there are an infinite number of other bases to test, including non-integer bases (e.g. 7/5, sqrt(2), 6*e/19, …), which makes an exhaustive search practically impossible. Even if base 10 or base 11 or base 16 had a pattern, we might have to look any number of places to find it, as in Carl Sagan’s novel Contact, where (spoiler) after a few trillion digits in base 11, one of the main characters finds a field of 0s and 1s whose length is the product of two prime numbers. Plotting the 0s and 1s as black and white dots on her computer screen, she finds a picture of a circle! Something like this is actually possible (though very unlikely): it would be guaranteed if every finite digit sequence appeared in pi, which is widely conjectured (pi is believed to be a “normal” number) but has never been proved. In fact, there’s a website (http://www.dr-mikes-maths.com/pisearch.html) which will search for names in pi expressed in base 26! (end spoiler)

However, there’s a way to express pi that doesn’t depend on the base: continued fractions! Continued fractions are “infinite fractions” of the form a0 + 1/(a1 + 1/(a2 + 1/(a3 + …))), usually written as [a0,a1,a2,a3,a4,a5,…] or as [a0;a1,a2,a3,a4,a5,…] with all the an positive integers. Many numbers, such as integers and fractions, have finite continued fractions: for example, 1 = [1], and 355/113 = [3,7,15,1]. Of course, if 355/113 were expressed in decimal, you’d have to write an infinite (repeating) string of digits to get the exact fraction. A significant advantage that continued fractions have over decimal notation is that irrational numbers can often be expressed as repeating continued fractions. For example,

sqrt(2)=1.4142135623730950488016887242097… but in continued fraction notation

sqrt(2)=[1;2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,…]

Much simpler. In fact, you can go to your friends, claim you know more digits of the square root of 2 than them, and simply expand the continued fraction out far enough to beat them no matter how many decimal digits they know! Possibly the most elegant of these repeating continued fractions is the one for the Golden Ratio, (1+sqrt(5))/2 = [1;1,1,1,1,…]. Also, transcendental numbers can sometimes be expressed as simple continued fractions with a clear pattern. For example, Euler’s number, e, is equal to lim(n->infinity) (1+1/n)^n and appears throughout exponentiation and calculus. In continued fraction form, it is equal to [2;1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,…]! Decimal is not as elegant, e being about 2.71828182845904523536…
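Expansions like these can be generated mechanically: peel off the integer part as the next term, then invert the fractional remainder. A sketch with exact rational arithmetic:

```python
# Simple continued fraction expansion by repeated "floor and invert".
from fractions import Fraction

def continued_fraction(x, max_terms=20):
    terms = []
    for _ in range(max_terms):
        a = int(x)          # integer part = next term
        terms.append(a)
        frac = x - a
        if frac == 0:
            break           # rational inputs terminate
        x = 1 / frac        # invert the remainder and repeat
    return terms

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
```

Note that this returns the canonical form [3, 7, 16], which is the same number as [3, 7, 15, 1], since a trailing …, a, 1 equals …, a+1. With exact Fraction inputs this is just Euclid’s algorithm in disguise; with floats, rounding error makes the later terms meaningless.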

However, despite any hope, pi is not as pretty in continued fraction form, though the expansion is independent of base: [3;7,15,1,292 (ack!),1,1,12,1,3,1,14,2,1,1,2,…]. There have been only a few record attempts at the continued fraction of pi. Tsu Ch’ung Chi’s 355/113 = [3,7,15,1] was effectively the first nontrivial one, and Euclid’s GCD algorithm actually generates continued-fraction terms as its quotients, though the algorithm itself just throws them away. The first major record that I know of was set by Bill Gosper on August 19, 1977, when he computed 204,103 terms using his own algorithm in Macsyma, an early computer algebra system. He beat his own record in 1985 with a whopping 17,001,303 terms, again using his algorithm. Later, in 1999, Hans Havermann beat Gosper’s record by using Mathematica to compute 20,000,000 terms. He beat this again in March 2002 with 180,000,000 terms, the previous record.

Now might be a good time to tell why I haven’t been blogging recently.

Over the past few months, I have been working on a C# program, PiCF (not released yet; current alpha source code here), which can calculate the continued fraction of any number, not just pi, using Gosper’s algorithm. On October 17th, I calculated approximately 458,000,000 terms of pi in about 3 hours on a 64-bit machine running Windows on a Core 2 Duo @ 3.00 GHz. This was later verified using Mathematica (which took up much more memory than the calculation did!). The program is written in C#, has a command-line interface (with menus!), and uses Emil Stefanov’s wrapper of GNU MP for the BigInteger multiplications. The maximum term is still 878783625, originally found by Bill Gosper during the 17-million-term calculation. Other stats: the minimum term is one (naturally), the terms take up a 1.4 GB file (download here if you dare), and I was very nervous.

Pi has, over the years, gained a huge following: March 14th is Pi Day (from the decimal expansion), on which MIT ships its acceptances and rejections and on which people make pie, and 1:59:26 of that day is Pi Second; July 22nd is Pi Approximation Day (22/7), and Kate Bush even wrote a song about pi. Many jokes about pi have surfaced on the Internet. This may be because, over the thousands of years, pi has become so far removed from its original purpose: measuring circles. To start, almost every round object has a volume and surface area that involve pi. The volume of a cone is (1/3)*pi*r^2*h, where r is the radius of the base and h is the height. The volume of a torus is (pi^2/4)*(b+a)*(b-a)^2, where a is the inner radius and b is the outer radius.
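That torus formula looks unusual written in terms of the inner radius a and outer radius b, so here’s a quick sanity check against the more common textbook form 2*pi^2*R*r^2, where R is the center-line radius and r is the tube radius:

```python
# Check that (pi^2/4)*(b+a)*(b-a)^2 equals 2*pi^2*R*r^2
# with R = (a+b)/2 and r = (b-a)/2. They are algebraically identical.
import math

def torus_volume_post(a, b):
    return math.pi ** 2 / 4 * (b + a) * (b - a) ** 2

def torus_volume_textbook(a, b):
    R, r = (a + b) / 2, (b - a) / 2
    return 2 * math.pi ** 2 * R * r ** 2

print(math.isclose(torus_volume_post(2.0, 5.0), torus_volume_textbook(2.0, 5.0)))  # True
```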

What about the volume of a crescent? Well, that’s quite a different story…

# Sliding Block Puzzles

Note: Software to simulate these sliding block puzzles will be given and reviewed at the end of the post.

Recently I’ve gotten interested in various sliding block puzzles (or klotski) , which are sequential movement puzzles where you have to move a certain block to another position, often involving moving other blocks out of the way.

I mentioned the 15 Puzzle in an earlier post; it is one of the most famous examples of these puzzles. In the late 1800s, it sparked a 15 Puzzle craze which apparently “drove hundreds of people mad”. In it, you scramble up the blocks and then try to restore them to the initial position. It’s moderately hard, but easy if you know the one algorithm needed to solve it. (This is unlike the Rubik’s Cube, where you may need to learn many algorithms.) The thing that really sparked the puzzle craze, though, was that Sam Loyd, the famous puzzlist, offered a $1,000 reward for simply interchanging the positions of the 14 and the 15 while keeping all the other pieces the same. At this point, if you have a 15 Puzzle, or can make one, you should definitely try to solve this problem. Software for simulating the 15 Puzzle is everywhere and shouldn’t be too hard to find; an online version is here. (Pictured: the 15 Puzzle with the 14 and the 15 interchanged.)

Loyd’s money was safe, though, because the puzzle is impossible: mathematicians have proved that exactly half of all arrangements can be reached from any given starting position, and the 14-15 swap is in the wrong half.
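The impossibility proof boils down to a parity argument, and the standard solvability test for width-4 boards is short enough to sketch:

```python
# Solvability test for the 4x4 fifteen puzzle: count inversions among the
# tiles, add the blank's row counted from the bottom (1-based), and the
# position is reachable from the solved state iff the sum is odd.
def is_solvable(tiles):
    """tiles: the 16 cells row by row, with 0 for the blank; goal is 1..15, blank last."""
    perm = [t for t in tiles if t != 0]
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    blank_row_from_bottom = 4 - tiles.index(0) // 4   # 1-based
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
loyd = list(range(1, 14)) + [15, 14, 0]   # Loyd's 14-15 swap
print(is_solvable(solved), is_solvable(loyd))   # True False
```

Loyd’s challenge position has exactly one inversion (the swapped pair), which flips the parity, so no sequence of slides can fix it.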

Soon after that came sliding block puzzles with differently shaped pieces. Some good examples of these are Dad’s Puzzle, Get My Goat, and the Tit-Bits Teaser #4. These puzzles can be easy, or they can be extremely hard, requiring hundreds of moves for the shortest solution. There are even 3-dimensional sliding block puzzles, called burr puzzles, which are sometimes easier than flat sliding block puzzles; but some, like Gordian’s Knot (by ThinkFun), are harder, having many more pitfalls and requiring 63 moves to complete.

Sliding block puzzles are still popular today, with many recent products. The most well-known is Rush Hour by ThinkFun, in which cars slide only in straight lines on a rectangular grid, and the object is to drive the red car out through an opening. The Rush Hour set comes with 40 challenges as well as the opportunity to make your own. ThinkFun also makes Gordian’s Knot, an insanely hard burr puzzle, and a version of the 15 Puzzle. (Pictured: one of the Rush Hour challenges.)

The hardest sliding block puzzle made recently, though, is the Panex Puzzle. It’s like the Tower of Hanoi, except that the pieces can go no lower than their original starting positions, and the object is to switch the two towers. It’s so hard that the best known solution requires approximately 30,000 moves, and the exact minimum is still unknown. There are many software programs for simulating, solving, and making sliding block puzzles, so I will list the best ones I have found. Sliding block puzzle solvers have to work by exhaustive search: deciding whether a general sliding block puzzle is even solvable is PSPACE-complete (Hearn and Demaine), so a polynomial-time algorithm is not expected.
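At their core, these solvers run a search over board states. Here’s a minimal sketch of the idea, shown on the 3×3 “8 puzzle” for brevity (real solvers add arbitrary piece shapes and far better pruning, but the state-graph breadth-first search is the same):

```python
# Breadth-first search over sliding-puzzle states: guarantees a minimum-move
# solution because states are visited in order of increasing distance.
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the empty square

def neighbors(state):
    """All states reachable by sliding one tile into the empty square."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """Minimum number of moves from start to GOAL, or -1 if unreachable."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, moves = queue.popleft()
        if state == GOAL:
            return moves
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + 1))
    return -1

print(solve((1, 2, 3, 4, 5, 6, 7, 0, 8)))   # 1: slide the 8 left
```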

Taniguchi’s Sliding Block Puzzle Solver – A good program for quickly solving sliding block puzzles with pieces of any shape; it usually solves even the hardest ones in a matter of seconds, supports constraints, and comes with a number of built-in puzzles. However, the designer and the solver are separate programs, and there is no program for playing the puzzles you make. You can download it here.

Klotski (as done by a person named Phil) – A program mainly for creating and playing sliding block puzzles, which it does rather well; it has a great collection of puzzles to start out with, including the evil, evil Sunshine two-part puzzle, which I have not even come close to solving. The way of making new puzzles lacks a good GUI, though (you have to edit a file), and the hosting servers are often down. Download it here.

SBPSolver – A program which designs, plays, and solves sliding block puzzles. Almost perfect, except that some of it (“Oui” or “Non”?) is in French, and the pieces can only be rectangular. It comes with a huge library of puzzles, though, and solves puzzles insanely quickly. Download it here.

Online Collections—

John Rausch’s Sliding Block Page – A humongous collection of sliding block puzzles, old and new, with the shortest known solutions to them. Some of them are easier than they look (Junk’s Hanoi, even though it requires 161 moves, is super-duper easy), and some of them are much, much harder.

New and Old Sliding Block Puzzles – A small collection of some of the hardest sliding block puzzles there are, from Dad’s Puzzler to the Devil’s Nightcap, which requires well over 600 moves. It could use a few more puzzles, though. (NOTE: As of 11/26/2010, the site appears to be somewhat hazardous to visit (malware). All of the puzzles, though, are either on one of the above links or on Dries’s site, puzzles.net23.net, under Bob Henderson.)

I’ll end this blog post with a small competition: who can make the puzzle requiring the most moves, subject to the following three rules?

1. Empty spaces must take up 1/5 of the total area of the puzzle, not including walls.

2. You may use any constraints you like, as long as you follow these rules.

3. The puzzle must fit inside a 20×20 grid.

You can email your submissions to me at techie314 [at+at-at] gmail [dot*dash/dash] com

I will reveal the winner and runners-up in a future blog post, the winner being the one whose puzzle requires the most moves.

For an example, here’s my own terrible attempt, which takes three moves:

I hope that you can come up with a better one.

# Welcome to My Blog!

This is my first blog post, and it is mainly to welcome you to my blog!

Welcome!