# An Idea for Improving Hashlife

(This was written with the priority of making sure my thoughts don’t just stay in my head forever, rather than of explaining anything well. Expect some parts to be cryptic or badly phrased.)

Hashlife is currently the best algorithm for simulating large structured patterns in Conway’s Game of Life and other cellular automata for long periods of time. It is described here. Basically, it is a memoized recursive algorithm for computing the evolution of a $2^n \times 2^n$ block.

Memoization means that whenever the algorithm encounters a $2^n \times 2^n$ block that it has seen before, it can instantly reuse the answer it previously computed. This is what gives the algorithm its power. On the other hand, the algorithm can only detect a repeat if the two configurations are aligned exactly the same on the $2^n \times 2^n$ blocks into which it divides the grid. In other words, it doesn’t take full advantage of translational symmetry: it only exploits translations by a multiple of the block size. Because Hashlife calculates many time-steps of a pattern at once, there is a similar alignment problem in time.
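As a toy illustration of the memoization (the code and names here are my own sketch, not from any real Hashlife implementation), here is the base case in Python: computing the 2×2 center of a 4×4 block after one generation of Life, cached by block content:

```python
from functools import lru_cache

# Toy sketch of Hashlife's memoized base case. The block is a tuple of
# tuples of 0/1, so it is hashable, which is what lets lru_cache reuse
# previously computed answers for blocks it has seen before.

@lru_cache(maxsize=None)
def center_after_one_step(block):
    def next_state(r, c):
        neighbors = sum(block[r + dr][c + dc]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
        # Life rule: born with 3 neighbors, survives with 2 or 3
        return 1 if neighbors == 3 or (block[r][c] and neighbors == 2) else 0
    return tuple(tuple(next_state(r, c) for c in (1, 2)) for r in (1, 2))

# A horizontal blinker in row 1; after one step it turns vertical.
blinker = ((0, 0, 0, 0),
           (1, 1, 1, 0),
           (0, 0, 0, 0),
           (0, 0, 0, 0))
print(center_after_one_step(blinker))  # ((1, 0), (1, 0))
```

The real algorithm recurses: the result for a $2^n \times 2^n$ block is assembled from overlapping half-size results, with every distinct block looked up in a hash table first.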

For example, the Caterpillar is a humongous spaceship that moves forward 17 tiles every 45 steps. It has a lot of repeating components, but they all move at this speed, so they are rarely in the same alignment. On patterns like this, Hashlife runs really slowly.

So I’ve been thinking about how to make a better version of Hashlife which doesn’t have these constraints. The problem, then, is to recognize a pattern that was seen previously with a different alignment. The first idea I came up with is to use what I call a translation-invariant hash. If you take this hash of two blocks of tiles that almost completely overlap, the function should return the same or a similar value. Clearly this is not a good hash function from the conventional point of view, but it is very useful here: if you build a hash table keyed by a translation-invariant hash, then a lookup for a block B could also return a block B’ which contains a translate of B. This means you can find that the same pattern was already calculated even if it is out of alignment.

Here is a simple example of a translation-invariant hash: let H be an ordinary hash function on 8×8 blocks. For some large block B, define $H_T (B)$ to be the sum of $H (X)$ over every 8×8 sub-block X contained in B. Then a translated block will only differ in the hashes of the sub-blocks near the boundary, which on a large block are a minority. By truncating the last digits of this sum you get a hash that is completely identical for most small translations.
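A minimal sketch of this construction (my own toy code; Python’s built-in hash of a tuple of ints stands in for the ordinary hash H):

```python
# Translation-invariant hash H_T: the sum of an ordinary hash over
# every 8x8 window of the block.

def windows(grid, size=8):
    n = len(grid)
    for y in range(n - size + 1):
        for x in range(n - size + 1):
            yield tuple(row[x:x + size] for row in grid[y:y + size])

def H_T(grid):
    return sum(hash(w) for w in windows(grid))

def place(blob, y0, x0, n=32):
    # Put a small object on an otherwise empty n x n grid.
    grid = [[0] * n for _ in range(n)]
    for dy, row in enumerate(blob):
        for dx, v in enumerate(row):
            grid[y0 + dy][x0 + dx] = v
    return [tuple(row) for row in grid]

blob = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]  # some small object
a = place(blob, 10, 10)
b = place(blob, 11, 13)  # the same object, translated by (1, 3)

# With an empty background and enough margin, the multiset of 8x8
# windows is exactly the same, so the sums agree without any
# truncation; with busy boundaries the sums differ only by the
# windows crossing the boundary, which is what the truncation of
# the last digits smooths over.
print(H_T(a) == H_T(b))  # True
```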

Now, one problem that can come up is: once we have found two blocks that are approximately translates, how do we tell by how much one is a translate of the other? In this case there is an easy method. Alongside the function $H_T$, one can also calculate two other functions $H_{T X}$, $H_{T Y}$, such that $H_{T X} (B)$ (respectively, $H_{T Y} (B)$) is the sum of $x H (C)$ (resp. $y H (C)$), where $C$ ranges over the 8×8 blocks contained in $B$ whose northwest corner has coordinates $(x, y)$. (Here 8 is an arbitrarily chosen number, in this case because it’s a small power of 2.) Then if $B$ and $B'$ satisfy $H_T (B) \sim H_T (B')$ and they really are close translates, the position of $B'$ relative to $B$ is approximately $\left(\frac {H_{T X} (B') - H_{T X} (B)} {H_T (B)}, \frac {H_{T Y} (B') - H_{T Y} (B)} {H_T (B)}\right)$.

So the data structure for a block B will store, along with $H_T (B)$, these “integral hashes” $H_{T X} (B)$ and $H_{T Y} (B)$.
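Here is a toy sketch of this offset recovery (my own code; one assumption beyond the text is that I make H map empty windows to 0, so the empty background drops out of all three sums):

```python
# H_T sums H over all 8x8 windows; H_TX and H_TY additionally weight
# each window's hash by the x or y coordinate of its corner.

def H(window):
    if not any(any(row) for row in window):
        return 0  # assumption: empty windows contribute nothing
    return (hash(window) & 0xFFFFFFFF) + 1  # positive, deterministic

def moments(grid, size=8):
    n = len(grid)
    ht = htx = hty = 0
    for y in range(n - size + 1):
        for x in range(n - size + 1):
            h = H(tuple(row[x:x + size] for row in grid[y:y + size]))
            ht += h
            htx += x * h
            hty += y * h
    return ht, htx, hty

def place(blob, y0, x0, n=32):
    grid = [[0] * n for _ in range(n)]
    for dy, row in enumerate(blob):
        for dx, v in enumerate(row):
            grid[y0 + dy][x0 + dx] = v
    return [tuple(row) for row in grid]

blob = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
ht, htx, hty = moments(place(blob, 10, 10))
ht2, htx2, hty2 = moments(place(blob, 12, 13))  # translated by (2, 3)

# Each object window's hash moves from coordinate x to x + dx, so the
# weighted sums shift by exactly dx * H_T (and dy * H_T).
dx = round((htx2 - htx) / ht)
dy = round((hty2 - hty) / ht)
print(dx, dy)  # 3 2
```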

I will not discuss here how to take advantage of the overlapping blocks found this way to speed up the computation of the cellular automaton.

This in itself may already be an improvement (I haven’t written any code, so I can’t benchmark it), but $H_T$ has some weaknesses. The problem is that it is way too loose. It produces a collision for two overlapping blocks, but it also produces collisions in loads of other situations. For instance, it produces an almost identical value for the empty block and for an almost empty block containing one small object. These two are closer to each other than most pairs of overlapping blocks, which are the things that are supposed to collide. Worse, if there are two small objects far away from each other on an otherwise empty block, then $H_T$ returns an exactly identical hash no matter how they are positioned relative to each other. For any algorithm based on this hash function to work, it is necessary to verify that a block found through the hash table actually overlaps. This adds to the computation time.
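A quick toy demonstration of this exact collision (my own code): moving one of two far-apart objects only permutes which windows contain it, so the sum over all windows is unchanged:

```python
# Two configurations that are NOT translates of each other, yet have
# exactly equal H_T, because no 8x8 window ever touches both objects.

def H_T(grid, size=8):
    n = len(grid)
    return sum(hash(tuple(row[x:x + size] for row in grid[y:y + size]))
               for y in range(n - size + 1) for x in range(n - size + 1))

def two_objects(x2, n=48):
    grid = [[0] * n for _ in range(n)]
    for (y, x) in [(10, 10), (11, 11), (12, 10)]:      # object 1, fixed
        grid[y][x] = 1
    for (y, x) in [(10, x2), (11, x2 + 1), (12, x2)]:  # object 2, movable
        grid[y][x] = 1
    return [tuple(row) for row in grid]

a = two_objects(30)
b = two_objects(34)  # second object moved 4 tiles east
print(a != b, H_T(a) == H_T(b))  # True True
```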

The problem is that the hash function is too local: it only sees the 8×8-scale structure of a block. Perhaps a better idea would be to use larger subregions, for instance, to sum the hashes of the $\sqrt{N} \times \sqrt{N}$ subregions when the block is $N \times N$. However, this would take too long to compute (asymptotically $O (N^3)$, around the same as it would take to calculate the evolution of the pattern directly for $O (N)$ steps). Instead, it would be better to look at the hashes of only some of the subregions, chosen in a translation-invariant way. Here is my second idea: define an $i$-focal point as follows:

• Every point is 0-focal.
• The $i$-hash of an $i$-focal point is the (ordinary) hash of the $2^{i+3} \times 2^{i+3}$ rectangle which has that point as its southwest corner. This rectangle is called the region associated with that point.
• An $(i+1)$-focal point is an $i$-focal point whose $i$-hash is greater than the $i$-hash of every other $i$-focal point up to $2^{i+3}$ tiles south and up to $2^{i+3}$ tiles east of it.

Then considering only the $i$-hashes of $i$-focal points is translation-invariant and feasible to compute.
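Here is a sketch of one possible reading of this definition (my own code and interpretation; it only promotes level 0 to level 1, reusing the 0-hashes, whereas higher levels would recompute hashes over the larger $2^{i+3} \times 2^{i+3}$ windows):

```python
# 0-focal points are all points whose 8x8 window fits in the grid; a
# point is promoted to the next level if its hash beats every other
# current focal point up to 2^(i+3) tiles south and east of it.

def focal_points(grid, levels=1):
    n = len(grid)
    size = 8  # 2^(0+3)
    pts = {(y, x): hash(tuple(row[x:x + size] for row in grid[y:y + size]))
           for y in range(n - size + 1) for x in range(n - size + 1)}
    for i in range(levels):
        reach = 8 * 2 ** i  # 2^(i+3)
        pts = {p: h for p, h in pts.items()
               if all(h > h2 for q, h2 in pts.items()
                      if q != p and p[0] <= q[0] <= p[0] + reach
                      and p[1] <= q[1] <= p[1] + reach)}
        # (a fuller implementation would recompute hashes over the
        #  larger windows here before the next level; omitted)
    return set(pts)

def place(blob, y0, x0, n=48):
    grid = [[0] * n for _ in range(n)]
    for dy, row in enumerate(blob):
        for dx, v in enumerate(row):
            grid[y0 + dy][x0 + dx] = v
    return [tuple(row) for row in grid]

blob = [(1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 1), (1, 0, 0, 0)]
fa = focal_points(place(blob, 16, 16))
fb = focal_points(place(blob, 18, 19))  # translated by (2, 3)

# Away from the grid border, a point's focality depends only on a local
# neighborhood, so the focal points translate exactly with the pattern.
inner_a = {p for p in fa if p[0] <= 29 and p[1] <= 29}
shifted_b = {(y - 2, x - 3) for (y, x) in fb if 2 <= y <= 31 and 3 <= x <= 32}
print(inner_a == shifted_b)  # True
```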

However, once we have these $i$-focal points there’s something even better we can do. Remember that the goal of the whole translation-invariant hash was to be able to recognize a pattern we’ve already encountered even when it’s translated. These $i$-focal points and their corresponding hashes do the job even better: the same region will have the same $i$-focal points no matter how it is translated, and no coarse-graining from any averaging process is necessary. So it is a good idea to make a hash table caching all the regions associated with $i$-focal points to recognize translates, and to drop the original averaging idea with translation-invariant hashes entirely. However, I only came up with that simplification while writing this up, so I decided to still include the original idea, even though I know that makes this description pretty messy.

All this doesn’t mention time. The original Hashlife also evaluates blocks for $2^n$ generations at once, which causes temporal alignment problems similar to the spatial alignment problems I’ve already discussed. I expect pretty much the same solutions to work here. These are really just general ideas for recognizing patterns in n-dimensional space, and they should still work when time is added as a coordinate.

Actually, the translation only needs to be a multiple of half the block size, due to how Hashlife calculates the areas between the blocks.