Is There Slack in Evolution?

In a recent blog post, Studies on Slack, Scott Alexander discusses how slack (particularly the rationalist conception of it) interacts with competition in various situations, and models it in terms of two-layered competitive systems. Many of his examples use evolution as the competitive system, and his very first example claims that in certain circumstances a useful adaptation is more likely to occur if there is less evolutionary pressure:

Imagine a distant planet full of eyeless animals. Evolving eyes is hard: they need to evolve Eye Part 1, then Eye Part 2, then Eye Part 3, in that order. Each of these requires a separate series of rare mutations. Here on Earth, scientists believe each of these mutations must have had its own benefits – in the land of the blind, the man with only Eye Part 1 is king. But on this hypothetical alien planet, there is no such luck. You need all three Eye Parts or they’re useless. Worse, each Eye Part is metabolically costly; the animal needs to eat 1% more food per Eye Part it has. An animal with a full eye would be much more fit than anything else around, but an animal with only one or two Eye Parts will be at a small disadvantage. So these animals will only evolve eyes in conditions of relatively weak evolutionary pressure. In a world of intense and perfect competition, where the fittest animal always survives to reproduce and the least fit always dies, the animal with Eye Part 1 will always die – it’s less fit than its fully-eyeless peers. The weaker the competition, and the more randomness dominates over survival-of-the-fittest, the more likely an animal with Eye Part 1 can survive and reproduce long enough to eventually produce a descendant with Eye Part 2, and so on. There are lots of ways to decrease evolutionary pressure. Maybe natural disasters often decimate the population, dozens of generations are spent recolonizing empty land, and during this period there’s more than enough for everyone and nobody has to compete. Maybe there are frequent whalefalls, and any animal nearby has hit the evolutionary jackpot and will have thousands of descendants. Maybe the population is isolated in little islands and mountain valleys, and one gene or another can reach fixation in a population totally by chance. It doesn’t matter exactly how it happens, it matters that evolutionary pressure is low.

Is this situation realistic?

The first part that struck me is that Scott presupposed that each of these mutations occurs separately. This would make sense in ordinary evolution, where each mutation provides a benefit and reaches fixation. But if each mutation is harmful on its own, why bother going through rounds of competition that the organism is going to lose? Why not just have all three Eye Parts arise by chance at once? It may seem absurd that an organism would spontaneously form a fully-fledged eye, but all that really indicates is that the entire pathway envisioned here is absurd: As creationists are so eager to point out, something as complex as a compound eye cannot just arise by chance. It requires evolution, which requires intermediate stages that are beneficial to the organism, exactly what has been posited not to occur. Imagine instead that we’re not actually talking about evolving eyes but about a much simpler adaptation that still goes against the flow of the fitness landscape. Then it becomes easier to imagine the entire adaptation occurring as a single mutation at once.

But actually, Scott is right: An adaptation like this would develop in stages rather than all at once. Let’s calculate this with some made-up numbers. I’ll suppose that each one of these Eye Parts has a 10^{-5} probability of arising by chance in any given individual. Multiply that over all three parts and you get a probability of 10^{-15} of all three mutations occurring at once. A mutation like that is only likely to occur in a species once 10^{15} organisms of that species have been born.

Let me emphasize the last sentence: The chance that a mutation will occur in a population depends on the number of individuals born into that population, or in other words the total number of individuals that have ever existed in the population. That is roughly the product of the population size and the number of generations the population has existed. In particular, larger populations acquire beneficial mutations faster. This will be important later.

Now, let’s compare this to developing eyes in stages. Suppose Eye Part 1 only reduces fitness by 1%. Then on average an individual with Eye Part 1 has 1% fewer offspring, so on average 0.99 of its offspring will also carry Eye Part 1. Summing the geometric series, if an individual acquires Eye Part 1 through a spontaneous mutation it is likely to have around 100 descendants with Eye Part 1. The chance that one of them will acquire Eye Part 2 is 10^2 \cdot 10^{-5} = 10^{-3}. If an individual with both Eye Parts 1 and 2 loses another 1% in fitness, then it will have around 50 descendants with both Eye Parts 1 and 2, assuming perfect genetic linkage. Then there’s a 50 \cdot 10^{-5} = 5 \times 10^{-4} chance that one of these descendants will acquire Eye Part 3. Multiplying these together, including the probability of the initial Eye Part 1 mutation, there is a 10^{-5} \cdot 10^{-3} \cdot (5 \times 10^{-4}) = 5 \times 10^{-12} chance per individual of eyes evolving. This is still very small, but significantly larger than the 10^{-15} odds of all mutations happening at once.

In contrast, imagine if each mutation were beneficial and immediately reached fixation. It takes about 10^5 individuals until one acquires Eye Part 1, and then immediately everyone has it. Afterwards it takes about 10^5 individuals each to evolve Eye Parts 2 and 3, a total of 3 \times 10^5. Since it takes time for traits to reach fixation, and even beneficial traits don’t always fix, the true expected number of individuals until eyes evolve is somewhat larger, but it is still much less than the numbers required in the two scenarios above where eye parts are maladaptive.
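To keep the three scenarios straight, here is a small Python sketch (mine, not Scott’s or the post’s) that just multiplies the made-up per-birth probabilities from above; it is back-of-the-envelope bookkeeping, not a population-genetics simulation, and the variable names are my own:

```python
# Rough comparison of the three scenarios above, using the made-up numbers
# from the text: it only multiplies the per-individual probabilities.

P_MUT = 1e-5          # chance of any single Eye Part mutation per birth

# Scenario 1: all three parts must appear in one individual at once.
p_all_at_once = P_MUT ** 3

# Scenario 2: parts appear in stages, each stage mildly deleterious (1% per part).
# A carrier lineage with k parts accumulates about 1/(0.01*k) carrier births.
descendants_with_1_part = 100      # geometric series with ratio 0.99
descendants_with_2_parts = 50      # geometric series with ratio 0.98
p_staged = (P_MUT                            # Eye Part 1 appears
            * descendants_with_1_part * P_MUT    # Part 2 appears among ~100 carriers
            * descendants_with_2_parts * P_MUT)  # Part 3 appears among ~50 carriers

# Scenario 3: each part is beneficial and fixes immediately; here the natural
# quantity is the expected number of births needed rather than a probability.
births_needed_beneficial = 3 / P_MUT

print(f"all at once: {p_all_at_once:.0e} per birth")           # 1e-15
print(f"staged, deleterious: {p_staged:.0e} per birth")        # 5e-12
print(f"staged, beneficial: ~{births_needed_beneficial:.0e} births needed")
```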

Okay, this is the model for evolving eyes in ordinary circumstances. But what if you add slack? Let’s look at Scott’s first scenario: A disaster wipes out most of the population and leaves behind a resource-rich environment for the survivors, so even individuals with below-average fitness can reproduce above replacement and not have their lineages die out. First of all, I want to question the notion that there is less “evolutionary pressure” here, whatever that means. Less fit individuals can propagate their genes when they wouldn’t have otherwise, but fitter individuals still propagate their genes even more. If by chance a high proportion of the population right after the disaster had some fitness-lowering gene, then by the time the population rebounded the gene would be much less frequent, because the fitter individuals without the gene repopulate faster than the less fit individuals with it. So it’s a matter of perspective: Are we looking at the absolute number of offspring and descendants an individual will have, or are we interested in the proportion of a trait relative to the whole population? If the former, then the population boom meaningfully reduces the evolutionary pressure; if the latter, then it doesn’t affect it at all.

But we’re not asking a broad qualitative question that could depend on the perspective we take; we’re asking a concrete question: Will this disaster-and-repopulation make evolving eyes more likely? Let’s think. Normally an individual with Eye Part 1 has around 100 descendants like it, each of which can develop Eye Part 2. During the time of plenty it can have many more than that. Sounds good?

Not so much, once you compare this with what every other individual is doing. If this individual has 100 relatedness-weighted descendants, so does every other individual during the time of plenty. If there is sexual reproduction this can be tricky to think about: The first individual with Eye Part 1 has more than 100 descendants, but only some retain Eye Part 1. Other individuals also typically have more than 100 descendants. However, each descendant has many ancestors, so it’s hard to count how many descendants there should be. It’s easier to count genes: Each descendant gene comes from exactly one ancestral gene. If the spontaneously formed Eye Part 1 gene spreads to 100 descendants, then in that time every other gene should also spread to 100 descendants, or a bit more, since the Eye Part 1 gene reduces fitness. Since we need individuals to store all of these genes — what else are individuals good for? — the population size increases by roughly a factor of one hundred.

Now remember what I said earlier about the effect of population size on evolution? If the population increased by a factor of one hundred during the recovery, that means that at the start — right after the disaster occurred — the population was one hundred times smaller than the ordinary stable population. That means that the mutation for Eye Part 1 was one hundred times less likely to occur in the first place! This nullifies any advantage gained from the plentiful conditions increasing the chance that Eye Part 2 develops. Overall, a disaster that decreases the population does not increase the speed at which eyes evolve.
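To make the cancellation explicit with the same made-up numbers (this bookkeeping is mine, not Scott’s): write N for the number of births over some stretch of time under normal conditions, and suppose the time of plenty multiplies a founder’s descendant count by 100 while the post-disaster population, and hence the number of births in which Eye Part 1 can first appear, is 100 times smaller. Then the expected number of Eye Part 1 founders whose line goes on to acquire Eye Part 2 is

\frac {N} {100} \cdot 10^{-5} \cdot \left( 100 \cdot 10^{2} \right) \cdot 10^{-5} = N \cdot 10^{-5} \cdot 10^{2} \cdot 10^{-5},

exactly the same as without the disaster: the two factors of 100 cancel.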

Lightspeed delays lead to multiple technological singularities

By Yudkowsky’s classification, I’m assuming the Accelerating Change Singularity: As technology gets better, the characteristic timescale on which technological progress is made becomes shorter, so that the time until this process reaches physical limits is short from the perspective of our current timescale. At a short enough timescale the lightspeed limit becomes important: When information cannot traverse the diameter of civilization in the time remaining until the singularity, further progress must be made independently in different regions. The subjective time from then on may still be large, and without communication the different regions can develop different interests and, after their singularities, compete. As the characteristic timescale becomes shorter, the independent regions split further.

You Must be a Coward to be Afraid of Losing Your Mind

I recently read two authors both make a similar kind of argument, which I disagree with.

Jacob Falkovich responds to a comment on his blog, about the insights gained from taking drugs:

Do you believe your own epistemology to be so fragile, your disbelief in woo so contingent, that they will crumble in the face of a few hours of altered consciousness?

A recent essay by Paul Graham, about having children:

On the other hand, what kind of wimpy ambition do you have if it won’t survive having kids? Do you have so little to spare?

Both excerpts accuse the reader of a lack of confidence in her principles if she is afraid some transformative experience will change them: If you are afraid drugs will make you believe in woo, you must not have much confidence in your current disbelief in woo. If you are afraid parenting will make you lose your ambition, you must not be very ambitious.

I don’t like the way these two excerpts shame being afraid of an external factor changing the way we think. My mind is fragile and I shouldn’t be afraid to admit it. A bullet can shut it down completely, and so many things can affect it in subtler ways I don’t understand.

For comparison, I don’t think either author would argue this way about more frequent and short-term events that change our motivation and reasoning. If someone worries that he’s unmotivated when he works at home, you don’t tell him that it’s not a problem as long as he cares enough about working; you tell him that he shows he cares about working by setting up an environment where he can be motivated.

Linear Logic, Topos Theory, and Algebraic Geometry

Note: The stuff below is some idle thought without much backing to it, and it’s extremely speculative. Don’t take it seriously.

Something seems right about linear logic: Both classical logic and intuitionistic logic embed in it, and although the sequent calculus loses its duality symmetry in passing from classical logic to intuitionistic logic, this symmetry is regained in linear logic. It seems to me like intuitionistic logic is more fundamental than classical logic, and although there’s no good theory for how to fully do mathematics in linear logic (as Bishop did in intuitionistic logic), I suspect that linear logic is more fundamental still.

The category of sheaves over a topological space or a site is a model of intuitionistic set theory. Conversely, given a model of intuitionistic set theory we can try to interpret it as a category of sheaves over some kind of space. We can construct a dictionary of logical equivalents of geometric concepts and vice versa. This is the core idea of topos theory. If linear logic is more fundamental than intuitionistic logic, then a natural generalization of this idea is to geometrically interpret models of linear dependent type theory to get a more general notion of space. As far as I’m aware, a good theory of linear dependent types doesn’t exist yet.

A speculative idea I have for how to apply this is that it could reconcile two conflicting intuitions in algebraic geometry. On one hand, the laws of algebra over a field say that if a \neq 0 and b \neq 0 then a b \neq 0. For an arbitrary commutative ring R this statement is no longer true, but it is still sometimes useful to think of it as holding in some “local” sense. For example, suppose R is the ring of polynomial functions on some variety X. For each point x \in X and f, g \in R the evaluations f (x), g (x) are elements of a field, and so if f (x) \neq 0 and g (x) \neq 0 then f (x) g (x) \neq 0. It follows that for any ideal I in R, if V (I) \subseteq D (f) and V (I) \subseteq D (g) then V (I) \subseteq D (f g). Finally, by the Nullstellensatz this is the same as saying that if I + (f) = I + (g) = R then I + (fg) = R, a purely algebraic fact that can be proven for an arbitrary commutative ring R.
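To spell out the last step (a standard argument, included here only for completeness): for an ideal I and an element f \in R,

V (I) \subseteq D (f) \iff V (I) \cap V (f) = \emptyset \iff V (I + (f)) = \emptyset \iff I + (f) = R,

where the final equivalence is the Nullstellensatz; applying this to f, g and fg turns the geometric statement into the algebraic one.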

On the other hand, scheme theory makes use of nilpotent elements of a ring and these elements contain significant geometric information. That is, we cannot say that x^2=0 is equivalent to x=0 since the schemes corresponding to these equations are different.

Perhaps this can be reconciled if we work in a logic where P \land P is not equivalent to P. Then we can say that x \neq 0 \land x \neq 0 implies x^2 \neq 0, but the former is not implied by x \neq 0 alone. Indeed we do not have such an implication for the multiplicative conjunction of linear logic (usually written \otimes). So maybe the right way to think of a scheme with nilpotent functions on it is as a space where the logic over the space is intrinsically linear.
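In sequent-calculus terms (a standard feature of linear logic, stated here just for concreteness), what is dropped is the contraction rule, so that

x \neq 0, \; x \neq 0 \; \vdash \; (x \neq 0) \otimes (x \neq 0) \qquad \text {but not} \qquad x \neq 0 \; \vdash \; (x \neq 0) \otimes (x \neq 0).

Knowing x \neq 0 once only licenses using it once, which is the behavior the nilpotent case seems to want.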

A somewhat natural average-case NP-complete problem

A sampling NP problem is determined by an NP language L and a distribution D that can be sampled in probabilistic polynomial time. The problem is to determine, for an x sampled from D, whether x \in L. This is a model for the average-case complexity of an NP problem. It is a little-known fact that there are “sampling-NP-complete” problems in a certain sense (definition here). Such problems are rarer and less likely to arise naturally than NP-complete problems, and so are less well known. An example of such a problem is: Given a Turing machine M, sampled from an exponential distribution on codes for Turing machines, and a uniformly random bitstring x, solve the 3SAT instance which is produced by running M on x for |x|^2 steps.

It strikes me that idealized forms of roguelike games such as NetHack could provide a somewhat natural example of a sampling-NP-complete problem, and this could give an intuitive picture of what such problems look like. In such a game, the player explores a procedurally generated map trying to reach a particular target. The player must overcome a diverse variety of obstacles and can acquire a diverse variety of special items to help them. To make this fit the sampling NP framework, assume the game is deterministic, has perfect information, and gives the player a limited number of turns to reach the target (none of which is the case in most real roguelike games). Then in an arbitrarily-sized world the player has a fixed probability of getting into the following situation: The player is trapped in some region of the map with only a particular fixed set of items. The effects of these items interact in a peculiar way that imitates running a nondeterministic Turing machine with features of the map as input. If the player can make the Turing machine accept then they can get to the next stage of the map, after which winning is routine.

Here’s a rough outline of a more formal model that fits this schema:

Consider a finite 2D lattice of locked boxes (this generalizes easily to the infinite lattice). You have keys to boxes (0,0) and (1,0) and you want to open box (2,0). Within each box is either a white or black token, sampled uniformly. Each box also has two compartments containing a blue token and a red token, built such that opening one compartment destroys the contents of the other. Finally, boxes (x,y) for x \geq 0 each contain special keys that can open any box under suitable conditions: specifically, for each such key there’s a sequence (x_0, y_0, c_0), \dots, (x_{n-1}, y_{n-1}, c_{n-1}) where x_i, y_i \in \mathbb {Z} and c_i \in \{\mathrm {black}, \mathrm {white}, \mathrm {blue}, \mathrm {red}\}, such that the key opens box (x,y) when given the c_i-colored token from box (x+x_i, y+y_i) for each i. Tokens are reusable across multiple keys. For each box the key is sampled from some probability distribution supported on all of the (countably many) possible keys; for example, n could be sampled from an exponential distribution, each c_i sampled uniformly, and x_i, y_i sampled from a discretized Gaussian distribution. Then the following happens with nonzero probability: The first two boxes have white tokens and the keys [(0, -1, \mathrm {white}), (1, -1, \mathrm {white})] and [(-1, -1, \mathrm {white}), (0, -1, \mathrm {white})]. This allows opening boxes (0,k) and (1,k) as long as both have white tokens. This continues for n layers, granting certain predetermined keys. Conditional on these keys being just right, the black token nucleates a pattern of open boxes with x<0 encoding the history of a 1D cellular automaton with both random and controllable inputs. This can emulate any NP problem with any sampling procedure. An ‘accept’ state can create a pattern of open boxes that spreads back to the start site and to (2,0).
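As a concrete illustration of the lock-and-key rule (my own sketch; the data layout and the example key are hypothetical, chosen only to match the description above):

```python
# Minimal sketch of the key rule described above. A key is a list of
# (dx, dy, color) requirements; it opens box (x, y) exactly when, for every
# requirement, we already hold a token of that color taken from box (x+dx, y+dy).
# Tokens are reusable, so one token can satisfy requirements of many keys.

Key = list[tuple[int, int, str]]            # [(dx, dy, color), ...]
Tokens = dict[tuple[int, int], set[str]]    # tokens we hold, keyed by the box they came from

def key_opens(key: Key, box: tuple[int, int], tokens: Tokens) -> bool:
    x, y = box
    return all(color in tokens.get((x + dx, y + dy), set())
               for dx, dy, color in key)

# Example (hypothetical numbers): the key [(0, -1, "white"), (1, -1, "white")]
# opens box (0, 1) once we hold the white tokens from boxes (0, 0) and (1, 0).
held: Tokens = {(0, 0): {"white"}, (1, 0): {"white"}}
print(key_opens([(0, -1, "white"), (1, -1, "white")], (0, 1), held))  # True
print(key_opens([(0, -1, "white"), (1, -1, "white")], (1, 1), held))  # False: needs (2, 0)
```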

Prefix-free codes and ordinals

Consider the problem of representing a number in computer memory, which is idealized as a sequence of zeros and ones. The binary number system is a well-known solution to this problem — for example, the sequence “01101” represents 0 \cdot 2^4 + 1 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 13. But there’s a problem: You don’t expect the entire computer to just be used to represent one number; you expect it to have other things stored afterwards. So how do you tell where the number ends? If the sequence begins 01101001\dots, does this represent the number 011_2 or 01101_2 or 0110100_2?

The solution to this problem most commonly used in practice is to declare in advance a fixed number of bits that will be used to represent the number, usually 32 bits or 64 bits. For example, if we fix a 5-bit representation then 01101001\dots always means that the number is 01101_2 = 13 and 001\dots represents other stuff in memory. This works well enough in practice, but it has a problem: The number of bits you set aside for storing the number forces an upper limit on how big the number can be. For example, you cannot store a number as big as 2^{64} in 64 bits. If the computer ever needs to store a number bigger than can be represented in the space set aside, then the computer fails.

I’ll introduce some terminology. The condition that a system of representing numbers is unambiguous can be phrased formally by saying that the method is a prefix-free code. A prefix-free code consists of a set of codewords, which are sequences of bits, such that no codeword is a prefix of another. A continuing stream of bits can be interpreted as a codeword by taking an initial segment that is a codeword, and by the prefix-free property at most one such interpretation is possible. Since we want this code to represent numbers, we also want a map from the codewords to the natural numbers (by which I mean including zero, naturally).
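As one classical example of such a code (not necessarily the construction this post goes on to describe), here is the Elias gamma code in Python: write n+1 in binary and announce its length in unary first, so a decoder reading a stream always knows where the codeword ends.

```python
# Elias gamma coding, shifted by one so that 0 is representable. This is a
# standard prefix-free code for the natural numbers, shown here only to
# illustrate the definition above.

def encode(n: int) -> str:
    """Encode a natural number (including 0) as a self-delimiting bitstring."""
    b = bin(n + 1)[2:]              # binary representation of n+1, no leading zeros
    return "0" * (len(b) - 1) + b   # unary length prefix, then the binary digits

def decode_stream(bits: str) -> tuple[int, str]:
    """Read one codeword from the front of a bit stream; return (number, rest)."""
    k = 0
    while bits[k] == "0":           # count the unary length prefix
        k += 1
    value = int(bits[k:2 * k + 1], 2) - 1
    return value, bits[2 * k + 1:]

stream = encode(13) + encode(0) + "0110100"   # two numbers, then unrelated data
n1, rest = decode_stream(stream)
n2, rest = decode_stream(rest)
print(n1, n2, rest)                           # 13 0 0110100
```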

Continue reading

Entropic Games

While trying to come up with mathematically interesting variants of the notion of a game, I’ve thought of this: What if each player was trying to maximize the entropy of their moves? You can imagine that each player has some message that she wants to transmit to some third party that can observe the move-history of the game but cannot receive any other information from the players. Then each player must balance the considerations of playing randomly to transmit more entropy at the moment versus playing moves that ensure that she’ll have a higher-entropy space of moves in the future.

For example, here are the rules for the entropic game Hot Potato: The game is played with one potato, which at each moment is possessed by one of the two players. On a player’s turn, if she holds the potato she may either keep it or pass it to the other player. If she does not have the potato she is forced to do nothing.

One strategy is for a player to give or keep the potato with even odds every time she has it. Then each move when she has the potato has an entropy of 1 bit, the maximum possible entropy for that move. However, this seems like a suboptimal strategy, since it doesn’t account for how keeping the potato is better than passing it: If you keep the potato you are guaranteed a choice between keeping it and passing it next turn, but if you pass it you risk a zero-entropy move next turn if your opponent doesn’t pass it back. Thus you should keep the potato with better than even odds. However, even if keeping the potato is the better move you shouldn’t keep it all the time — that is a zero-entropy strategy.

An ordinary perfect-information two-player game can be made into an equivalent entropic game by giving the winning player a large entropic reward. That is, the winner is given an extra turn on which she can recite a large amount of information, more than is needed to describe a full game history not counting this move. Then the entropic incentive to reach a winning state is stronger than the entropic incentive to randomize your moves before that, so players will mostly play as if they’re trying to win.

Here’s a formal model of entropic games: Suppose a player has n possible moves M_0, \dots, M_{n-1}. The player has analyzed that after playing M_i the expected entropy from all moves after that is S_i. Suppose she plays M_i with probability p_i. Then the total expected entropy is

S = \sum _{k=0} ^{n-1} p_k (S_k - \log p_k)

Optimizing this value, we get that

p_k = \frac {1} {Z} e^{S_k}
Z = \sum_{k=0}^{n-1} e^{S_k}
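(To spell out the optimization step, taking \log to be the natural logarithm: maximizing S subject to \sum_k p_k = 1 with a Lagrange multiplier \lambda gives

\frac {\partial} {\partial p_k} \left[ \sum _j p_j (S_j - \log p_j) - \lambda \Big( \sum _j p_j - 1 \Big) \right] = S_k - \log p_k - 1 - \lambda = 0,

so \log p_k = S_k - 1 - \lambda and p_k \propto e^{S_k}; normalizing gives the expression above.)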

This is a Boltzmann distribution with energies -S_k, like the probability distribution for a system in thermal equilibrium: moves that lead to more future entropy are exponentially favored. Now, the anticipated entropies S_k can be calculated by taking expected values assuming the opponent picks her moves from a distribution of the same form, and so on recursively.
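Here is a small Python sketch of this rule (my own illustration; the function name and the Hot Potato numbers in the usage example are made up):

```python
import math

def entropic_policy(future_entropies: list[float]) -> tuple[list[float], float]:
    """Given the anticipated future entropies S_k of each move, return the
    entropy-maximizing move distribution p_k ∝ e^{S_k} and the resulting
    total expected entropy S = sum_k p_k * (S_k - log p_k), in nats."""
    z = sum(math.exp(s) for s in future_entropies)
    probs = [math.exp(s) / z for s in future_entropies]
    total = sum(p * (s - math.log(p)) for p, s in zip(probs, future_entropies))
    return probs, total

# Made-up Hot Potato numbers: keeping the potato leads to a future worth 2.0 nats,
# passing it to a future worth 1.0 nat, so the player keeps with higher probability
# but still sometimes passes.
probs, total = entropic_policy([2.0, 1.0])
print(probs)   # roughly [0.73, 0.27]
print(total)   # roughly 2.31
```

A side observation (mine, not from the post): at the optimum the total entropy works out to \log \sum_k e^{S_k}, the logarithm of the normalizer Z, which is what the recursion over the game tree propagates upward.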

Unlike the theory of ordinary combinatorial games, this theory extends straightforwardly to more than two players. In ordinary game theory with three players there’s an indeterminacy in kingmaker positions, where a losing player must choose which of the two other players to make the winner. If a player faces two equally good moves in an entropic game, however, she will simply play each with equal probability.

On Extraterrestrial Life

I used to think that the interest astronomers searching for life on other planets take in exoplanets in the “Goldilocks zone” that can sustain liquid water, or in signs of water in this solar system, or in signs of complex organic molecules outside Earth, is misguided. After all, they’re looking for the materials that are the building blocks of life on this planet, but it’s perfectly possible that life on other planets is made out of something completely different, such as having ammonia as a substrate or silicon as the core constituent of complex molecules (and these are relatively mild modifications: they build life on the same general plan and just substitute one material for another). Perhaps searching for Earth-like life is the best scientists can do given the absence of any other information on what to expect aliens to be like, but we shouldn’t think these clues are actually the right guides for finding life on other planets.

However, thinking about this more carefully, I changed my mind. Let me explain:

Continue reading

Proof that DIGICOMP_EXP is in CC·#L

Three years ago, Scott Aaronson wrote the “paperlet” The Power of the Digi-Comp II: My First Conscious Paperlet as a blog post. In it, he studies models of computation based on the mechanical computer Digi-Comp II, which employs balls rolling down switches and toggles. Using the so-called “pebbles model”, he shows that a natural generalization of the Digi-Comp II to an arbitrary pattern of switches is equivalent in computational power to a complexity class called \mathsf {CC}. He also proposes a problem DIGICOMP_EXP, which represents simulating the Digi-Comp II with a polynomially large network and an exponentially large number of pebbles, and asks what this problem’s computational complexity is.

In the comments, I offered a proof that DIGICOMP_EXP is hard for \mathsf{PL} (and the same proof can be extended to show that it’s hard for \mathsf {CC} \cdot \mathsf {PL}, by following the simulation of \mathsf {PL} in DIGICOMP_EXP with an arbitrary \mathsf {CC} circuit). I then followed up with a proof that DIGICOMP_EXP is in \mathsf {CC} \cdot \# \mathsf {L}. Although Aaronson was impressed by my first result, he was unable to follow my second one and was unconvinced by it. I don’t blame him — at the time, I had difficulty with writing, and specifically with explaining my mathematical thinking. Now that I have gotten better at writing, and in particular have much more experience writing university-level problem sets in mathematics, I want to rewrite my second proof showing that DIGICOMP_EXP is in \mathsf {CC} \cdot \# \mathsf {L}, hoping that it will be comprehensible this time.

On a side note, I want to remark that although \mathsf {CC} is a fairly obscure complexity class, Aaronson’s post was not my first exposure to it, and my previous exposure was also in the context of a recreational mathematics problem: In the sandbox computer game Dwarf Fortress, many players have built computers within the game world. It turns out that the computation possible within a single timestep contains \mathsf {CC}, through a mechanism that does not obviously extend to all of \mathsf {P}. For details, see my forum post here and my Stack Exchange question here.

Now, the proof:


In the following, I let the variable names X, Y, Z denote nodes in the computational graph of the pebbles model. We denote that X splits into Y and Z by writing X \to (Y, Z), and that X and Y merge into Z by (X, Y) \to Z. Let C be the function that maps a node to how many pebbles pass through it in the DIGICOMP_EXP computation. Then C is some suitable (exponentially large) constant at the starting node. Given a split X \to (Y, Z), we have C (Y) = \lceil C (X)/2 \rceil and C (Z) = \lfloor C (X)/2 \rfloor. Given a merge (X, Y) \to Z, we have C (Z) = C (X) + C (Y).

Now define another function W mapping nodes to dyadic rationals as follows: W maps the starting node to the same value as C. At a merge (X, Y) \to Z we have W (Z) = W (X) + W (Y) like with C. Finally, at a split X \to (Y, Z) we have W (Y) = W (Z) = W (X)/2. In words, W is like C except that a splitter always splits the pebbles evenly between the two piles, even if this leads to fractional pebbles.

Observe that W (X) can be expressed as a weighted count of the paths from the starting node to X, where each path is weighted by 2 to the power of minus the number of splitters along it, times the starting constant. It follows that the binary digits of W are computable in \# \mathsf {L}. Next, I will show that C is \mathsf {CC}-computable in terms of W, and therefore DIGICOMP_EXP is in \mathsf {CC} \cdot \# \mathsf {L}.
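Explicitly, writing C_0 for the pebble count at the starting node (my notation),

W (X) = C_0 \sum _{p \,:\, \text {start} \to X} 2^{- (\text {number of splitters traversed along } p)},

where the sum ranges over directed paths p from the starting node to X.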

Define R (X) = C (X) - \lfloor W (X) \rfloor. Then R is 0 at the starting node. I will now study how R behaves under splits and merges:

Consider a split X \to (Y, Z). Let c = \lfloor W (X) \rfloor \mod 2 be the binary units digit of \lfloor W (X) \rfloor. Then

\lfloor W (X) \rfloor = 2 \left\lfloor \frac {\lfloor W (X) \rfloor} {2} \right\rfloor + c = 2 \lfloor W (X) / 2 \rfloor + c

It follows that

C (Y) = \left\lceil \frac {\lfloor W (X) \rfloor + R (X)} {2} \right\rceil = \left\lceil \lfloor W (X)/2 \rfloor + \frac {R (X) + c} {2} \right\rceil = \lfloor W (Y) \rfloor + \left\lceil \frac {R (X) + c} {2} \right\rceil

and so R (Y) = \lceil (R (X) + c)/2 \rceil. By a similar argument we have R (Z) = \lfloor (R (X) + c)/2 \rfloor.

Next, consider a merge (X, Y) \to Z. Observe that \lfloor W (Z) \rfloor = \lfloor W (X) + W (Y) \rfloor = \lfloor W (X) \rfloor + \lfloor W (Y) \rfloor + \lfloor \{W (X)\} + \{W (Y)\} \rfloor. Since C (Z) = C (X) + C (Y), it follows that R (Z) = R (X) + R (Y) - \lfloor \{W (X)\} + \{W (Y)\} \rfloor. Note that \lfloor \{W (X)\} + \{W (Y)\} \rfloor is the units digit of the sum of the fractional parts of W (X) and W (Y) (that is, the carry into the integer part), and so is \mathsf {CC}-computable in terms of W because binary addition is \mathsf {CC}-computable.
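As a numerical sanity check on these recurrences (my own check, not part of the proof), here is a small Python simulation of C, W, and R on a toy split/merge circuit; the node names, circuit layout, and starting pebble count are arbitrary:

```python
# Numerically check that R = C - floor(W) stays small on a toy circuit.
from fractions import Fraction
import math

START_PEBBLES = 10**9 + 7   # stands in for the exponentially large constant

# Each operation is ("split", X, Y, Z): X splits into Y and Z,
# or ("merge", X, Y, Z): X and Y merge into Z, listed in topological order.
circuit = [
    ("split", "start", "a", "b"),
    ("split", "a", "c", "d"),
    ("merge", "b", "c", "e"),
    ("split", "e", "f", "g"),
    ("merge", "d", "f", "h"),
]

C = {"start": START_PEBBLES}            # exact pebble counts
W = {"start": Fraction(START_PEBBLES)}  # "fractional pebble" counts

for op, *nodes in circuit:
    x, y, z = nodes
    if op == "split":
        C[y], C[z] = (C[x] + 1) // 2, C[x] // 2   # ceiling and floor of C(x)/2
        W[y] = W[z] = W[x] / 2
    else:
        C[z] = C[x] + C[y]
        W[z] = W[x] + W[y]

R = {v: C[v] - math.floor(W[v]) for v in C}
print(R)   # every value is 0 or 1 here; the proof only needs a polynomial bound
```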

In conclusion, it is possible to compute the R value of an arbitrary node using the operations R \mapsto (\lceil R/2 \rceil, \lfloor R/2 \rfloor), (R_0, R_1) \mapsto R_0+R_1 and R \mapsto R+c, where c is a small integer that’s \mathsf{CC}-computable given the values of W, and this computation does not need any fanout. Moreover, R (X) has a polynomial bound. This is like the pebbles model, except that the number of pebbles may be negative. This integral pebbles model may be emulated by the ordinary pebbles model by representing a possibly negative amount x in an integral pebble pile by x+N in an ordinary pebble pile, where N is an even number large enough to guarantee that x+N is nonnegative. Then the splitting and merging operations on x are represented by the same operations on x+N, followed by adding or subtracting an appropriate constant. Finally, the output of the DIGICOMP_EXP computation is given by whether C (Z) = \lfloor W (Z) \rfloor + R (Z) is positive for some terminal node Z, which is \mathsf {CC}-computable in terms of the binary digits of W (Z) and the value R (Z) + N in unary.