# On Extraterrestrial Life

I used to think that astronomers seeking life on other planets were misguided in their interest in exoplanets in the “Goldilocks zone” that can sustain liquid water, in signs of water in this solar system, and in signs of complex organic molecules outside Earth. After all, they’re looking for the materials that are the building blocks of life on this planet, but it’s perfectly possible that life on other planets is made of something completely different, such as using ammonia as a substrate or silicon as the core constituent of complex molecules (and these are relatively mild modifications: they build life on the same general plan and just substitute one material for another). Perhaps searching for Earth-like life is the best scientists can do in the absence of any other information about what to expect aliens to be like, but we shouldn’t think these clues are actually the right guides for finding life on other planets.

# Proof that DIGICOMP_EXP is in CC·#L

Three years ago, Scott Aaronson wrote a “paperlet” in the form of a blog post titled The Power of the Digi-Comp II: My First Conscious Paperlet. In it, he studies models of computation based on the mechanical computer Digi-Comp II, which employs balls rolling down through switches and toggles. Using the so-called “pebbles model”, he shows that a natural generalization of Digi-Comp II to an arbitrary pattern of switches is equivalent in computational power to a complexity class called $\mathsf {CC}$. He also proposes a problem, DIGICOMPEXP, which consists of simulating a Digi-Comp II with a polynomially large network and an exponentially large number of pebbles, and asks what this problem’s computational complexity is.

In the comments, I offered a proof that DIGICOMPEXP is hard for $\mathsf{PL}$ (and the same proof can be extended to show that it’s hard for $\mathsf {CC} \cdot \mathsf {PL}$, by following the simulation of $\mathsf {PL}$ in DIGICOMPEXP with an arbitrary $\mathsf {CC}$ circuit). I then followed up with a proof that DIGICOMPEXP is in $\mathsf {CC} \cdot \# \mathsf {L}$. Although Aaronson was impressed by my first result, he was unable to follow my second proof and remained unconvinced. I don’t blame him — at the time, I had difficulty with writing, and specifically with explaining my mathematical thinking. Now that I have gotten better at writing, and in particular have much more experience writing university-level mathematics problem sets, I want to rewrite my second proof, that DIGICOMPEXP is in $\mathsf {CC} \cdot \# \mathsf {L}$, in the hope that it will be comprehensible this time.

On a side note, although $\mathsf {CC}$ is a fairly obscure complexity class, Aaronson’s post was not my first exposure to it, and my previous exposure was also in the context of a recreational mathematics problem: in the sandbox computer game Dwarf Fortress, many players have built computers within the game world. It turns out that the class of computations possible within a single timestep contains $\mathsf {CC}$, through a mechanism that does not obviously extend to all of $\mathsf {P}$. For details, see my forum post here and my Stack Exchange question here.

Now, the proof:

In the following, I let the variable names $X, Y, Z$ denote nodes in the computational graph of the pebbles model. We denote that $X$ splits into $Y$ and $Z$ by writing $X \to (Y, Z)$, and that $X$ and $Y$ merge into $Z$ by $(X, Y) \to Z$. Let $C$ be the function that maps a node to the number of pebbles that pass through it in the DIGICOMPEXP computation. Then $C$ takes some suitable (exponentially large) constant value at the starting node. Given a split $X \to (Y, Z)$, we have $C (Y) = \lceil C (X)/2 \rceil$ and $C (Z) = \lfloor C (X)/2 \rfloor$. Given a merge $(X, Y) \to Z$, we have $C (Z) = C (X) + C (Y)$.
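As a concrete illustration, here is a minimal sketch of the pebbles model in Python. The gate layout, node names, and starting pebble count are made up for illustration; gates are listed in topological order.

```python
# Toy simulation of the pebbles model.  A split sends ceil(C(X)/2)
# pebbles one way and floor(C(X)/2) the other; a merge adds its inputs.

def run_pebbles(start, gates):
    """Return a dict mapping each node name to C(node), its pebble count."""
    C = {'start': start}
    for kind, x, y, z in gates:
        if kind == 'split':               # X -> (Y, Z)
            C[y] = (C[x] + 1) // 2        # ceil(C(X)/2)
            C[z] = C[x] // 2              # floor(C(X)/2)
        else:                             # merge (X, Y) -> Z
            C[z] = C[x] + C[y]
    return C

counts = run_pebbles(7, [('split', 'start', 'a', 'b'),
                         ('split', 'a', 'c', 'd'),
                         ('merge', 'b', 'c', 'e')])
# counts == {'start': 7, 'a': 4, 'b': 3, 'c': 2, 'd': 2, 'e': 5}
```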

Now define another function $W$ mapping nodes to dyadic rationals as follows: $W$ maps the starting node to the same value as $C$. At a merge $(X, Y) \to Z$ we have $W (Z) = W (X) + W (Y)$ like with $C$. Finally, at a split $X \to (Y, Z)$ we have $W (Y) = W (Z) = W (X)/2$. In words, $W$ is like $C$ except that a splitter always splits the pebbles evenly between the two piles, even if this leads to fractional pebbles.

Observe that $W (X)$ can be expressed as a weighted count of the paths from the starting node to $X$, where each path is weighted by $2^{-s}$, with $s$ the number of splitters along the path. It follows that the binary digits of $W$ are computable in $\# \mathsf {L}$. Next, I will show that $C$ is $\mathsf {CC}$-computable in terms of $W$, and therefore DIGICOMPEXP is in $\mathsf {CC} \cdot \# \mathsf {L}$.
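This equivalence can be checked numerically. The sketch below (on a hypothetical toy circuit; node names are arbitrary) computes $W$ both by the recursion above, in exact dyadic arithmetic via `fractions.Fraction`, and as a weighted path count where each edge leaving a splitter carries a factor of $1/2$. The two agree on every node.

```python
from fractions import Fraction

circuit = [('split', 'start', 'a', 'b'),
           ('split', 'a', 'c', 'd'),
           ('merge', 'b', 'c', 'e')]

def w_recursive(start, gates):
    """W via its defining recursion: splits halve exactly, merges add."""
    W = {'start': Fraction(start)}
    for kind, x, y, z in gates:
        if kind == 'split':
            W[y] = W[z] = W[x] / 2
        else:
            W[z] = W[x] + W[y]
    return W

def w_by_paths(start, gates, target):
    """W as a weighted path count: each splitter on a path contributes 1/2."""
    edges = {}
    for kind, x, y, z in gates:
        if kind == 'split':
            edges.setdefault(x, []).extend([(y, Fraction(1, 2)),
                                            (z, Fraction(1, 2))])
        else:
            edges.setdefault(x, []).append((z, Fraction(1)))
            edges.setdefault(y, []).append((z, Fraction(1)))
    total, stack = Fraction(0), [('start', Fraction(start))]
    while stack:                          # enumerate all start-to-target paths
        node, weight = stack.pop()
        if node == target:
            total += weight
        else:
            for nxt, factor in edges.get(node, []):
                stack.append((nxt, weight * factor))
    return total

W = w_recursive(7, circuit)
assert all(w_by_paths(7, circuit, v) == W[v] for v in W)
```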

Define $R (X) = C (X) - \lfloor W (X) \rfloor$. Then $R$ is 0 at the starting node. I will study how $R$ behaves in the operations of splits and merges:

Consider a split $X \to (Y, Z)$. Let $c = \lfloor W (X) \rfloor \mod 2$ be the binary units digit of $W (X)$. Then

$\lfloor W (X) \rfloor = 2 \left\lfloor \frac {\lfloor W (X) \rfloor} {2} \right\rfloor + c = 2 \lfloor W (X) / 2 \rfloor + c$

It follows that

$C (Y) = \left\lceil \frac {\lfloor W (X) \rfloor + R (X)} {2} \right\rceil = \left\lceil \lfloor W (X)/2 \rfloor + \frac {R (X) + c} {2} \right\rceil = \lfloor W (Y) \rfloor + \left\lceil \frac {R (X) + c} {2} \right\rceil$

and so $R (Y) = \lceil (R (X) + c)/2 \rceil$. By a similar argument we have $R (Z) = \lfloor (R (X) + c)/2 \rfloor$.

Next, consider a merge $(X, Y) \to Z$. Observe that $\lfloor W (Z) \rfloor = \lfloor W (X) + W (Y) \rfloor = \lfloor W (X) \rfloor + \lfloor W (Y) \rfloor + \lfloor \{W (X)\} + \{W (Y)\} \rfloor$. Since $C (Z) = C (X) + C (Y)$, it follows that $R (Z) = R (X) + R (Y) - \lfloor \{W (X)\} + \{W (Y)\} \rfloor$. Note that $\lfloor \{W (X)\} + \{W (Y)\} \rfloor$ is the unit digit of the sum of the fractional parts of $W (X)$ and $W (Y)$, and so is $\mathsf {CC}$-computable in terms of $W$ because binary addition is $\mathsf {CC}$-computable.
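The split and merge identities for $R$ just derived can be verified numerically. The sketch below (toy circuit and pebble counts are hypothetical) runs the exact recursions for $C$ and $W$ side by side and asserts the claimed update rules for $R(X) = C(X) - \lfloor W(X) \rfloor$ at every gate.

```python
import math
from fractions import Fraction

def check_R(start, gates):
    """Assert the split/merge update rules for R on every gate; return True."""
    C = {'start': start}
    W = {'start': Fraction(start)}
    R = lambda n: C[n] - math.floor(W[n])
    frac = lambda q: q - math.floor(q)        # fractional part of a Fraction
    for kind, x, y, z in gates:
        if kind == 'split':                    # X -> (Y, Z)
            C[y], C[z] = (C[x] + 1) // 2, C[x] // 2
            W[y] = W[z] = W[x] / 2
            c = math.floor(W[x]) % 2           # binary units digit of floor(W(X))
            assert R(y) == (R(x) + c + 1) // 2  # ceil((R(X) + c) / 2)
            assert R(z) == (R(x) + c) // 2      # floor((R(X) + c) / 2)
        else:                                  # merge (X, Y) -> Z
            C[z] = C[x] + C[y]
            W[z] = W[x] + W[y]
            carry = math.floor(frac(W[x]) + frac(W[y]))
            assert R(z) == R(x) + R(y) - carry
    return True

circuit = [('split', 'start', 'a', 'b'),
           ('split', 'a', 'c', 'd'),
           ('merge', 'b', 'c', 'e'),
           ('split', 'e', 'f', 'g'),
           ('merge', 'd', 'f', 'h')]
assert all(check_R(n, circuit) for n in range(1, 100))
```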

In conclusion, it is possible to compute the $R$ value of an arbitrary node using the operations $R \mapsto (\lceil R/2 \rceil, \lfloor R/2 \rfloor)$, $(R_0, R_1) \mapsto R_0 + R_1$ and $R \mapsto R + c$, where $c$ is a small integer that is $\mathsf{CC}$-computable given the values of $W$, and this computation does not need any fanout. Moreover, $R (X)$ has a polynomial bound. This is like the pebbles model, except that the number of pebbles may be negative.

This integral pebbles model may be emulated by the ordinary pebbles model by representing a possibly negative amount $x$ in an integral pebble pile by $x + N$ in an ordinary pebble pile, where $N$ is an even number large enough to guarantee that $x + N$ is nonnegative. Then the splitting and merging operations on $x$ are represented by the same operations on $x + N$, followed by adding or subtracting an appropriate constant.

Finally, the output of the DIGICOMPEXP computation is given by whether $C (Z) = \lfloor W (Z) \rfloor + R (Z)$ is positive for some terminal node $Z$, which is $\mathsf {CC}$-computable in terms of the binary digits of $W (Z)$ and $R (Z) + N$ in unary.
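The offset trick in the emulation step can be sanity-checked directly. In the sketch below (the value of $N$ and the sample values are arbitrary), a possibly negative pile value $x$ is stored as the nonnegative count $x + N$ for a fixed, large, even $N$; a split of the stored pile over-counts each output by exactly $N/2$, and a merge over-counts by $N$, so a constant correction after each gate recovers the true value.

```python
N = 1_000_000  # even, and large enough that every x + N below is nonnegative

for x in range(-500, 501):
    stored = x + N
    hi, lo = (stored + 1) // 2, stored // 2   # split the stored pile
    assert hi - N // 2 == -(-x // 2)          # correction recovers ceil(x/2)
    assert lo - N // 2 == x // 2              # correction recovers floor(x/2)

for x, y in [(-3, 7), (5, -9), (-2, -2), (0, 0)]:
    stored = (x + N) + (y + N)                # merge the stored piles
    assert stored - N == (x + y) + N          # subtracting one N corrects it
```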

# The Official Rules for Modal Combat

1 A game of Modal Combat begins when there is common knowledge among each of the players that they would like to play Modal Combat.

1.1 The above rule does not apply when a subset of the players have already begun a game of Modal Combat. However, any person can join an ongoing game by reaching common knowledge with the current players that they would like to join.

1.2 In the Nomic variant of Modal Combat, additional rules are created when there is common knowledge among the players that such rules are necessary.

2 It is common knowledge among the players that all players obey the rules of Modal Combat.

2.1 In particular, the rules of Modal Combat are common knowledge.

2.1.1 If at any point the rules of Modal Combat contain ambiguities or offer difficulties in their interpretation or application, then the proper way to resolve these difficulties or ambiguities is common knowledge.

3 All players are required to reason in a perfectly rational manner at all times.

3.1 All players must act in accordance with their personal self-interest to the maximal extent allowed by the rules of the game.

4 To avoid any unsportsmanlike play, all players are required to disclose any remedy, immunity (innate or acquired), or antidote to any poison, elixir, or powder before the first turn of the game.

5 Any player may view the source code for any other player.

5.1 However, each player can change his or her source code at any time.

6 Each player must either:

(i) Always tell the truth.

(ii) Always lie.

(iii) Always randomly decide between telling the truth or lying based on a perfectly fair and unpredictable coin toss.

7 In a prisoner’s dilemma, a player must cooperate if and only if the player can prove that his or her opponent can prove that the player can prove that if the player’s opponent cooperates then the player will cooperate if and only the opponent is following this rule.

8 One of the players is designated the barber. The barber must shave every player and only those players that don’t shave themselves.

9 Once a player knows what his or her own eye color is, that player instantly loses.

10 Whereof one cannot speak, one must remain silent.

# Why I Am Changing My Mind About AI Risk

The following is pretty unpolished, and except for some minor edits was written a long time before I published it. It might not be representative of my current thoughts in multiple ways. Nonetheless I think publishing it will be more honest and informative than not publishing it.

I have long been an observer of Less Wrong and the rationality movement. When I first read Eliezer Yudkowsky’s Sequences (around 2010) I was very entertained by them as well-written texts which, as I thought, got a surprising number of difficult questions right. Of course, there were many points where I disagreed with him, and still do. When I read his futurist articles, they made sense, but I was skeptical. Their point of view was weird, but in a sense I found that appealing — I had already thought it likely that a truly rational look at our world would be equally weird, so I saw it as fitting in some sense with the rest of the Sequences’ rationalist message. Arguments, more popular during the early history of Less Wrong, that even with a small chance of success donating to SIAI (as MIRI was then called) had an enormous expected value did not convince me, but they would occasionally frighten me when I thought about them.

I decided to hold off before making decisions, think about things some more, and definitely not give them money while there was a large chance they were loonies. So I gave myself time to think about the arguments for and against. Eventually (I believe around 2011 or 2012) I decided that AI risk and its proponents really were ridiculous.

I am now in the process of changing my mind about this. Here are three reasons why:

1. I am amazed by the progress of AI in the past few years using convolutional and recurrent neural networks. In particular, the observation that underlying the large variety of recent achievements is a relatively small set of relatively simple ideas suggests to me that there really is an underlying method behind “general-purpose” intelligence and that we have found a piece of it. Whereas beforehand I was unconvinced that any line of research could be known to have relevance to future general AI, I see this as a possible counterexample.

2. My opinion on the issue is influenced by my impression of what other people think about it. I try to be open and unrepentant about this — I believe that learning from the collective opinions of others is rational. AI risk did not look good. It was mostly only discussed seriously in a single community over a small number of websites. While the advocates claimed they were doing a technical research program, they were to a large extent unconnected to academia, and were fundraising from the public rather than from institutions qualified to judge them on a technical basis, which is suspicious.

This is changing. Although AI risk is still not mainstream, it has gotten much bigger than it used to be. I now believe that even if I hadn’t initially gotten into Less Wrong when I did I still would have been exposed to these ideas. And now that there are more players in the game I can better mentally separate the questions of whether MIRI is any good and whether AI risk as a whole is important.

In retrospect it seems like I did not react to this factor as quickly as I could have.

3. Over the past two years, I have undergone a period of psychological hardship, and I worry it has affected my cognition. In particular, it may have increased my positive affect toward the rationalist community. Optimistically, this let me look at the issues in a new light, unbiased by my prior preconceptions. Pessimistically, in a moment of weakness I have let new opinions enter without examining them with due diligence, and a subtle flaw lies hidden. I imagine another person in my situation might become religious.

Of these reasons, the most worrying is the first one — it says we might not have much time. The third one is also worrying to me on a personal level.

In the short term, here are some things that I think of doing:

1. I still disagree in significant ways with many of the positions current advocates have on the details of AI risk. I hope I’ll be able to write up my thoughts on this matter, and that people will give it enough attention either to be convinced or to take the time to convince me otherwise. Currently the writer I’ve seen whose opinion most resembles mine is Paul Christiano.

2. Seeing as I’m still not sure, and may never be, about AI risk, I intend to be on the lookout for any reason to change my mind again. Unfortunately, if I start thinking of myself as seriously committed to AI risk, changing my mind will be more difficult.

Edit (2017-01-04): Changed title from “Why I Changed My Mind About AI Risk”. This is the title I was intending and don’t know why I ended up writing “changed” instead of “am changing”.

# A fun Banach space fact

The dual of $L^{1 + \sigma}$ is $L^{1 + \frac {1} {\sigma}}$.
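Indeed, $1 + \sigma$ and $1 + \frac{1}{\sigma}$ are Hölder conjugate exponents; the check is one line:

```latex
\frac{1}{1 + \sigma} + \frac{1}{1 + \frac{1}{\sigma}}
  = \frac{1}{1 + \sigma} + \frac{\sigma}{\sigma + 1}
  = 1 .
```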

# A fun complex analysis fact

$e^{i \pi} = -373791533.224\dots$

Derivation under the cut

# Hypothetical

Suppose an alien came down to earth and described an operation it can perform on you. This operation will make you a lot stronger and tougher physically. It will also make you a lot smarter, more conscientious, and can even make you a more moral person. Inexplicably, if you get this operation people will take you more seriously and be more likely to believe anything you say. And all of these effects are huge.

There’s one catch: Anyone who undergoes this operation becomes completely obsessed with cleaning pottery jugs. Seriously, it doesn’t matter how boring you consider this now, it’ll be what you spend all of your time on. If you have anything else you care about you’ll still remember that, but it’ll take second place to cleaning pottery jugs.

Now, you’re expecting the way this hypothetical will continue: you’ll be asked what you think about this operation, and whether you want the alien to perform it on you or not. Yeah right! As if the alien gives a fuck what you think! In fact it has already done the operation. Luckily it takes a while to take effect, so you still have some time to be your old self for a few years. Good luck!