Why I Am Changing My Mind About AI Risk

The following is pretty unpolished, and except for some minor edits was written a long time before I published it. It might not be representative of my current thoughts in multiple ways. Nonetheless I think publishing it will be more honest and informative than not publishing it.

I have long been an observer of Less Wrong and the rationality movement. When I first read Eliezer Yudkowsky’s Sequences (around 2010) I was very entertained by them as well-written texts which, as I thought, got a surprising number of difficult questions right. Of course, there were many points where I disagreed with him, and still do. When I read his futurist articles, they made sense, but I was skeptical. Their point of view was weird, but in a sense I found that appealing: I had already thought it likely that a truly rational look at our world would be equally weird, so it seemed to fit with the rest of the Sequences’ rationalist message. Arguments, more popular during the early history of Less Wrong, that donating to SIAI (as MIRI was then called) had an enormous expected value even with a small chance of success did not convince me, but they would occasionally frighten me when I thought about them.

I decided to hold off before making decisions, to think about things some more, and definitely not to give them money while there was a large chance they were loonies. So I gave myself time to think about the arguments for and against. Eventually (I believe around 2011 or 2012) I decided that AI risk, and its proponents, really were ridiculous.

I am now in the process of changing my mind about this. Here are three reasons why:

  1. I am amazed by the progress of AI in the past few years using convolutional and recurrent neural networks. In particular, the observation that underlying the large variety of recent achievements is a relatively small set of relatively simple ideas suggests to me that there really is an underlying method behind “general-purpose” intelligence, and that we have found a piece of it. Whereas beforehand I was unconvinced that any line of research could be known to have relevance to future general AI, I see this as a possible counterexample.

  2. My opinion on the issue is influenced by my impression of what other people think about it. I try to be open and unrepentant about this: I believe that learning from the collective opinions of others is rational. By this measure, AI risk did not look good. It was discussed seriously mostly within a single community, across a small number of websites. While its advocates claimed to be pursuing a technical research program, they were to a large extent unconnected to academia, and were fundraising from the public rather than from institutions qualified to judge them on a technical basis, which was suspicious.

    This is changing. Although AI risk is still not mainstream, it has gotten much bigger than it used to be. I now believe that even if I hadn’t initially gotten into Less Wrong when I did, I would still have been exposed to these ideas. And now that there are more players in the game, I can better mentally separate the question of whether MIRI is any good from the question of whether AI risk as a whole is important.

    In retrospect it seems like I did not react to this factor as quickly as I could have.

  3. Over the past two years, I have undergone a period of psychological hardship, and I worry it has affected my cognition. In particular, it would have increased my positive affect toward the rationalist community. Optimistically, this let me look at the issues in a new light, unbiased by my preconceptions. Pessimistically, in a moment of weakness I let new opinions enter without examining them with due diligence, and a subtle flaw lies hidden. I imagine another person in my situation might become religious.

Of these reasons, the most worrying is the first one — it says we might not have much time. The third one is also worrying to me on a personal level.

In the short term, here are some things I am thinking of doing:

  1. I still disagree in significant ways with many of the positions current advocates hold on the details of AI risk. I hope I’ll be able to write up my thoughts on this matter, and that people will give them enough attention either to be convinced or to take the time to convince me otherwise. Currently, the writer I’ve seen whose opinions most resemble mine is Paul Christiano.

  2. Seeing as I’m still not sure about AI risk, and may never be, I intend to be on the lookout for any reason to change my mind again. Unfortunately, if I start thinking of myself as seriously committed to AI risk, changing my mind will become more difficult.

Edit (2017-01-04): Changed title from “Why I Changed My Mind About AI Risk”. This is the title I was intending; I don’t know why I ended up writing “changed” instead of “am changing”.


2 thoughts on “Why I Am Changing My Mind About AI Risk”

  1. >Of these reasons, the most worrying is the first one — it says we might not have much time. The third one is also worrying to me on a personal level.

    It is not clear to me which reasons you refer to in this sentence. Did you miss further points in one of the previous enumerations? Did the numbers get messed up?
