The First Circle

Epistemic Status: Squared

The following took place at an unconference about AGI in February 2017, under the Chatham House Rule, so I won’t be using any identifiers for the people involved.

Something that mattered was happening.

I had just arrived in San Francisco from New York. I was having a conversation. It wasn’t a bad conversation, but it wasn’t important.

To my right on the floor was a circle of four people. They were having an important conversation. A real conversation. A sacred conversation.

This was impossible to miss. People I deeply respected spoke truth. Their truth. Important truth, about the most important question – whether and when AGI would be developed, and what we could do to change that date, or to ensure a good outcome. Primarily about the timeline. And that truth was quite an update from my answer, and from my model of what their answers would be.

I knew we were, by default, quite doomed. But not doomed so quickly!

They were unmistakably freaking out. Not in a superficial way. In a deep, calm, we are all doomed and we don’t know what to do about it kind of way. This freaked me out too.

They talked in a different way. Deliberate, measured. Words chosen carefully.

I did not know the generation mechanism. I did know that to disturb the goings-on would be profane. So I sat on a nearby couch and listened for about an hour. I said nothing.

At that point, a decision was made to move to another room. I followed, and during the walk I was invited to join. Two participants left, two stayed, and I joined.

The space remained sacred. I knew it had different rules, and did my best to follow them. When I was incorrect, they explained. Use ‘I’ statements. Things about your own beliefs, your models, your feelings. Things you know to be true. Pay attention to your body, and how it is feeling, where things come from, what they are like. Report it. At one point one participant said they were freaking out. I observed I was freaking out. Someone else said they were not freaking out. I said I thought they were. The first reassured me they thought there was some possibility we’d survive. Based on their prior statements, that was an update. It helped a little.

I left exhausted by the discussion, the late hour and the three-hour time zone shift, and slept on it. Was this people just now waking up, perhaps not even fully? Or were people reacting too much to AlphaGo? Was this a West Coast zeitgeist run amok? An information cascade?

Was this because people who understood that there was Impending Ancient Doom, and that we really should be freaking out about it, were used to freaking out vastly more than everyone else? So when Elon Musk freaked out and put huge funding into OpenAI without thinking it through, and other prominent people freaked out, did they instinctively keep their relative freak-out level a constant amount higher than the public’s, resulting in an overshoot?

Was this actually because freaking out about Donald Trump had given people a sense we were doomed, and this was a way to express that?

Most importantly, what about the models and logic? Did they make sense? The rest of the unconference contained many conversations on the same topic, and on many others. There was an amazing AI timeline double crux, teaching me both how to double crux and much about timelines and AI development. But nothing else felt like that first circle.

As several days went by, and all the data points came together, I got a better understanding of both how much people had updated and why. I stopped freaking out. Yes, the events of the previous year had been freakout-worthy, and shorter timelines should result from them. And yes, people’s prior timelines had been both too long and based on heuristics not much correlated with actual future events. But this was an over-reaction, largely an information cascade, driven by a confluence of reasons.

I left super invigorated by the unconference, and started writing again.

Meta-note: This should not convince anyone of anything regarding AI safety, AI timelines or related topics, but I do urge all to treat these questions with the importance they deserve. The links in this post by Raymond Arnold are a good place to start if you wish to learn more.


Responses to The First Circle

  1. gwern says:

    (Elon Musk has not actually put a billion dollars into OpenAI. If you look at their Form 990, their total budget is ~$14m and it’s unclear how much of that even comes from Musk: https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/ )

  2. michealvassar says:

    Really worth reading.
    https://blog.openai.com/ai-and-compute/
    Suggests that the last six years of progress are unlikely to continue at a similar rate past the next three years, though progress may remain rapid until the low-hanging fruit in hardware architecture are depleted.

  3. Anonymous says:

    How are you feeling about it now over a year later? (And post-AlphaZero?)

    • TheZvi says:

      Similar feelings now. The people who are used to freaking out seem to have very short timelines, but don’t do the potentially crazy things one would do if one had such an alief. Some of the people who are used to not freaking out, and increasing parts of the culture at large, are now freaking out some, which is good, since they should be.

      I have huge uncertainty, and feel I’ve only resolved enough (for now) to rule out doing otherwise insane things. If and when I come to decision points where the answer matters more, I’ll work to get more clarity.
