FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Another insider’s p(doom) prediction: 5-50%.  A very good read from a respected source!

HUFFPOST. New OpenAI Leader’s Chilling ‘Doom’ Warning May Scare Your Pants Off.

Emmett Shear voiced his concerns about the dangers of artificial intelligence in a resurfaced clip.

The new interim CEO of OpenAI suggested earlier this year that artificial intelligence holds a level of potential risk for humankind that “should cause you to s**t your pants.”

Emmett Shear, the co-founder and former CEO of Twitch, was appointed over the weekend to lead OpenAI after its board of directors ousted its longtime CEO Sam Altman in a shock firing on Friday.

In a June interview on “The Logan Bartlett Show” podcast, Shear said he feared that AI technology could evolve until it is smart enough to design artificial intelligence on its own, “fully self-improve itself” and outsmart humans.

“That kind of intelligence is just an intrinsically very dangerous thing,” he said. “Because intelligence is power. Human beings are the dominant form of life on this planet, pretty much entirely because we’re smarter than the other creatures now.”

He assessed the “probability of doom” as somewhere between 5 and 50%.

His interviewer, Bartlett, noted that most AI experts seem to place some percentage of risk on the technology. He cited Paul Christiano, an AI researcher and former OpenAI employee who has thrown around a variety of alarming figures about the probability of bad outcomes from AI in the long term.

“That should cause you to shit your pants,” Shear said.

In June, Shear shared a clip from the interview on X (formerly Twitter), saying the exchange “more or less captures my high level beliefs around AI and how dangerous it is.”

That clip was resurfaced and shared on Sunday after Shear was announced as OpenAI’s interim leader.

In a Sunday post on X, Shear said he accepted the role because he believes OpenAI to be one of the “most important companies currently in existence” and “ultimately I felt that I had a duty to help if I could.”

He shared a three-point plan to be executed over the next 30 days, including hiring an independent investigator to dig into Altman’s ouster and the “entire process leading up to this point.”

The findings, he said, would inform whether the company needed further governance changes.

The exact reason for Altman’s firing was not disclosed by the company, but the board said in its announcement that he was “not consistently candid in his communications with the board.”

In an internal memo obtained by The New York Times, board members said Altman’s “lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.”

The news sent shockwaves through the tech industry over the weekend, and punctuates the ongoing debate over the multibillion-dollar AI boom and the potential perils the technology poses for the human race.

Learn More:

1:53:57. “As a result, like my p(doom), my probability of doom is […] between 5 and 50. So there’s a wide spread. Would you maybe go 2 and 50? Paul Christiano, who handled, uh, you know, a lot of the stuff within OpenAI, I think said 25 to 50. It seems like if you, if you talk to most AI researchers, there’s some preponderance of people that give that. That should cause you to shit your pants. But it’s human-level extinction, I think. Yeah, yeah. No, it’s not just a human-level extinction. Extincting humans is bad enough. It’s like potential destruction of all value in the light cone. Like, like not just for us, but for any species caught in the wake of the explosion. Like, uh, it’s like a universe-destroying bomb. Like, it’s really, if it’s really bad, it’s bad in a way that, like, makes global warming not a problem. It’s bad in a way that makes normal kinds of bad not a […] I’m like, yeah, we’ll just roll the dice. It’s fine. We’ll figure it out later. Yeah, no, no. Like, this is not a figure-it-out-later thing. This is like a big fucking problem. […] it’s like someone figured out how to invent a way to make like 10x more powerful fusion bombs out of like sand and bleach and that anyone could do at home. Yeah. Um, it’s terrifying.”

AI Concerns
1:44:45
Uh, shifting gears, artificial intelligence, uh, feeling you might have some concerns about the path we’re on and some
1:44:54
elements of how things are playing out. I mean, so I guess I have, I have a very specific concern about AI.
1:45:00
Like generally I’m a, I’m very pro technology and I really believe in sort of like, uh, the upsides usually outweigh the downsides.
1:45:09
Everything. Technology can be misused. You should usually wait until, eventually, as we understand it
1:45:16
better, you wanna put in regulations. But like regulating early is usually a mistake.
1:45:21
Um, when you do do regulation, you wanna make regulations that are about, uh, reducing risk and, uh, allowing for innovation, actually, uh, authorizing more innovation.
1:45:31
Innovation’s usually good for us. I sort of have a, a very, a very high level like syllogism about
1:45:38
like, like AI that I, I’ve come to believe that I think is like correct. It’s like if we can build, if you consider intelligence to be the capacity
1:45:46
to solve problems from a given set of resources to a given goal, we are
1:45:52
building things that are more and more intelligent. The, the... we’ve built an intelligence, it’s kinda amazing actually. We’ve built something that’s, like, definably...
1:45:58
It may not be the smartest intelligence, but it is an intelligence. It can solve problems. We’ve built something that can solve problems, like, arbitrary
1:46:05
problems from arbitrary resources and make arbitrary plans. At some point, as it gets better, the kinds of problems that it’ll be able to
1:46:15
solve will include, uh, programming, chip design, material science, uh, power
1:46:22
production, um, all of the things you would need to design an artificial intelligence.
1:46:28
At that point, you’ll be able to point the thing we’ve built back at itself, and this will happen before you get to that point, with humans in the loop.
1:46:35
It already is happening with humans in the loop, but that loop will get tighter and tighter and tighter and faster and faster and faster until it can fully
1:46:43
self-improve itself, at which point it will get very fast, very quickly, and
1:46:50
a thing that is very, very, very smart. you generate something that is very good, very intelligent, and by intelligent again, I mean this capacity to solve problems.
1:46:57
There’s, there’s lots of ways people use that word. It’s a, it’s an English word, which means it has no one definition. But I mean it in that sense.
1:47:03
I mean, that kind of intelligence is just an intrinsically very dangerous thing because intelligence is power.
1:47:11
Human beings are the dominant form of life on this planet pretty much entirely because we are smarter than the other creatures.
1:47:16
When was the last time intelligence came into a, you know, entity at this level? Right. And within humans, I think people get confused between
1:47:23
these two different things. Within the band of human intelligence, intelligence is not the most important thing. Humans have a lot of other attributes, but intelligence is a
1:47:30
very important thing for people. We have a lot of other attributes. Gorillas are more intimidating as a, uh, species than, than we are.
1:47:36
But gorillas did not build this studio. Yeah. And humans are, like, humans are dangerous as hunters, for
1:47:43
example, because we’re smart. We do things like, you know, crowd the woolly mammoths off cliffs and
1:47:48
set traps and, like, uh, uh... humans, we have, we have whites in our irises.
1:47:54
It’s the exact balance, uh, of white to color, uh, that communicates what we’re looking at.
1:48:00
Other primates have black eyes cause they don’t want the other people, other creatures to see them. Cause they’re trying not, not to give away information.
1:48:06
We have the white, uh, around the iris because it lets you tell what other people are looking at, and you basically have this eerie ability to do it.
1:48:12
You know exactly what other people are looking at. Imagine hunting: these creatures are hunting you, you see one of them, and one of them can, like,
1:48:21
indicate to the one across the way that they just saw a deer by, like, looking at the deer.
1:48:28
There’s no sound exchanged at all. It’s just purely this look. And, like, you have to be pretty smart, have, like, a big,
1:48:33
very strong theory of mind to do that. you build something that is a lot smarter than us, not like somewhat smarter,
1:48:41
again, within humans, the smartest people don’t rule the earth, obviously. Um, but like as much smarter than we are as we are, than like dogs, right?
1:48:50
Like a, a big jump. That thing is intrinsically pretty dangerous because
1:48:56
if it gets set on a goal, that, uh, like, the, the, the
1:49:03
instrumental, the first instrumental step, instrumental convergence, the first instrumental step towards achieving that goal is, we’ll go, step
1:49:09
one, if this is easy for you, cuz you’re really just that smart: well, step one, just kinda, like, take over the planet, right? Like, it’s like, then I just have control over everything, and then,
1:49:16
and then step two, solve my goal. Can you define instrumental convergence for those that didn’t sit through three and a half hours of me talking to Eliezer, or,
1:49:22
go? Yeah. So, so instrumental convergence is this idea that like, uh, often when you’re
1:49:29
trying to achieve a goal, step one is to achieve an instrumental goal along the way. So, like, if you want to, like, for example, like, uh... Uh, what’s, what’s the,
1:49:42
Yeah, yeah. No. So, uh, no, I, I’m thinking more like an instrumental, instrumental convergence. Uh, I’m trying to give an example for where, where it happens to
1:49:47
people, in people’s lives. Um, oh, in chess: your actual goal is to checkmate them, but, like, along the way
1:49:54
to checkmating them, most of the time one of the things you wanna do is take their pieces. Now, you could not take their pieces.
1:50:00
There’s probably... I bet a really, a really good chess player against a kind of mediocre one could checkmate them without taking any pieces,
1:50:09
just like trapping their, their, but like, that’s like even more impressive. But like, generally speaking, if you’re just trying to win a game of chess, taking
1:50:17
their queen, taking their pawns, taking, taking their pieces is a good idea. It like, makes it easier to checkmate them.
1:50:23
And so I can predict something about almost anyone, any good chess player, they’ll take a bunch of their pieces, they’ll take the other
1:50:28
person’s queen eventually, probably. Um, the same way you can predict, uh, that corporations, uh, uh, that want
1:50:38
to expand into a new market, there’s an instrumental goal, like step one, they’ll probably hire people in that market.
1:50:44
just predictable. And in general, if you wanna accomplish most goals, like big goals, step one is
1:50:49
like accumulate a lot of money and power. Like if you have, if you have a goal of, uh, changing the world,
1:50:56
accumulating money and power, or accumulating followers who, like, care, listen to you, and will do what you say. These things are, like, obviously good first steps, even if you don’t, even
1:51:04
if you didn’t even know what the next goal was, they would be good first steps. And if you know what it is, they’re definitely good first steps.
1:51:10
It’s step one, if you can achieve it, along the way to, to achieving any big goal. Yeah. Paperclips is the traditional one.
1:51:17
Um, the first step is just, if you, if you can pull this off, which, like,
1:51:22
you can’t, humans can’t do this. We don’t think of goals like this because we’re not capable enough. But if you could pull it off, step one would be like, well, first I’m just gonna
1:51:29
like, literally make sure I have total control over everything at all times.
1:51:36
And then step two, I’ll, like, do whatever. Step two: do the thing. Easy, I already have control over everything.
1:51:41
No one can stop me. I have access to all the resources. Simple.
1:51:47
And I think people just don’t... it’s hard, it’s hard, people don’t imagine sufficiently capable things as sufficiently capable.
1:51:55
Um, and then, so then, and some people, some people have this idea like, oh,
1:52:00
what if we just don’t give it goals? Well, first of all, we are giving it goals. People are already building agents, but let’s just say we didn’t.
1:52:07
And so you ask this Oracle: what’s the best way for me to accomplish Goal X? And it knows, it takes you literally, um, and it, and it actually
1:52:20
answers your question correctly. The answer to your question will, will be a thing that causes you to
1:52:26
bootstrap an AI that then takes over the world and accomplishes the goal. That’s the most reliable way to accomplish that goal.
1:52:31
Now I just laid out a chain of argument with a lot of, if this, then
1:52:38
this, if this, then this, if this. Uh, I know Eliezer thinks that, like, we’re all doomed for sure.
1:52:45
Um, by his doom argument. I buy the chain and the logic. I just think that, like, first of all, I’m less optimistic that the current
1:52:53
set of technology is gonna get to self-bootstrapping superintelligence. I’m less optimistic than he is that, or optimistic, pessimistic, whatever.
1:53:01
I’m less, I’m less sure than he is that when it hits that self bootstrapping step, that uh, that process will be fast and that we will, that there aren’t important
1:53:12
new discoveries that will take a long time on top of that, that we haven’t found. I’m less sure that, uh, uh, so there’s an idea of alignment, getting it, you could
1:53:25
make the AI such that it wants the same things we want, and then if you ask it to do the thing, it won’t go and do horrible things, cuz it’s not dumb and it’s aligned.
1:53:33
And if it wants the same things, it knows what you mean. It’s smart and it has, it has aligned goals.
1:53:39
Hooray. Like, that’ll work great. I’m le... uh, Eliezer thinks that, like, alignment is this
1:53:45
incredibly hard problem that, like, is almost unsolvable, and we’re doomed. I’m, like, not so sure. I think it’s a more solvable problem than he thinks it is, for a variety of reasons.
1:53:52
You know, just, it would take too long to like, go into, but like, my, my belief is that it’s easier.
1:53:57
Um, and so, uh, as a result, like, my p(doom), my probability of
1:54:04
doom is, like, my bid-ask spread. And that’s pretty high, cause I have a lot of uncertainty, but I would say it’s, like, between, like,
1:54:12
five and 50. So there’s a wide spread. Would you maybe go two and 50? You...
1:54:18
Paul Christiano, who handled, uh, you know, a lot of the stuff within OpenAI, I think said 25 to 50.
1:54:25
It seems like if you, if you talk to most AI researchers, there’s some preponderance of people that give
1:54:30
that. That should, that should cause you to shit your pants. But it’s human-level extinction, I think. Yeah, yeah. No, it’s not just a human-level extinction.
1:54:37
Extincting humans is bad enough. It’s, like, potential destruction of all value in the light cone.
1:54:42
Like, like, not just for us, but for any alien species caught in the wake of the explosion. Like, uh, it’s like a universe-destroying bomb.
1:54:49
Like, it’s really... if, if it’s really bad, it’s bad in a way that, like,
1:54:55
makes global warming, like, not a problem. It’s bad in a way that makes normal kinds of bad
1:55:01
not a... Normally I’m like, yeah, we’ll just roll the dice. It’s fine. We’ll figure it out later.
1:55:07
Yeah, no, no. Like, this is not a figure-it-out-later thing. This is, like, a big fucking problem.
1:55:12
Southern Manhattan, Miami might go underwater, like, okay, but this is... we’re talking about... And so why do you think, I mean, I, I,
1:55:19
it’s, it’s like someone figured out how to invent a way to make, like, 10x more powerful fusion bombs out of, like, sand and bleach, and that,
1:55:28
like, anyone could do at home. Yeah. Um, it’s terrifying.
1:55:33
And, and I’ve had enough time with it now that I can laugh about it. When I first realized it was fucking heart stopping.
1:55:41
When was that? uh,
1:55:47
Probably, like, it was, it was a dinner I went to before, when OpenAI
1:55:56
was just... It was basically right after Attention Is All You Need had been written, like, and they sort of realized the scaling laws were there.
1:56:03
And I went to a dinner and someone was there and they were talking about it and they were like, I think we were, were, uh, we actually might be on the,
1:56:11
the path to building general AI. Attention Is All You Need was 2018? Yeah, I think early 2017.
1:56:19
And yeah, the Google paper that kicked this all off. Uh, and I, I’d heard about the problem.
1:56:25
I thought about it. I just had, I, like, I’d been like, the, the AI doom thing seems plausible, whatever.
1:56:30
Like, I don’t think an AI is coming anytime soon, so I’m just like, not gonna think about it that hard yet. Uh, and then I was like, oh, maybe.
1:56:37
And then I started thinking about it harder, and then I was like, oh, shit. Oh, oh, oh.
1:56:42
Um, and so, uh, uh, I guess the, the proper... I believe the proper response is, like,
1:56:53
unfortunately this isn’t the kind of thing where, uh, we can stop forever.
1:57:01
And it unfortunately is also the kind of thing where like more time is good. Like I’m actually, I’m, I’m okay with stretching out the time a little bit. But like, ultimately to solve the problem, um, this is one of my biggest
1:57:10
points of divergence with, uh, Yudkowsky. Um,
1:57:16
he is a mathematician, philosopher, you know, decision theorist by training.
1:57:22
I am an engineer, and everything I’ve ever learned about engineering is: the only way you will ever get something that works, if you need it to work on the first
1:57:30
try, is to build lots of prototypes and models at the smaller scale and practice and
1:57:35
practice and practice and practice. Try, build... start, start building the thing, but, like, smaller.
1:57:41
And if there is a world where we survive, where everything goes wrong, where we build an AI that’s smarter than humans and we survive it, it’s gonna be because
1:57:51
we built smaller AIs than that, and we actually had lots of, as many people,
1:57:56
smart people as we can working on that and taking the problem seriously.
1:58:02
And so I’m, I’m generally, I’m in favor of trying to create some kind of a fire alarm, where we, like, like, maybe say no AI bigger than X at some point.
1:58:09
Like, try, trying to, like, create a... And I actually think there’s good reason to believe, like, nobody wants to end the world, and this argument
1:58:15
is not that hard to understand. So I actually think there’s a good, there’s a good option for international cooperation and, like, treaties about some sort of, you know, the, the AI
1:58:23
test ban treaty, about not bigger than X, at some point. I don’t think we, I don’t think we’re actually at the point where it just
1:58:29
needs to be not bigger than X; the current AIs are just not that smart yet. I think we should be moving towards creating that kind of a, some
1:58:35
kind of soft... I dunno, we have to figure out what that looks like. Cuz it’s, it’s trickier to set that than it... setting that rule
1:58:42
is way harder than it looks. Writing good policy is hard. We should be thinking about it now. I just think we’re not, I don’t think we’re ready for it yet.
1:58:48
But in the meantime, on these smaller models, we, it is good that lots of people are fucking around with them.
1:58:53
It’s good that we have more and more people trying to figure out how they work and trying to figure out how you can make them do things.
1:59:00
How... figure out how to make them do bad things. The best way to figure out how to stop a big AI from doing bad
1:59:05
things, we’ll figure out... There’s a bunch, it is true, there’s a bunch of failure modes for superintelligent AIs that don’t exist
1:59:11
unless you have superintelligent AIs, and we better not bet on, oh, don’t worry, it works, it works in the dumb ones.
1:59:16
It’ll work. No, no, that’s not how it works. You can’t do that. But like we will figure out, we are figuring out more and more about
1:59:24
the principles of how it works. And if, if we survive, it’s gonna be because that process produces a, a
1:59:30
good generalized understanding, a good generalized model of how these kinds of predictive models work, which I think includes humans, as we are some kind of very complex
1:59:39
predictive model with other stuff too. But like that’s a big part of what a human is.
1:59:44
We’ll understand how those work at a deeper level. We’ll have some of, some science of it actually. And that’s what we need.
1:59:50
We need a science of AIs. Right now we have an engineering of AIs, and no science of AIs. And we need to get, use the engineering to bootstrap ourselves into a science of
1:59:58
AIs before we build a superintelligent AI, so that it doesn’t kill us all. Why do you think people are struggling with the discourse around this?
2:00:06
Like, very smart people seem to object... It’s very, very obvious, very obvious: mood affiliation rules everything around me.
2:00:12
People make, don’t make decisions on a reasoning basis. I mean, myself included, most of the time I happen to, like, I happen
2:00:19
to find this problem interesting and compelling on its own. So I spend a lot of time like digging into the arguments themselves
2:00:26
because I was, like, drawn to it. But, like, most of the time I make decisions the same way as everyone else does. A person pitches me on, pitching me on the flat-earth thing,
2:00:33
who’s saying, like, a bunch of things. I don’t really, like, listen to them. I just, like... they say some things and it triggers, like,
2:00:39
oh, you’re part of that tribe. Those people generally don’t, I think, think very clearly. I’m just gonna discount everything you’re saying and ignore you.
2:00:45
Yeah. Robin Hanson talks about 9/11 people, and it’s like, you don’t argue with them. You just sort of move on.
2:00:51
Yeah, exactly. In conversation, the AI people sound like, uh, religious nuts who are telling you about
2:00:57
the end of the... doomsday, the end of the world. And it, it sadly pattern-matches really nicely, right? Like, the AI is like the, you know, is like the Antichrist.
2:01:05
It’s coming, and, you know, if we’re good, the good AI will come and save us from that. It’s like, it sounds like, like, uh, the Christian rapture.
2:01:15
Yes. Um, Last of Us, what was it? Yeah. It, uh, unfortunately, um, reasoning from fictional evidence... it doesn’t work.
2:01:24
And that mood affiliation, it reminds me of... this is not an argument, and
2:01:29
the earth is not round because the flat-earthers sound crazy. The earth is round because you can demonstrably see that the earth
2:01:37
is round and go measure that yourself. And it’s true. And if you make decisions based on anyone who’s telling you that doing
2:01:45
X will unleash a force which is going to kill us all, they sound like a bunch of, uh, crazy religious people
2:01:52
because it’s never happened before. Never happened before... guaranteed, guaranteed, the first time that’s true,
2:01:58
we’re all dead, because you, your algorithm always predicts the same thing. You, the, the thing you’re going through in your head always predicts
2:02:06
the same outcome for anyone who is predicting doom from creating a powerful force beyond human ability.
2:02:13
Now, it is true, you should be skeptical in general when someone proposes that, because there are a lot, there are infinity examples from the past three thou-, you know, 6,000
2:02:22
years of history, of people predicting that falsely about things and, and
2:02:29
people, people made imaginary medicine, medicinal cures,
2:02:34
like tinctures that were supposed to cure you for a very, very long time. And then we made one that worked and, and you just can’t reason that way sometimes.
2:02:43
Sometimes it’s new, sometimes it’s not like before. Usually it’s like before and sometimes it’s not. And I, I am personally convinced.
2:02:50
This time it is not like before. And I encourage everyone who’s in that mode.
2:02:55
Like, the main thing that, what, what I want you to pay attention to is: listen to me. Like, people like me, listen to people like, uh, even Yudkowsky, who I, I disagree
2:03:05
with ’em on the amount of doom, we’re, like, pro-cryonics, pro-, pro-technology.
2:03:11
Like technology’s gonna fix all our problems. Crazies. Yeah. Like if I, if I have a, if I have a defect, it’s that I am too pro-technology.
2:03:17
I want too little regulation. Like, if... Are you doing cryonics, by the way? I have not signed up yet. I really should. I’ve been... it’s one of those things on my to-do list.
2:03:24
I’m failing the rationality test. Yes. Yes. Uh, but, uh, but you’re a techno-optimist.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.