FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Who Would Win the AI Arms Race?

Bloomberg’s Nate Lanxon and Jackie Davalos are joined by controversial AI researcher Eliezer Yudkowsky to discuss the danger posed by misaligned AI. Yudkowsky contends that AI is a grave threat to civilization, that there’s a desperate need for international cooperation to crack down on bad actors, and that the chance humanity survives AI is slim.

P(doom)

“Nobody wins an AI arms race, except AI”

Interviewer: Walk us through your argument behind why misaligned artificial intelligence will actually lead to the destruction of humanity. Maybe I’m missing something there.

Yudkowsky: Well, if it’s smarter than us, and we don’t understand it, and we couldn’t shape it, and it ended up with the sort of weird desires that I would predict things get when you train them by gradient descent, which is something of a more technical story. But the view from a thousand feet is that when you’re trying something for the first time, things sometimes go wrong. It ends up wanting weird stuff. Most weird things you can want imply using lots of resources, doing lots of computation, and not having humans interfere with you. And if something is very, very smart, it probably gets what it wants, even if we want something different.

I can try to zoom in on possible ways that something much, much smarter than us might defeat us in a fight, if it came to that, but I wouldn’t be able to guess it accurately, for the same reason that the 11th century would not be able to call how the 21st century would roll over them militarily if it came to a fight across a thousand years. It’s the same reason that if you were playing chess against the best modern AI chess players, much, much better than you and I, I couldn’t tell you “the AI is going to move here; it’ll defeat you using this tactic.” If I could predict the AI playing that well, I’d be that smart at chess myself; I could just move wherever I predicted the AI would move.

So I can try to put lower bounds on how badly we would lose a fight against something much smarter than us. I can observe, for example, that AlphaFold 2 has basically cracked the problem of predicting a protein structure from DNA sequences, which is one of the keystone abilities you need to synthesize your own artificial life forms. If I was going deeper into that conversation, I could talk about how proteins themselves fold up and are held together by van der Waals forces, basically by static cling, and how those are weaker forces. That’s why your hand isn’t as strong as steel, as strong as concrete. It’s not that life is inherently squishy; it’s that it’s made out of squishy materials. There are stronger ways to put life together, things like bacteria with a hundred times the strength and power density of the bacteria we know. We would lose that fight very badly. It would probably not inform us that a fight was going on until we had already lost, essentially. That is what happens when you go up against something smarter than you.

Interviewer: But there’s such a big gap between predicting the protein structures of, you know, 200 million proteins and a synthetic life form being created that poses a threat to us. What is the transition between those two things?

Yudkowsky: Additional brain work that an AI might be able to do very, very fast, if it was smarter and faster than us. When you hear the word “AI,” think maybe of an entire alien civilization contained in a box, running at a million times human speed. Also, the aliens are smarter than us.
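The gradient descent that Yudkowsky refers to, and the worry that an optimizer pursues whatever its training signal rewards rather than what the designer intended, can be sketched minimally. This toy example is not from the episode; the objective and numbers are illustrative assumptions chosen to show a proxy loss steering the optimizer somewhere the designer did not want.

```python
# Minimal sketch (illustrative, not the speaker's example): gradient descent
# faithfully minimizes whatever loss it is given, which may be a proxy for
# the designer's real goal.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step downhill against the gradient of a loss."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Designer's intended goal: keep x near 1.
# Proxy loss actually optimized: (x - 3)^2, with gradient 2*(x - 3).
proxy_grad = lambda x: 2 * (x - 3)

result = gradient_descent(proxy_grad, x0=0.0)
print(round(result, 2))  # converges toward 3.0, the proxy's optimum
```

The optimizer lands near 3, the minimum of the proxy loss, not near the intended value of 1: the training procedure worked perfectly, and the mismatch lives entirely in the objective it was handed.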