FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“If we build AI systems that are more competent than us at politics, at persuasion, at science, what happens? The future is going to belong to them, not to us.” — Andrea Miotti

“If we create entities that are smarter than us and have their own goals, what does that mean for humanity? Are we in danger? … If it’s 5 years, we are not ready.” — Yoshua Bengio

A Narrow Path https://www.narrowpath.co/

5,658 views 3 Dec 2024 #ai #regulation #safety

I interviewed Andrea Miotti of Control AI. We discussed the dangers of letting the race to AGI, or human-level AI, continue unabated. We drew comparisons with the safety engineering that goes into an airplane: the science and the necessary redundancies are well understood. That is not the case with AI. We are essentially building the plane while flying it, and to make matters worse, all of humanity is on board; if we make a mistake, we can all suffer the consequences. A lot of this is already happening: AI has been shown to be more effective than machine-learning engineers at writing machine-learning code, over short timeframes. We discussed several arguments people make against these concerns, such as the claim that AI is not really learning. Andrea presents his three-phase solution, A Narrow Path, which defines what companies, governments, and scientists will have to do to be more certain about our future. An interesting historical analogy: when nuclear weapons were first introduced, companies wanted to sell them on the open market.

Control AI https://controlai.com/

0:00 Intro
0:11 Contents
0:24 Part 1: An introduction to Andrea and ControlAI
0:50 Control AI born from AI summit
2:12 AI labs shouldn’t be trusted to solve this alone
2:58 Situations when governments step in
4:22 Biosecurity example
5:22 Safety engineering to predict bridge failures
6:40 Danger is in the blast radius
8:10 Safety engineering for airplanes
9:18 Understanding building materials important
10:50 Redundant systems possible in planes
12:17 Test pilot comparison
12:44 Startups build the plane while flying it
13:58 How many people are on the plane?
14:35 Recent work from Control AI
15:57 Part 2: What is really the problem with AI risk?
16:08 Threat from AI was an open secret
17:33 Politicians were surprised at sudden news
18:05 How Geoffrey Hinton felt
18:48 Species-level threat deserves attention
19:49 What if it was a 100 year timeline?
21:30 Dealing with risks is not a zero sum game
23:04 We already have too much money, just need attention
23:36 How UK fits into AI safety story
24:37 UK safety institute becoming mandatory
25:47 What do you say to people who think AI risk is science fiction?
26:52 Improving at art, developing inhuman skills
28:43 Brand new skills that are impossible for humans
29:23 Alien form of intelligence
30:16 What about, are AIs just mimicking and not truly learning?
30:34 Competence is all that matters
32:44 Even if AIs forget, what is this if not learning?
33:49 Ants vs humans competing for resources
35:40 Superintelligence resources are unimaginable
36:55 Analogy: difficult to survive in economic terms
38:00 Still a better situation since AI won’t have laws
38:52 Economic argument sounds less sci-fi
40:00 Not all sectors of the economy will advance equally
41:05 The economy is not a canary for AI capabilities
43:16 Why using AI for self improvement is a disaster
44:51 The first version of self improvement is already here
45:50 Results showing AI more capable than ML engineers
47:03 Focus on the acceleration of the whole system
48:25 Models are constantly evolving brand names
49:22 Part 3: What is the path that can keep us alive?
49:51 Phase 0: Stop racing to superintelligence
50:20 Phase 1: Stabilize international relations
51:33 Phase 2: Solve scientific problem of alignment
52:35 Restating the three phases
53:17 Thinking about everything at once is confusing
54:40 Stages all needed, and have sequential dependency
55:25 Companies wanted to sell nuclear weapons
56:05 Do we need something beyond this path?
56:45 We really need proper engineering
58:04 There is a large spectrum of methodologies
59:29 Walking stick analogy
1:00:51 Conclusion
1:01:01 Join the discords
1:01:21 Outro
1:01:48 Someone else is waving too
