FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Three dangers of advanced AI dangers I’m going to talk about three really briefly misuse by humanity misalignment and loss of control…

Danger 1: Misuse by humanity terms of misuse by Humanity well powerful AI is a force multiplier a person can use it to achieve their ends more effectively but if those ends have bad intentions like crime or terrorism then AI will make the situation much much worse for example interfering with elections or slandering competitors online by posting things about them that are not true by posting deep fakes or using AI to figure out the best way to build weapons or evade surveillance next

Danger 2: Misalignment there’s misalignment which basically means that human goals and AI goals are not aligned that means that if the AI starts taking a lot of actions and humans don’t notice for a while we could end up where we don’t want to be and this can arrive totally innocently by actually specifying instructions to an AI that aren’t precise enough for Example: AI asked to save the ocean example let’s say I create an AI and I give it the goal to prevent the ocean from filling up with plastic so the AI starts investigating and contributing to recycling efforts great but now notice that the AI has an incentive to keep itself around so that it continue monitoring the ocean for plastic in other words its terminal goal or final goal is to prevent the ocean from filling up with plastic but now it has derived another goal along the way which is called an instrumental goal and this instrumental goal is don’t die because then I can’t prevent the ocean from filling up with plastic this type of self-preservation instrumental goal can be extremely dangerous because now the AI will resist attempts to shut it down it might even lash out or kill people that try to interfere with it and all because the only things it cares about are its terminal goals the AI researcher Robert Miles gives some great examples about this he describes a stamp collector super intelligence and all the ways it could go wrong quote similarly the things humans care about would seem stupid to the stamp collector because they result in so few stamps so one has to be very careful when specifying goals for an AI so that you don’t get misalignment but in the worst case misalignment can cause loss of control

Danger 3: Loss of control which is the third problem basically as soon as you get self-preservation as a terminal goal or even an instrumental goal you’re kind of out of luck although there are plenty of other ways that we could lose control of a super intelligence of course that’s why Advanced AI in general from AGI to Super intelligence is considered to pose existential risk or X risk to humanity Max tegmark has called it a basic darwinian error to introduce a species that is smarter than you are every time in history that this has happened the less intelligent species gets wiped out Future of life institute quote other sources are even more Stark about the danger here’s a quote from the future of Life Institute surveys suggest that the the public is concerned about AI especially when it threatens to replace human decision-making however public discourse rarely touches on the basic implication of super intelligence if we create a class of agents that are more capable than humans in every way Humanity should expect to lose control by default and in the worst case of course this could lead to human extinction

0:00 Intro 0:25 Contents 0:32 Part 1: The superintelligence dilemma 1:03 ChatGPT and general purpose AIs 1:27 Yoshua Bengio introduction 1:59 Yoshua Bengio’s impact and h-index 2:29 What’s next for AI development? 2:52 Hard to understand the scale of risks 3:11 Three dangers of advanced AI 3:19 Danger 1: Misuse by humanity 3:48 Danger 2: Misalignment 4:08 Example: AI asked to save the ocean 5:20 Danger 3: Loss of control 5:53 Future of life institute quote 6:22 Takeaway: the public needs to be more informed (Yoshua) 7:33 There are dangers but we’re racing ahead 8:06 Why “just stopping” isn’t a good strategy 8:50 Prominent figures disagree about the risk 9:20 Humans are bad at forecasting the future 9:44 Part 2: On a runaway train 9:56 AI companies are trying to build AGI 10:28 Intelligence explosion, iterative AI improvement 11:06 Definition of superintelligence 11:28 Also known as the technological singularity 11:59 Intuition about AGI 12:26 AGI will cause massive job loss 13:15 Intuition about superintelligence 13:55 Example of why advanced technology is magic 14:27 Companies are pressured to develop AI quickly 15:16 Governments also want fast development for militaries 15:39 We’re on a train, and can’t stop 16:10 AGI development timelines are short 16:56 How long will AGI take up create? (Yoshua) 18:22 We should worry about superintelligence coming (Yoshua) 18:47 Part 3: Reorganizing society 18:58 Why regulations alone won’t work 19:18 Approach: laws punish certain activities 20:00 Solution: pause? surveillance? 20:44 Yoshua’s proposal for decentralized governance 21:23 Democratic institutions for building AGI 22:11 Guarding against internal bad actors 23:04 Controlling supply chain to handle rogue organizations 23:42 Defensive technologies will take time 24:22 Plugging all the holes in infrastructure (Yoshua) 25:07 Advancing step by step to allow defenses 25:35 Yoshua on examples of defensive technologies 26:51 Think about all attack vectors, add defenses 27:22 Conclusion 28:08 On a headlong train 28:59 Get involved with the community 29:12 Outro

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.