FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“Lights out for all of us.” [Translation: Death for all humanity.]

TRANSCRIPT. In March 2023 an open letter sounded the alarm on the training of giant AI experiments. It was signed by over 30,000 individuals, including more than 2,000 industry leaders and more than 3,000 experts. Since then there has been growing concern about out-of-control AI. CBS NEWS: “Development of advanced AI could pose, quote, profound risk to society and humanity.” – ABC NEWS: “They say we could potentially face a dystopic future or even extinction.” People are right to be afraid. These advanced systems could make humans extinct, and with threats escalating, it could be sooner than we think. Autonomous weapons, large-scale cyberattacks, and AI-enabled bioterrorism endanger human lives today. Rampant misinformation and pervasive bias are eroding trust and weakening our society. We face an international emergency. AI developers are aware of and admit these dangers. NBC NEWS: “and we’re we’re putting it out there before we actually know whether it’s safe” – OPENAI: “and the bad case, and I think this is like important to say, is like lights out for all of us”. ANTHROPIC: “a straightforward extrapolation of today’s systems to those we expect to see in 2 to 3 years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks”. CBS NEWS: “in 6 hours he says the computer came up with designs for 40,000 highly toxic molecules”. They remain locked in an arms race to create more and more powerful systems with no clear plan for safety or control. GOOGLE: “It can be very harmful if deployed wrongly, and we don’t have all the answers there yet, and the technology is moving fast. So does that keep me up at night? Absolutely.” They recognize a slowdown will be necessary to prevent harm but are unable or unwilling to say when or how. There is public consensus, and the call is loud and clear: regulate AI now.
We’ve seen regulation-driven innovation in areas like pharmaceuticals and aviation. Why would we not want the same for AI? There are major downsides we have to manage to be able to get the upsides. There are three key areas where we are calling on US lawmakers to act. ONE: Immediately establish a registry of giant AI experiments maintained by a US federal agency. TWO: Build a licensing system to make labs prove their systems are safe before deployment. THREE: Take steps to make sure developers are legally liable for the harms their products cause. Finally, we must not stop at home. This affects everyone. At the upcoming UK Summit, every concerned nation must have a seat. We are looking to create an international multi-stakeholder auditing agency. This kind of international cooperation has been possible in the past: we coordinated on cloning and we banned bioweapons, and now we can work together on AI.