URGENT AND WORTH YOUR TIME.
“So, little recap. Here’s what we’ve established so far: The makers of AI openly admit their technology is not controllable, that they have no idea how to control it. They openly admit they don’t understand how it works, why it does what it does. They openly admit it has the power to cause human extinction in the near term, and they openly admit they’re focusing all of their time and money on making it stronger, not safer. Also, there is no government regulation of this whatsoever. Currently the sale of a ham sandwich is far more regulated than anything to do with AI… oh, and then there’s this: they can’t stop themselves, because they see AI and AGI as inevitable, and in order to make their childhood science fiction fantasies come true, they’ve got to be the ones to make it.” (39:23)
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is episode #1. Thank you for watching. I’m just a dad with a small business who doesn’t want everyone to die. I hope you find the content accessible and informative. Any and all feedback is more than welcome in the comments; I’d like to make this interactive.

In March 2023, following the release of GPT-4, I came across an article in Time Magazine online that changed my perspective on everything. I’m an optimist to my core, and a lover of technology. But that one article changed my outlook on the future more than anything I thought was remotely possible. The Time article was written by Eliezer Yudkowsky, a widely respected AI safety leader and researcher who has been working on AI safety for more than 20 years. He wrote: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

LITERALLY EVERYONE ON EARTH WILL DIE. That’s what he wrote. I read the article a dozen times. The journalist in me tried to poke holes. I couldn’t find one. I’m an optimist who loves to have fun; I thought there was no way this could be real. LITERALLY EVERYONE ON EARTH WILL DIE?!?

I went down a rabbit hole: hundreds of hours of podcasts and dozens and dozens of articles and books. Much gratitude and respect to the brilliant hosts who got me to this point: Lex Fridman, Dwarkesh Patel’s Lunar Society Podcast, Tom Bilyeu’s Impact Theory, Flo Read’s UnHerd, Ed Mylett, Harry Stebbings, Bankless, the Future of Life Institute Podcast, Eye on AI with Craig Smith, Liron Shapira, Robot Brains, and more. Your work is foundational to this FOR HUMANITY podcast, and whenever I use a clip from someone else’s podcast I will give full credit, promotion, and thanks. Your work is foundational to this debate.

I am convinced, based on extensive research and decades of investigation by many leading experts, that on our current course, human extinction due to artificial intelligence will happen in my lifetime, and most likely in the next two to ten years.

In episode one, this podcast lays out how the makers of AI:
-openly admit their technology is not controllable
-openly admit they do not understand how it works
-openly admit it has the power to cause human extinction, within 1-50 years
-openly admit that they are focusing nearly all of their time and money on making it stronger, not safer
-beg for regulation (who does that?), yet there is no current government regulation; the sale of a ham sandwich is far more regulated
-are trying to live out their childhood sci-fi fantasies

RESOURCES:
Follow Impact Theory with Tom Bilyeu on YouTube • Full Episodes of Impact Theory
Follow Lex Fridman Podcast on YouTube / @lexfridman
Follow Dwarkesh Patel’s Podcast on YouTube / @dwarkeshpatel
Follow Eye on AI with Craig Smith on YouTube / @eyeonai3425
Follow UnHerd Podcast on YouTube https://www.youtube.com/@UnHerd/about
Follow Robot Brains Podcast on YouTube / @ucxnviqjbonxljxkjznv-xbw
Follow Bankless Podcast on YouTube / @bankless
Follow Ed Mylett Podcast on YouTube / @edmylettshow
Follow 20VC with Harry Stebbings on YouTube
Follow the Future of Life Institute Podcast

Eliezer Yudkowsky / esyudkowsky

Connor Leahy / npcollapse
➡️ Conjecture Research https://www.conjecture.dev/research/
➡️ EleutherAI Discord / discord
➡️ Stop AGI https://www.stop.ai/

Max Tegmark’s Twitter: / tegmark
➡️ Max’s Website: https://space.mit.edu/home/tegmark
➡️ Future of Life Institute: https://futureoflife.org

Mo Gawdat
Website: https://www.mogawdat.com/
➡️ YouTube: / @mogawdatofficial
➡️ Twitter: / mgawdat
➡️ Instagram: / mo_gawdat

Future of Life Institute
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: / flixrisk
➡️ LINKEDIN: / futu. .