FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

The Alignment Problem: For Humanity, An AI Safety Podcast Episode #2

“So, that is the alignment problem in a nutshell. How do less intelligent things control more intelligent things?”

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is Episode #2, The Alignment Problem: the makers of AI have no idea how to control their technology or align it with human values, goals, and ethics. For example: don’t kill everyone.

Much gratitude and respect to the podcasters who got me to this point: brilliant hosts Lex Fridman, Dwarkesh Patel’s Lunar Society Podcast, Tom Bilyeu’s Impact Theory, Flo Reid’s UnHerd, Ed Mylett, Harry Stebbings, Bankless, the Future of Life Institute Podcast, Eye on AI with Craig Smith, Liron Shapira, Robot Brains, and more. Your work is foundational to this For Humanity podcast and to this debate, and whenever I use a clip from someone else’s podcast I will give full credit, promotion, and thanks.

I am convinced, based on extensive research and decades of investigation by many leading experts, that on our current course, human extinction due to artificial intelligence will happen in my lifetime, and most likely in the next two to ten years.

This podcast lays out how the makers of AI:
- openly admit their technology is not controllable
- openly admit they do not understand how it works
- openly admit it has the power to cause human extinction within 1-50 years
- openly admit they are focusing nearly all of their time and money on making it stronger, not safer
- beg for regulation (who does that?), yet there is no current government regulation; the sale of a ham sandwich is far more regulated
- are trying to live out their childhood sci-fi fantasies

PLEASE LIKE, SHARE, SUBSCRIBE AND COMMENT. WE HAVE NO TIME TO WASTE.

RESOURCES:
Follow Impact Theory with Tom Bilyeu on YouTube
Follow the Lex Fridman Podcast on YouTube
Follow Dwarkesh Patel’s Podcast on YouTube
Follow Eye on AI with Craig Smith on YouTube
Follow the UnHerd Podcast on YouTube
Follow the Robot Brains Podcast on YouTube
Follow the Bankless Podcast on YouTube
Follow the Ed Mylett Podcast on YouTube
Follow 20VC with Harry Stebbings on YouTube
Follow the Future of Life Institute Podcast

Eliezer Yudkowsky: / esyudkowsky
Connor Leahy: / npcollapse
➡️ Conjecture Research: https://www.conjecture.dev/research/
➡️ EleutherAI Discord: / discord
➡️ Stop AGI: https://www.stop.ai/

Max Tegmark
➡️ Twitter: / tegmark
➡️ Website: https://space.mit.edu/home/tegmark
➡️ Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️ Future of Life Institute: https://futureoflife.org

Mo Gawdat
➡️ Website: https://www.mogawdat.com/
➡️ YouTube: / @mogawdatofficial
➡️ Twitter: / mgawdat
➡️ Instagram: / mo_gawdat

Future of Life Institute
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: / flixrisk
➡️ INSTAGRAM: / futureoflif…
➡️ META: / futureoflife…
➡️ LINKEDIN: / futu…
