Control AI. MAGIC Video (1:49) “Why Are We Letting Them Do This?”

    Peter A. Jensen | October 27, 2023

    FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT.

    IMPORTANT COPYRIGHT DISCLAIMER. THIS AI SAFETY BLOG IS FOR EDUCATIONAL PURPOSES AND KNOWLEDGE SHARING IN THE GENERAL PUBLIC INTEREST ONLY. This free, open, not-for-profit ‘First do no harm’ AI-safety blog is curated and organised by BiocommAI. Some of the selected stand-out information is copyrighted and is CITED WITH A LINK-OUT to the respective publisher sources. These vital public-interest stories are selected and presented for educational and knowledge-sharing purposes, to serve the global humanitarian interest regarding the EXISTENTIAL THREAT TO HUMANITY POSED BY THE PROLIFERATION OF UNCONTROLLED, UNCONTAINED, UNSAFE AND UNREGULATED AI TECHNOLOGY. Copyrights owned by the publishing sources are respectfully acknowledged via the link-out to each source. To request a takedown or an update, please contact: info@biocomm.ai

    IMPORTANT DISCLOSURE: None of the information in this blog is meant to be construed as investment advice. This blog is for educational and knowledge-sharing purposes only. Opinions expressed are based upon information considered reliable, but this blog does not warrant its completeness or accuracy, and it should not be relied upon as such; always do your own due diligence. This blog is under no obligation to update or correct any information provided. Statements and opinions are subject to change without notice. No compensation is received for the opinions expressed. Past performance is not indicative of future results. This blog does not relate to any specific outcome or profit. Be aware of the real risk of loss in following any strategy or investment in AI business opportunities or products. Strategies or investments discussed may fluctuate in price or value, and investors may get back less than they invested. Information or strategies mentioned or referenced in this blog may not be relevant for investment analysis. Always seek advice from your own financial or investment adviser.

    Copyright 2024 | All Rights Reserved | BiocommAI Limited | 1st Floor, 9 Exchange Place, IFSC, Dublin 1, D01 X8H2, Ireland
