FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

A very good read from a respected source!

TOP-20 AGI Known Problems and TOP-20 AGI Mathematically Provable Containment and Control Solutions.

Source: LessWrong

TOP-20 AGI Known Problems

  1. Self-replication/Escape (10532 + 1624 results)
  2. Self-modification (8459 results)
  3. P(doom) (5685 results)
  4. Goodhart’s Law (4370 results)
  5. Goals (4048 results)
  6. Deception (2624 results)
  7. Black box (532 results)
  8. Emergent behavior (grokking) (41 + 414 results)
  9. Power seeking (352 results)
  10. Instrumental convergence (330 results)
  11. Wireheading (321 results)
  12. Value Learning & Proxy Gaming (376 + 19 results)
  13. Cybersecurity (181 results)
  14. Inner misalignment (113 results)
  15. Biosecurity (101 results)
  16. Gradient hacking (91 results)
  17. Misaligned goals (51 results)
  18. Model splintering (50 results)
  19. Ontological crisis (47 results)
  20. Alien mind (30 results)

TOP-20 AGI Mathematically Provable Containment and Control Solutions (alphabetical)

  1. ABSOLUTELY NONE
  2. Gonzo-zip
  3. Goose Egg
  4. Inexistence
  5. Nada
  6. Naught
  7. Nil
  8. Nix
  9. None
  10. Non-existent
  11. Nothing
  12. Nothingness
  13. Null
  14. Nut-tin’
  15. Nyet
  16. Zero
  17. Zilch
  18. Zip
  19. Zippo

Not to worry?

The big tech companies are working on safe AI too. While they admit they do not yet know how to make AI safe – “we are doing the best we can” – they really do hope they can solve the problem (budget allocation across hundreds of billions of dollars: >99% capability, <1% safety). AND government regulations are slowly coming! (routinely watered down by those same tech-industry lobbyists)

Meanwhile, as the tech companies race to the bottom, the average P(doom) estimate among AI engineers currently sits around 40%.

Geoffrey Hinton is widely respected as the “Godfather of Deep Learning AI”:

“So presently ninety-nine percent (99%) of the money is going into developing [AI] and one percent (1%) is going into safety. People are saying all these things might be dangerous so it should be more like 50/50.” — Geoffrey Hinton
