FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

AGI (and the ASI intelligence explosion) is closer than you think… a lot closer.

It could be already happening. Seriously.

AI models are now writing code and generating synthetic data (knowledge) for the next generation of AI models.

Meta-cognition is an emergent behavior, meaning the machine understands itself and can improve itself with synthetic code and content.

Understanding mathematics means understanding everything.

Self-improving code + synthetic data + self-play + self-learning + scale = intelligence explosion.

Learn more about Q-learning and the Q* (Q-star) breakthrough at OpenAI:
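For context, plain Q-learning itself is a well-established reinforcement-learning algorithm. The sketch below is a minimal tabular Q-learning example on a hypothetical 5-state corridor (all names and parameters here are illustrative); it shows only the classic update rule, not whatever OpenAI's rumored "Q*" actually is, which has never been publicly confirmed.

```python
import random

# Minimal tabular Q-learning on a toy 1-D corridor of 5 states.
# State 4 is the goal (reward +1); actions: 0 = move left, 1 = move right.
# This is an illustrative sketch of classic Q-learning, NOT a description
# of OpenAI's unconfirmed "Q*" system.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move along the corridor; reaching the goal pays +1 and ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, seed=0):
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2, r, done = step(s, a)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [0 if q[0] > q[1] else 1 for q in Q]
print(policy)  # the learned policy moves right toward the goal in states 0-3
```

After training, the greedy policy in every non-terminal state points right, toward the goal: the value of reaching the reward propagates backward through the table via the max term in the update.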

References for convenience…

Recent example tweets that are certainly worrying…

Question: Can this be true?

“The AI safety field is a FANTASTIC example of organic growth – it was just Eliezer and a few dozen nerds for like two decades. Now there are a few hundred nerds, funded by donations from other nerds. That’s it. That’s Big AI Safety”

Answer: Basically, Yes.
