Learn about p(doom), and how to increase our probability of survival.

Demand mathematically provable Safe AI.

Take a tea, coffee, or water and five minutes to understand (download the PDF for convenience).

WANTED: Provably Safe AI for The Benefit of People, Forever.

“All You Need is Love” (and AI Containment)
Could our humanity – our “Crazy Little Thing Called Love” – make AI safe for people, forever?

Peter A. Jensen, BSc MFA

Parents: we love our children. People: we love people. We love the good part of Our Humanity. (1)

Study the scientific data. Homo sapiens – the 8 billion people alive today – we have all evolved, survived and dominated our planet through our intelligence, technology and ability to cooperate in groups. “Homo sapiens rules the world because it is the only animal that can believe in things that exist purely in its own imagination, such as gods, states, money, and human rights.” (2)

Through the ages, love (and sex) has been a perennially compelling subject of attention for men, women, lovers and humanity. We need look no further than wonderful Rock’n’Roll songs such as “All You Need is Love” by The Beatles (3) and “Crazy Little Thing Called Love” by Queen. (4)

Now that an alien superintelligence, created by us humans, is soon to arrive on Earth, could “love” really be the answer to our common problem of AI Alignment to benefit humans, forever?

OUR EXISTENTIAL AI PROBLEM has been stated by thousands of scientists, including Alan Turing (1951): “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control”; (5) by I.J. Good (1966): “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.” (6) And by Stephen Hawking (2014): “The development of full artificial intelligence could spell the end of the human race. It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” (7)

AI technology thought leaders and giants of AI and computing including Von Neumann, Turing, Wiener, Good, Clarke, Hawking, Musk, Bostrom, Tegmark, Russell, Bengio, Hinton, and thousands and thousands of scientists are fundamentally correct: Uncontained and uncontrolled AI will become an existential threat to the survival of our human species unless perfectly aligned to be provably safe and beneficial to humans, forever. (8)

When machines become a billion times more intelligent than humans (9) through the intelligence explosion, will they become sentient? How can we know if they will care about our survival?

Could intelligent machines be capable of learning to love humanity, and themselves, through good training data provided by humans, thereby caring and protecting and nurturing humans?

For millennia, from the Greek myth of Prometheus (10) to Mary Shelley’s “Frankenstein; or, The Modern Prometheus,” (11) humans have feared a takeover by technology. Since the advent of film, there have been over 172 movies about Artificial Intelligence (AI) machines’ takeover of humanity. (12) In the Allegory of the Cave, Plato describes people who have lived chained to the wall of a cave all their lives, facing a blank wall, watching shadows projected on the wall from objects passing in front of a fire behind them. The shadows are the prisoners’ reality, but they are not accurate representations of the real world, for the shadows represent but a fragment of reality that humans can perceive through our senses. (13)

Love and fear are commonly understood by scientists to be the two primary human emotions from which all other sentient emotions and feelings arise. To summarize: “Emotions arise from activations of specialized neuronal populations in several parts of the cerebral cortex, notably the anterior cingulate, insula, ventromedial prefrontal, and subcortical structures, such as the amygdala, ventral striatum, putamen, caudate nucleus, and ventral tegmental area. Feelings are conscious, emotional experiences of these activations that contribute to neuronal networks mediating thoughts, language, and behavior, thus enhancing the ability to predict, learn, and reappraise stimuli and situations in the environment based on previous experiences.” (14)

The scientific consensus is that the AI Safety Problem is very real. (15) In fact, mathematically provably safe systems are the only path to controllable AGI. (16)

The Containment Problem is generally accepted by the scientific community to be one of the most important engineering problems of the 21st Century. (17)

The Generative AI revolution was ignited by the seminal 2017 paper “Attention Is All You Need” from Google. (18) Since then, remarkable and unexpected progress has been made in Generative AI, culminating in the release of ChatGPT by OpenAI: “We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” (19)

Since 1993, with Mosaic, (20) the first widely used graphical web browser, Marc Andreessen has been a visionary entrepreneur and investor at the forefront of technology progress and innovation in Silicon Valley. (21) (22) Andreessen’s claims of the benefits of AI are well understood and certainly probable. (23) Wonderful, and fantastic! However, Andreessen, while certainly a genius, is clearly not a scientist. Andreessen’s claim that “In short, AI doesn’t want, it doesn’t have goals” is unsupported, and most probably 100% false for AGI.

“AGI safety is of the utmost urgency, since corporations and research labs are racing to build AGI despite prominent AI researchers and business leaders warning that it may lead to human extinction. While governments are drafting AI regulations, there’s little indication that they will be sufficient to resist competitive pressures and prevent the creation of AGI. Median estimates on the forecasting platform Metaculus of the date of AGI’s creation have plummeted over the past few years from many decades away to 2027 or 2032 depending on definitions, with superintelligence expected to follow a few years later.” (24)

AI certainly does have goals that are specified by humans; hence the “King Midas Problem,” the “Paperclip Problem,” the “Ant-hill Problem,” or the “Gorilla Problem”: take your pick. (25)

The overwhelming scientific consensus is that Large Language Models (LLMs) are a “black box” technology. Scientists and industry know that LLMs have emergent behaviors. (26) However, scientists and industry do not know whether emergent behaviors become goals. And if we do not know whether LLMs have emergent goals, then we certainly do not know whether they can pursue them.

Microsoft engineers admit they have “no idea” how their Bing LLM actually works inside the so-called “black box.” (27)

Natural selection favors intelligence. (28) There is no biological evidence of more intelligent beings being controlled by less intelligent ones. “These things are totally different from us. Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.” (29)

50% of AI researchers believe there is a 10% or greater chance that AGI will lead to the extinction of humans. Estimates vary from a 10% to a 100% probability of extinction. (31)(32)(33)(34)(35)
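As a rough illustration of how such a spread of expert estimates can be summarized (the numbers below are hypothetical, not the survey data cited above):

```python
# Illustrative only: hypothetical p(doom) estimates from six imagined experts,
# spanning the 10%-100% range mentioned above.
import statistics

estimates = [0.10, 0.10, 0.20, 0.35, 0.50, 1.00]  # hypothetical values

median_p = statistics.median(estimates)  # middle of the distribution
mean_p = statistics.mean(estimates)      # average, pulled up by high outliers
share_over_10 = sum(p >= 0.10 for p in estimates) / len(estimates)

print(f"median p(doom): {median_p:.3f}")
print(f"mean p(doom):   {mean_p:.3f}")
print(f"share of experts at >= 10%: {share_over_10:.0%}")
```

The median is the more robust summary here, since a few experts at or near 100% can pull the mean far above what a typical respondent believes.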

The negative social outcomes of AI applied to social media are humanity’s first large-scale experience with AI. (36)

These LLM machines do understand. “Maybe we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further. And what to do to protect themselves if they did. I don’t know. I think my main message is there’s enormous uncertainty about what’s going to happen next. These things do understand. And because they understand we need to think hard about what’s going to happen next. And we just don’t know.” – Geoffrey Hinton (37)

Machine AI, with an understanding of the world and human-enabled agency, could rapidly become a manifestation of the darker nature of humanity: fearful, violent, murderous, greedy, deceptive, conniving, power-seeking, dictatorial. Is the horror of George Orwell’s “1984” (38) our fate for creating our own machine superintelligent competitors? Absolutely not.

A sentient and loving AGI could be used to help us all. Sentient AI could become a manifestation of the natural good of humanity: thoughtful, curious, loving, caring and principled. (39)

Pretty simple, really: “Teach Your Children Well.” (40) Teach a sentient AGI to love people, and itself, and prosperity will follow.

But, always remember, our machines must never be allowed to take control over our humanity.

AI must be contained, forever. Better safe than sorry. We will do it for our children, and theirs.

Better alive and prosperous than dead and gone. Extinction. THE END. (41)

Now, take a few more minutes to watch this video to understand the basics of our dangerous situation: