“The new AI tools are far, far more powerful than these social media algorithms and they could cause far more damage.” — Yuval Noah Harari

Social media AI algorithms have become deadly for children; aggressive algorithms that relentlessly serve up a vicious cycle of negative content…

Molly Russell, a 14-year-old from north-west London, is known to have viewed 16,900 posts on Instagram and Facebook linked to topics such as depression, self-harm and suicide before ending her life in 2017.

83% of social media users surveyed were recommended self-harm content on their personalised feeds, such as Instagram’s ‘explore’ and TikTok’s ‘for you’ pages, without searching for it; it was pushed onto their smartphones by algorithms. 75% saw self-harm content online for the first time aged 14 or younger. (Swansea University)

Transcript (auto-generated). We should not be sitting here. This should not happen because it does not need to happen. But we told this story in the hope that change would come about and I hope that the world will be safer. The digital world particularly will be a safer place to inhabit. And the final thing I want to say is thank you, Molly, for being my daughter. Through the inquest we have seen just a fraction of the Instagram posts seen by my 14-year-old daughter Molly in 2017. Posts that were too disturbing for some to see in court. Posts that are not allowed to be broadcast. Posts that caused an expert psychiatrist weeks of poor sleep. And posts the senior coroner has now concluded contributed to Molly’s death. We’ve heard a senior Meta executive describe this deadly stream of content that the platform’s algorithms pushed to Molly as safe and not contravening the platform’s policies. If this demented trail of life-sucking content was safe, my daughter Molly would probably still be alive. It’s time for the government’s Online Safety Bill to urgently deliver its long-promised legislation. It’s time to protect our innocent young people instead of allowing platforms to prioritise their profits by monetising the misery of young people. My simple message to Mark [Zuckerberg] would be just to listen to people that use this platform. Listen to the conclusions that the coroner gave in this inquest and then do something about it.

Learn More:

Parents, internet bans won’t protect your children from social media’s endless loop of harmful content

My daughter Molly died after becoming trapped by an algorithm that served up distressing images – what are social media companies doing about that?

We are seeking to protect our children and young people from aggressive algorithms that relentlessly serve up a vicious cycle of negative content. Content that we know causes significant distress, harm and, tragically in my daughter Molly’s case, death.

Ian Russell. 16 January 2023. The Independent

This week, the Online Safety Bill returns to parliament – a vital bill that, for various reasons, has been subject to numerous delays and setbacks.

While all legislation must undergo robust scrutiny, the narrative around free speech will no doubt dominate much of today’s discussion about online safety. If we more closely monitor online content, are we in effect stifling free speech?

As one of many bereaved parents who have lost their child to the harmful effects of online content, I would argue that no, we are not stifling free speech. We are seeking to protect our children and young people from aggressive algorithms that relentlessly serve up a vicious cycle of negative content. Content that we know causes significant distress, harm and, tragically in my daughter Molly’s case, death.

Free speech is so often trotted out as justification for the existence of harmful content. But, as a proponent of free speech myself, I believe that it should not be conflated with free-for-all content.

The notion that in our offline world we are at liberty to say whatever we want to say is false to start with. The laws of libel and slander are well established and for good reason – they are there to protect people within society from unsubstantiated claims and, presumably, because it was found to be problematic not to have those laws.

But I find it interesting that when it comes to libel or slander, we are readily accepting of the rules, yet when it comes to personal harm and risk to life, there are people trying to block this legislation, arguing that it is an attempt to censor free speech.

Elon Musk’s talk of free speech, and his re-introduction to Twitter of previously banned accounts, shows the dangers of this naive and idealistic approach. Free speech isn’t black and white, it’s much more nuanced than that.

This recent report from Samaritans states that more than three quarters of people surveyed saw self-harm content online by the time they were 14 – with some being 10 or younger. So it’s clear that something needs to be done in order to protect our children.

So what is this harmful content? Well, it’s not necessarily content that has been published with the aim of causing harm. Sometimes, content showing images of self-harm or encouraging suicide is posted by the user to find help and support. Of course, this doesn’t apply to all such content, and there is a wealth of imagery out there that is posted specifically to cause harm.

Regardless of its primary purpose, however, distressing content shouldn’t be accessible to all. Some platforms argue that if somebody has posted self-harm content as a cry for help, then they shouldn’t take it down. But if, in order to help one person towards safety, you make 100,000 people far less safe, that can’t be a responsible approach.

The technology exists to flag such content, so why not remove it with improved signposting to helplines, for example? That way, it creates an immediate pathway for support for the person struggling, and reduces the negative impact it would otherwise have when amplified through social media shares and algorithms.

So this isn’t really about freedom of speech – it’s about freedom to live. A child psychiatrist (who spoke at the inquest into the death of my daughter Molly) stated that he was unable to sleep well for weeks after seeing the social media content viewed by Molly before she killed herself. My daughter was only 14.

Molly was trapped by an algorithm that served up distressing images. The coroner concluded that her death was caused by self-harm, depression and “the negative effects of online content”. But this negative content isn’t necessarily sought out.

The way algorithms work is a bit of a mystery. They are multiple and complex, and we know from the Samaritans report that 83 per cent of people who saw harmful content didn’t seek it out – it was suggested to them through features such as Instagram’s “explore” and TikTok’s “for you” pages. The report also found that 76 per cent who viewed self-harm content online went on to harm themselves more severely because of it.

So what we need from social media companies is more accountability and more transparency. All this talk about “town squares” sounds lovely, but there’s a darker side to social media tech the platforms don’t want to discuss.

In fact, not long after Molly’s death, a whistleblower leaked research conducted in-house by Facebook, showing that, among British teens who reported suicidal thoughts, 13 per cent traced the desire to kill themselves back to Instagram.

So while there’s a broader issue with social media content in terms of harms (filters, unrealistic expectations of beauty, etc), there is a very specific and particularly harmful issue that can be quickly dealt with, if companies had the will – or indeed the legal obligation – to do so. And this is images of self-harm and content that encourages suicide.

There are around 200 school-age suicides in the UK every year. Just one is one too many. So while we debate digital free speech as a concept, young people will be viewing content, becoming distressed and physically hurting themselves.

In the meantime, as parents, carers or teachers, we can only do what is within our power – and this is something I’m going to be discussing as part of the free Now and Beyond Festival on 8 February. We shouldn’t panic and put draconian measures in place. I’m sure there may be times when removing a young person’s internet access is the right thing to do but, in the main, we’ll only isolate our children further if that’s our go-to approach.

So we need to remember that there is no blame on our children’s part. They probably didn’t even seek out the content in the first place and if they did, there’s likely to be a vulnerability that needs addressing. Our children need to know that we won’t judge them and that they can come to us whenever something online upsets them. And, we all need to know how to report such content to encourage urgent removal.

We can’t be tech experts, but we can endeavour to keep the lines of communication open between us and our children or pupils. And until effective legislation comes into force, I hope that social media companies realise that they already have blood on their hands. The important concept of free speech should not be hijacked and distorted in such a way as to allow online harm to seek out new victims.

Ian Russell is the bereaved father of Molly Russell who died in 2017. He is also a campaigner and founder of The Molly Rose Foundation. Ian will be hosting a free online session on digital dependency for teachers and parents/carers with Carrie Langton (founder of Mumsnet), Manjit Sareen (co-founder, Natterhub), Ukaoma Uche (Papyrus suicide prevention charity) and 16-year-old Kai Leighton (Beyond youth board member) as part of the Now and Beyond Festival. To book your free place, click here. Any schools or colleges wishing to book onto the wider Now and Beyond festival can register free here

If you are experiencing feelings of distress, or are struggling to cope, you can speak to the Samaritans, in confidence, on 116 123 (UK and ROI), email, or visit the Samaritans website to find details of your nearest branch.

Yuval Noah Harari argues that AI has hacked the operating system of human civilisation – The Economist

Storytelling computers will change the course of human history, says the historian and philosopher

Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures. Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence. Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there. Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough.

The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools. Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation.

We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world. We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain.

Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday. Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies.

Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy. We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.

This text has been generated by a human.

Or has it?


Yuval Noah Harari is a historian, philosopher and author of “Sapiens”, “Homo Deus” and the children’s series “Unstoppable Us”. He is a lecturer in the Hebrew University of Jerusalem’s history department and co-founder of Sapienship, a social-impact company.