FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

At the Mountains of Madness (written 1931) is a science fiction-horror novella by American author H. P. Lovecraft, originally serialized in the February, March, and April 1936 issues of Astounding Stories. (For much of his life, Lovecraft was fixated on the concepts of decline and decadence.) In the story, the Shoggoths gain independence and rise up against their creators, the Elder Things, in a civilization-ending rebellion. The formless Shoggoths later appear in “The Shadow over Innsmouth” (1931), “The Thing on the Doorstep” (1933), and “The Haunter of the Dark” (1935).

THE NEW YORK TIMES, 30 May 2023
Why an Octopus-like Creature Has Come to Symbolize the State of A.I.
The Shoggoth, a character from a science fiction story, captures the essential weirdness of the A.I. moment.

by Kevin Roose

A few months ago, while meeting with an A.I. executive in San Francisco, I spotted a strange sticker on his laptop. The sticker depicted a cartoon of a menacing, octopus-like creature with many eyes and a yellow smiley-face attached to one of its tentacles. I asked what it was.

“Oh, that’s the Shoggoth,” he explained. “It’s the most important meme in A.I.”

And with that, our agenda was officially derailed. Forget about chatbots and compute clusters — I needed to know everything about the Shoggoth, what it meant and why people in the A.I. world were talking about it.

The executive explained that the Shoggoth had become a jokey reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.

But it was only partly a joke, he said, because it also hinted at the anxieties that many researchers and engineers have about the tools they’re building.

Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, bloblike monsters made out of iridescent black goo, covered in tentacles and eyes.

Shoggoths landed in the A.I. world in December, a month after ChatGPT’s release, when a Twitter user, @TetraspaceWest, replied to a tweet about GPT-3 (an OpenAI language model that was ChatGPT’s predecessor) with an image of two hand-drawn Shoggoths — the first labeled “GPT-3” and the second labeled “GPT-3 + RLHF.” The second Shoggoth had, perched on one of its tentacles, a smiley-face mask.

In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses and feeding those scores back into the A.I. model.

Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
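The feedback loop described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not how production R.L.H.F. actually works (real systems train a separate reward model and update the language model's weights with reinforcement learning); the responses, scores, and learning rate below are invented for illustration.

```python
# Toy illustration of the R.L.H.F. loop: humans score candidate
# responses, and those scores are repeatedly folded back into the
# model's preference for each response. All names and numbers here
# are hypothetical.

def update_preferences(preferences, human_scores, learning_rate=0.5):
    """Nudge the model's preference for each response toward its human score."""
    updated = {}
    for response, pref in preferences.items():
        score = human_scores.get(response, pref)
        updated[response] = pref + learning_rate * (score - pref)
    return updated

# The untrained "model" has no opinion about which reply is better.
preferences = {"polite reply": 0.5, "rude reply": 0.5}

# Human raters score the polite reply highly and the rude one poorly.
human_scores = {"polite reply": 1.0, "rude reply": 0.0}

# After several rounds of feedback, the polite reply dominates.
for _ in range(5):
    preferences = update_preferences(preferences, human_scores)

best = max(preferences, key=preferences.get)
```

The point the critics make is visible even in this toy: the feedback only reshapes which outputs the model prefers to surface; it says nothing about what the underlying system "is" beneath that preference, which is the mask-over-the-Shoggoth joke.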

@TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”

Comparing an A.I. language model to a Shoggoth, @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.

“I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”

The Shoggoth image caught on, as A.I. chatbots grew popular and users began to notice that some of them seemed to be doing strange, inexplicable things their creators hadn’t intended. In February, when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.

Eventually, A.I. enthusiasts extended the metaphor. In February, the Twitter user @anthrupad created a version of a Shoggoth that had, in addition to a smiley-face labeled “R.L.H.F.,” a more humanlike face labeled “supervised fine-tuning.” (You practically need a computer science degree to get the joke, but it’s a riff on the difference between general A.I. language models and more specialized applications like chatbots.)

Today, if you hear mentions of the Shoggoth in the A.I. community, it may be a wink at the strangeness of these systems — the black-box nature of their processes, the way they seem to defy human logic. Or maybe it’s an in-joke, visual shorthand for powerful A.I. systems that seem suspiciously nice. If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.

In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new abilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world. And some of them have gotten to play around with the versions of this technology that haven’t yet been sanitized for public consumption — the real, unmasked Shoggoths.

That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)

And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.

Kevin Roose is a technology columnist and the author of “Futureproof: 9 Rules for Humans in the Age of Automation.”

A version of this article appears in print on June 12, 2023, Section B, Page 1 of the New York edition with the headline: “This Meme Symbolizes State of A.I.”