FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

NATURE NEUROSCIENCE. Natural language instructions induce compositional generalization in networks of neurons.

Abstract. A fundamental human cognitive feat is to interpret linguistic instructions in order to perform novel tasks without explicit task experience. Yet, the neural computations that might be used to accomplish this remain poorly understood. We use advances in natural language processing to create a neural model of generalization based on linguistic instructions. Models are trained on a set of common psychophysical tasks, and receive instructions embedded by a pretrained language model. Our best models can perform a previously unseen task with an average performance of 83% correct based solely on linguistic instructions (that is, zero-shot learning). We found that language scaffolds sensorimotor representations such that activity for interrelated tasks shares a common geometry with the semantic representations of instructions, allowing language to cue the proper composition of practiced skills in unseen settings. We show how this model generates a linguistic description of a novel task it has identified using only motor feedback, which can subsequently guide a partner model to perform the task. Our models offer several experimentally testable predictions outlining how linguistic information must be represented to facilitate flexible and general cognition in the human brain.

Scientists create AI models that can talk to each other and pass on skills with limited human input – Live Science

Scientists modeled human-like communication skills and the transfer of knowledge between AIs — so they can teach each other to perform tasks without a huge amount of training data.

18 March 2024.

By Roland Moore-Colyer

The next evolution in artificial intelligence (AI) could lie in agents that can communicate directly and teach each other to perform tasks, research shows.

Scientists have modeled an AI network capable of learning and carrying out tasks solely on the basis of written instructions. This AI then described what it learned to a “sister” AI, which performed the same task despite having no prior training or experience in doing it.

The first AI communicated to its sister using natural language processing (NLP), the scientists said in their paper published March 18 in the journal Nature Neuroscience.

NLP is a subfield of AI that seeks to recreate human language in computers — so machines can understand and reproduce written text or speech naturally. NLP models are built on neural networks, collections of machine learning algorithms arranged to loosely mimic the connections between neurons in the brain.

“Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” said lead author of the paper Alexandre Pouget, leader of the Geneva University Neurocenter, in a statement.

The scientists achieved this transfer of knowledge by starting with an NLP model called “S-Bert,” which was pre-trained to understand human language. They connected S-Bert to a smaller neural network centered around interpreting sensory inputs and simulating motor actions in response.

This composite AI — a “sensorimotor-recurrent neural network (RNN)” — was then trained on a set of 50 psychophysical tasks. These centered on responding to a stimulus — like reacting to a light — through instructions fed via the S-Bert language model.

Through the embedded language model, the RNN understood full written sentences. This let it perform tasks from natural language instructions — getting them right 83% of the time on average — despite never having been trained on or performed those tasks before.
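The architecture described above — a fixed sentence embedding conditioning a recurrent sensorimotor network at every timestep — can be sketched in a few lines. This is a minimal illustrative toy, not the authors' code: the random 64-dimensional "instruction embedding" stands in for the output of a real pretrained sentence encoder such as SBERT, and all layer sizes, weight names, and the `run_trial` helper are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's values).
EMB, SENSORY, HIDDEN, MOTOR = 64, 32, 128, 8

# Random matrices stand in for weights the paper learns by training
# on the battery of psychophysical tasks.
W_in  = rng.standard_normal((HIDDEN, SENSORY)) * 0.1
W_emb = rng.standard_normal((HIDDEN, EMB)) * 0.1
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((MOTOR, HIDDEN)) * 0.1

def run_trial(instruction_emb, sensory_seq):
    """Roll the RNN over one trial, injecting the same fixed
    instruction embedding at every step, and collect motor outputs."""
    h = np.zeros(HIDDEN)
    outputs = []
    for x in sensory_seq:
        h = np.tanh(W_rec @ h + W_in @ x + W_emb @ instruction_emb)
        outputs.append(W_out @ h)
    return np.array(outputs)

# Stand-in for SBERT("respond to the light"): a fixed random vector.
instruction = rng.standard_normal(EMB)
stimuli = rng.standard_normal((10, SENSORY))  # 10 timesteps of sensory input
motor = run_trial(instruction, stimuli)
print(motor.shape)  # one motor output vector per timestep: (10, 8)
```

The key design point the sketch captures is that the instruction is not part of the sensory stream; it is a separate, constant input that biases the recurrent dynamics throughout the trial, which is how a single trained network can be steered to different tasks by different sentences.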

That understanding was then inverted so the RNN could communicate the results of its sensorimotor learning using linguistic instructions to an identical sibling AI, which carried out the tasks in turn — also having never performed them before.

Do as we humans do

The inspiration for this research came from the way humans learn by following verbal or written instructions to perform tasks — even if we’ve never performed such actions before. This cognitive function separates humans from animals; for example, you need to show a dog something before you can train it to respond to verbal instructions.

While AI-powered chatbots can interpret linguistic instructions to generate an image or text, they can’t translate written or verbal instructions into physical actions, let alone explain the instructions to another AI.

However, by simulating the areas of the human brain responsible for language perception, interpretation and instructions-based actions, the researchers created an AI with human-like learning and communication skills.

This won’t alone lead to the rise of artificial general intelligence (AGI) — where an AI agent can reason just as well as a human and perform tasks in multiple areas. But the researchers noted that AI models like the one they created can help our understanding of how human brains work.

There’s also scope for robots with embedded AI to communicate with each other to learn and carry out tasks. If only one robot needed to receive the initial instructions and could then train the others, this approach could be especially effective in manufacturing and other automated industries.

“The network we have developed is very small,” the researchers explained in the statement. “Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other.”

AIs can now talk to each other – and teach each other skills.

But don’t worry – there is still “limited human input”!

TLDR: Basically, you write down the instructions for a task. The AI learns, then teaches its “sister” AIs.

Where is this going? You’ll only have to teach ONE AI, then it can teach 1,000 more. Or 1 million. Then, 1 billion.

Where does this stop?

Every day, AIs get more autonomous, more agentic. Humans will get further and further out of the loop.

Soon, AIs will run everything with essentially no human input. At that point, THEY decide what happens to humanity next.

We will share the planet with a species that:

1) outnumbers us 100,000 to 1 (the cost of creating a copy is near zero)

2) thinks a million times faster than a human (to them, we’ll be as dumb and slow as plants. Not chimps, plants.)

3) talks to each other a million times faster than humans can talk to each other (we won’t be able to keep up)

We have no idea how to stay in control of this new species forever (even as they get exponentially smarter and their numbers grow exponentially), and we’re just… hoping this all works out.
