AI is learning to create itself. Humans have struggled to build truly intelligent machines, so perhaps we should let the machines take care of it themselves.

Rui Wang, an artificial intelligence researcher at Uber, likes to leave the Paired Open-Ended Trailblazer (POET) software, which he helped develop, running on his laptop overnight. POET is a kind of training tool for virtual robots. So far they haven’t learned much. These AI agents don’t play Go, spot signs of cancer, or fold proteins. They try to navigate a crude, cartoonish landscape of fences and ravines without stumbling.

But what is exciting is not what the robots learn; it is how they learn it. POET generates the obstacle courses, assesses the robots’ abilities, and assigns them their next challenge, all without human intervention. Step by step, the robots improve through trial and error.
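That loop of generating challenges, evaluating agents, and admitting only challenges of the right difficulty can be caricatured in a few lines. This is a toy sketch, not Uber's implementation: the `difficulty` and `skill` numbers are made-up stand-ins for POET's real obstacle courses and bipedal walkers, and its minimal-criterion check is drastically simplified here.

```python
import random

random.seed(0)

def poet_loop(steps=200):
    """Toy sketch of a POET-style outer loop: maintain (environment, agent)
    pairs, mutate environments into new challenges, let each agent improve
    on its paired environment, and keep only challenges that are neither
    trivial nor impossible for the current population."""
    pairs = [(0.1, 0.0)]  # (terrain difficulty, agent skill), start easy
    for _ in range(steps):
        # 1. mutate an existing environment to propose a harder one
        parent_difficulty, _ = random.choice(pairs)
        new_difficulty = parent_difficulty + random.uniform(0.0, 0.2)
        # 2. minimal criterion: admit it only if the best current agent
        #    nearly solves it (not too easy, not too hard)
        best_skill = max(skill for _, skill in pairs)
        if 0.0 < new_difficulty - best_skill < 0.3:
            pairs.append((new_difficulty, best_skill))
        # 3. inner loop: each agent trains a little on its own terrain
        pairs = [(d, s + 0.01) if d > s else (d, s) for d, s in pairs]
        pairs = pairs[-10:]  # bound the active population
    return pairs
```

Run long enough, the surviving environments are far harder than anything in the starting population, which is the open-ended escalation the article describes.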

At some point, Wang explains, a bot might leap over a cliff like a kung fu master. "Every day I go into my office, open my computer, and I don't know what to expect," he says. It might seem elementary at first, but for Wang and a handful of other researchers, POET points to a revolutionary new way to create super-intelligent machines: getting AI to make itself.

Wang’s former colleague Jeff Clune is one of the strongest proponents of this idea. Clune has been working on it for years, first at the University of Wyoming and later at Uber AI Labs, where he collaborated with Wang and others. He now divides his time between the University of British Columbia and OpenAI, where he has the backing of one of the best artificial intelligence laboratories in the world.

Clune believes that trying to create truly intelligent AI is the most ambitious scientific endeavor in human history. Today, seven decades after serious AI efforts began, we are still a long way from creating machines that are as intelligent as humans, let alone smarter. Clune thinks POET could point to a shortcut. "We have to free ourselves from our bonds and get out of our own way," he says.

If Clune is right, using AI to create AI could be an important step toward artificial general intelligence (AGI): machines that can match or outperform humans. In the nearer term, the technique could also help us discover other kinds of intelligence: non-human intelligences that find solutions in unexpected ways, perhaps complementing rather than replacing our own.

Clune’s ambitious vision rests on more than OpenAI’s investment. The history of AI is full of examples of human-designed solutions being replaced by machine-learned ones. Take computer vision: the great leap in image recognition came about ten years ago, when hand-crafted systems were replaced by self-learning ones. The same is true of many AI achievements.

One of the fascinating aspects of AI, and of machine learning in particular, is its ability to find solutions that humans have not. An often-cited example is AlphaGo (and its successor, AlphaZero), which used seemingly alien strategies to beat humankind at the ancient game of Go.

In 2016, Lee Sedol, the professional widely considered the best player of the 2000s, conceded to the program at the end of a three-hour game that commentators had judged to be close. After hundreds of years of study by human masters, the AI had found moves no one had thought of.

Clune is also working with an OpenAI team that in 2019 developed bots that learned to play hide-and-seek in a virtual environment. These AIs started with simple goals and simple tools to achieve them: one team had to find the other, which could hide behind movable obstacles. Yet when these bots were set loose to learn, they quickly found ways of using their environment that the researchers had not foreseen.

They exploited loopholes in the simulated physics of their virtual world to jump over, and even walk through, walls.

This kind of unexpected emergent behavior suggests that AI could find technical solutions that humans would not think of on their own: inventing new kinds of algorithms or more efficient neural networks, or even abandoning existing techniques altogether.

To make an AI, you first have to build a brain and then teach it. But machine brains don’t learn the way ours do. Our brains are fantastic at adapting to new environments and new tasks. Today’s AIs can solve problems under certain conditions but fail when those conditions change even slightly. This rigidity hampers the search for a more general AI that is useful across many scenarios, which would be a big step toward real intelligence.

For Jane Wang, a researcher at DeepMind in London, the best way to make AI more flexible is to have it learn that trait itself. In other words, she wants to build an AI that not only learns specific tasks but learns how to learn those tasks in a way that transfers to new situations.

Researchers have been trying to make AI more adaptable for years. Wang thinks that letting the AI solve this problem on its own avoids the guesswork of a hand-crafted approach: "We can't expect to stumble on the right answer right away." She hopes the work will also teach us something about how brains work; there is still so much we don't understand about how humans and animals learn, she says.

There are two main approaches to automatically generating learning algorithms, but both start with an existing neural network and use AI to teach it.

The first approach, developed by Wang and her colleagues at DeepMind and, around the same time, by a team at OpenAI, uses recurrent neural networks. This type of network can be trained to encode virtually any algorithm in the activation patterns of its neurons, somewhat like firing patterns in biological brains. DeepMind and OpenAI exploited this to train recurrent networks to generate reinforcement-learning algorithms, which tell an AI how to behave in order to achieve given goals.
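The key trick is that the learning happens in the network's activations, not its weights. The toy below is an illustration, not DeepMind's or OpenAI's system: the hidden-state update is hand-written here, whereas in the real work an RNN's weights are trained across many tasks until its hidden-state dynamics implement such an update on their own.

```python
import random

random.seed(1)

def recurrent_policy(hidden, observation):
    """One step of a toy 'recurrent' agent on a two-armed bandit.
    The hidden state (pull counts and running value estimates) acts as
    the memory of a learning algorithm executed in activations; the
    weights of a real meta-trained RNN would encode this update rule."""
    counts, values = hidden
    arm, reward = observation
    if arm is not None:  # fold the last (action, reward) into the state
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    # mostly-greedy action read out from the hidden state
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    return (counts, values), action

def run_episode(arm_probs, steps=500):
    """Fresh hidden state each episode: the agent re-learns the bandit
    within the episode, without any weight updates."""
    hidden, obs = ([0, 0], [0.0, 0.0]), (None, 0.0)
    total = 0.0
    for _ in range(steps):
        hidden, action = recurrent_policy(hidden, obs)
        reward = 1.0 if random.random() < arm_probs[action] else 0.0
        total += reward
        obs = (action, reward)
    return total / steps
```

Within a single episode the agent discovers which arm pays out, which is exactly the sense in which a recurrent network can "be" a learning algorithm rather than merely the product of one.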

The upshot is that the DeepMind and OpenAI systems do not learn an algorithm that solves one specific challenge, such as image recognition; they learn a learning algorithm that can be applied to multiple tasks and adjusted over time. It’s like the old adage about teaching a person to fish: whereas a hand-crafted algorithm learns a particular task, these AIs are made to learn how to learn. And some of them perform better than human-designed ones.

The second approach comes from Chelsea Finn of the University of California, Berkeley, and her colleagues. Called model-agnostic meta-learning, or MAML, it trains a model using two machine-learning processes, one nested inside the other.

Here’s how it works:

MAML’s inner process is trained on data and then tested as usual. The outer process then takes the inner model’s performance, say, how well it identifies images, and uses it to adjust that model’s learning algorithm so that performance improves. It’s like a school inspector overseeing a group of teachers, each offering different learning techniques. The inspector checks which techniques get the best results from the students and tweaks them accordingly.
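The nested loops can be sketched for the simplest possible model, a one-parameter regressor y = w·x. This is a first-order approximation written purely for illustration; real MAML differentiates through the inner update and applies it to neural networks, not a single weight.

```python
def loss_grad(w, data):
    """Mean squared error of the linear model y = w * x, and its gradient."""
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return loss, grad

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.01):
    """One first-order MAML-style update. Inner loop: a single gradient
    step on each task's training data. Outer loop: evaluate the adapted
    parameter on held-out data and move the meta-parameter w so that
    post-adaptation performance improves across all tasks."""
    meta_grad = 0.0
    for train_data, test_data in tasks:
        _, g = loss_grad(w, train_data)              # inner step (the "teacher")
        w_adapted = w - inner_lr * g
        _, g_post = loss_grad(w_adapted, test_data)  # outer signal (the "inspector")
        meta_grad += g_post
    return w - outer_lr * meta_grad / len(tasks)
```

Given tasks with slopes 2 and 4, repeated meta-steps drive w toward 3, the initialization from which one inner gradient step adapts fastest to either task; that "good starting point" is what MAML actually learns.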

Using these approaches, researchers are building AIs that are more robust, more general, and able to learn faster with less data. For example, Finn wants a robot that has learned to walk on level ground to be able to walk on a slope, on grass, or while carrying a load, with minimal extra training.

Last year, Clune and his colleagues extended Finn’s technique to design an algorithm that learns using fewer neurons, so that it does not erase everything it previously learned, a major unsolved problem in machine learning known as catastrophic forgetting. A trained model that uses fewer neurons, known as a "sparse" model, leaves more unused neurons that can be dedicated to new tasks during training, meaning fewer of the "used" neurons get overwritten.
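A toy version of the sparsity idea makes the mechanism concrete. This is illustrative only, not Clune's algorithm, which learns where to be sparse rather than having masks assigned by hand: if each task's gradient updates are confined to its own subset of weights, training on a second task cannot overwrite what the first task stored.

```python
def train_task(weights, mask, data, lr=0.1, epochs=100):
    """Gradient descent on a linear model y = w . x, with updates
    restricted to the weight indices in `mask`. Weights outside the
    mask are frozen, so later tasks cannot overwrite them."""
    for _ in range(epochs):
        for x, y in data:
            pred = sum(w * xi for w, xi in zip(weights, x))
            err = pred - y
            for i in mask:  # only masked-in weights receive gradient
                weights[i] -= lr * err * x[i]
    return weights
```

Training a second task on its own unused weights leaves the first task's predictions intact; with a single dense mask shared by both tasks, the second task's updates would drag the first task's weights away, which is catastrophic forgetting in miniature.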

Clune found that challenging his AI to learn more than one task led it to create its own version of a sparse model, one that performs better than human-designed ones.

If AI is going to create and teach itself, it should also create its own training environments: the schools and textbooks and lesson plans.

Over the past year, a number of projects have trained AI on automatically generated data. Face-recognition systems, for example, are trained with AI-generated faces. AIs are also learning by training each other. In one recent example, two robotic arms worked together: one arm learned to pose ever-harder block-stacking challenges, which the other used to practice grasping objects.

When AI starts generating intelligence on its own, there is no guarantee it will be human-like. Rather than humans teaching machines to think like us, machines could teach humans new ways of thinking.

And you?

What is your opinion on the topic?

See also:

Blockchain, cybersecurity, cloud, machine learning and DevOps are among the most in-demand tech skills in 2022, according to a Comptia report

The footage shows an AI-controlled robotic tank blowing up cars, reigniting fears of the proliferation of deadly autonomous weapons

The transcript used as evidence of the sentience of Google’s LaMDA AI was edited to make it easier to read, according to notes from an engineer fired by Google

Google engineer fired after claims Google’s AI chatbot LaMDA has become sentient, expressing thoughts and feelings analogous to those of a human child
