Professor Stuart Russell, artificial intelligence specialist and founder of the Center for Human-Compatible AI at the University of California, Berkeley, said that his colleagues are “scared” by their own successes in the field, comparing progress in AI development to the creation of the atomic bomb. Almost all experts believe that machines surpassing humans in intelligence will be invented this century, he said, and he called for urgent regulation of the technology at the international level.
“The AI community has not yet become accustomed to the fact that we have begun to have a very large impact on the real world,” Professor Russell said in an interview with the Guardian. “For most of the history of the field, that was not the case – we just sat in our laboratories developing things and trying to make them work, most often in vain. So the question of real-world impact was completely irrelevant. And we have had to grow up very quickly to catch up.”
Artificial intelligence has begun to penetrate many areas of life, from Internet search engines to banking, and advances in image recognition and machine translation are among the field’s main achievements of recent years. Stuart Russell, who will be delivering a lecture series titled “Living with Artificial Intelligence” on BBC radio this year, believes there is an urgent need for oversight of the development of super-intelligent AI.
According to Russell, giving a powerful AI a narrowly specified objective is too risky when it is applied to real-world problems. For example, it would be dangerous to ask an AI to cure cancer as quickly as possible.
“In this case, it would most likely find a way to induce tumors in all of humanity in order to run millions of experiments in parallel, using all of us as guinea pigs,” the scientist said. “And that is because this is the solution to the problem we gave it. We simply forgot to specify that it may not experiment on humans, may not spend the entire world GDP on its experiments, and a great deal else besides.”
The professor believes there is still a big difference between today’s AI and the kind shown in science fiction films such as Ex Machina, but a future in which machines are smarter than humans is likely.
“I think the time frame ranges from 10 years, in the most optimistic scenario, to several hundred,” he said. “But almost all AI researchers will say it will happen in this century.”
The problem is that AI doesn’t need to be smarter than humans to pose a serious threat to them. You don’t have to look far for examples – consider the social-network algorithms that decide what people read and watch. They already have an enormous influence on people’s perceptions. And AI developers, according to Russell, are frightened by their own success.
“This is a bit like what happened in physics when scientists realized that atomic energy exists, that they can measure the mass of different atoms and understand how much energy would be released if some types of atoms were transformed into others. And then it happened and nobody was ready for it,” Russell explained.
The scientist is convinced that the future of AI lies in developing machines that, like well-trained servants, defer to people on every question rather than pursuing goals of their own. Such systems should assume that different people may have different – and sometimes conflicting – preferences, and that those preferences change over time. He calls for a code of conduct for developers and organizations to ensure AI impartiality, and for the EU ban on machines impersonating humans to be extended to the whole world.
Within five years, AI algorithms will be able to “read legal documents” and “give medical advice,” said Sam Altman, co-founder and president of the non-profit organization OpenAI. Within ten, machines will work on assembly lines en masse and may even run companies. Later still, AI will do almost everything, including making scientific discoveries. As a result, AI will create “phenomenal abundance,” but at the same time the cost of labor will drop to almost zero.