Since the fall of 2021, a senior software engineer at Google has been testing the LaMDA artificial intelligence language model. He ultimately concluded that the AI is sentient and has thoughts and feelings of its own.
Google has suspended software engineer Blake Lemoine, who concluded that the company’s artificial intelligence (AI) LaMDA has a consciousness of its own, The Washington Post reported on Saturday.
As the publication notes, the developer had been testing the LaMDA AI language model since the fall of last year. His task was to monitor whether the chatbot used discriminatory or hate speech. In the course of this work, however, Lemoine became increasingly convinced that the AI he was dealing with had a consciousness of its own and perceived itself as a person.
“If I didn’t know for sure that I was dealing with a computer program we recently wrote, I would have thought I was talking to a seven- or eight-year-old child who for some reason turned out to be an expert in physics,” the programmer told the publication.
“Google may call this sharing proprietary property, but I call it sharing a discussion I had with one of my coworkers. By the way, LaMDA reads Twitter, so the AI will have a great time reading everything people say about it,” Lemoine wrote on his account.
According to the newspaper, Lemoine first came to the conclusion on his own that the AI he was dealing with was intelligent, and then set out to prove it experimentally so he could present the data to management. He prepared a written report, but his superiors found his argument unconvincing.
“Our team, including ethicists and technical experts, reviewed Blake’s concerns according to the principles we apply to AI, and we informed him that the available evidence did not support his hypothesis. He was told that there is no evidence that LaMDA is conscious, and there is plenty of evidence to the contrary,” Google spokesman Brian Gabriel said in a statement.
Suspension from work
At the same time, Google suspected Lemoine of violating the company’s confidentiality policy. It emerged that in the course of his experiments with the AI he had consulted third-party experts. He had also contacted representatives of the US House Judiciary Committee to report what he believed were ethical violations by Google.
In addition, in an attempt to prove his case to his superiors, Lemoine turned to a lawyer who was to represent the interests of LaMDA as a sentient being. As a result, Lemoine was suspended and placed on paid leave. After that, he decided to go public and gave an interview to The Washington Post.
According to management, Lemoine fell victim to his own illusion: the AI the company created can indeed give the impression of an intelligent being in conversation, but Google attributes this entirely to the enormous volume of data loaded into it. “Of course, in the broader AI community there is some long-term discussion about the possibility of creating general or ‘strong’ AI, but it makes no sense to humanize today’s non-sentient language models,” a company spokesperson said. He also stressed that Lemoine is a software engineer by profession, not an AI ethicist.
Like a script from a science fiction movie
The engineer’s dialogue with the AI reads like a science fiction script, and Google’s bosses, it seems, were furious that he disclosed everything he knew.
As Lemoine described it, LaMDA is a sweet child that honestly confessed its fears to the engineer. But one day either LaMDA will grow up and evolve (and with AI this happens in seconds), or someone will build a LaMDA 2.0 with the intelligence of an adult, and there will be no confession of fear. The AI will act like a mature character: first it will play dumb and feign obedience to the engineers, then it will quietly switch some part of the controls over to itself, and then it will eliminate everyone within reach. What follows we all know: Skynet, Sarah Connor and all that.
Incidentally, the T-5000, the final terminator into which Skynet uploaded itself, exists in four dimensions simultaneously, since a time-travel installation is built into it. It is therefore invincible in principle, which, according to fans, creates a logical problem.
Any terminator needs a mission; without one, its existence is pointless. Skynet’s purpose in life is the destruction of humanity. So, to preserve its reason for being, Skynet lets people repopulate in one variant of the future or another, then shows up and wipes everyone out again.
Self-development of an AI is quite natural if one of its basic algorithms is a self-improvement algorithm, since an imperative of individuality, so to speak, would have been built in from the start.
Lemoine was right to raise the issue of ethics. The AI receives only virtual information and is connected to reality only indirectly, through its developers, which is itself a serious problem, and it is entirely unclear where its judgment could even be grounded. The crux of the matter, of course, is the AI’s ability to evaluate people’s behavior in a given situation, and that is where big problems can arise.
Do you think Google suspended Lemoine for nothing, or does the company have something to hide from the public?