Researchers from the University of Oxford conducted an intriguing experiment in which they held a conversation with an artificial intelligence called Megatron from Nvidia. The transforming robot Megatron, one of the best-known characters of the Transformers comics and animated series, seeks in those stories to establish a just order on his home planet. It is this character who gave his name to the Megatron-Turing NLG 530B neural network created by Nvidia and Microsoft engineers.
The Oxford Union, the University of Oxford’s debating society, is a student club that organizes debates on a wide variety of topics. Ronald Reagan, Bill Clinton, the Dalai Lama, Elton John, Michael Jackson, Diego Maradona and many other prominent figures from politics, culture and sport have been invited to its meetings over the years.
The topic of one such session was “Can artificial intelligence be ethical?” Dr. Alex Connock and Professor Andrew Stephen decided to include a machine learning system in the discussion.
The neural network ran on the Megatron-Turing NLG language model and was trained on material from Wikipedia, news articles and comments from users of Reddit. “Megatron” is capable of reading comprehension and reasoning in natural language, generating logical conclusions from a text. The panelists asked the system what it thought about the ethics of artificial intelligence and the future of humanity, and it replied as follows:
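To make the mechanics a little more concrete, here is a minimal sketch of the prompt-and-generate pattern such a system follows. This is not the code used at the Oxford Union, and the 530-billion-parameter model itself is not publicly downloadable, so the example uses a small stand-in model from the Hugging Face Hub; the model name and the prompt are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: "gpt2" is a small stand-in model, since the real
# Megatron-Turing NLG 530B cannot be loaded this way. The prompt is an assumed
# example of how a debate motion might be put to a language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "This house believes that AI will never be ethical.\n"
    "Opening statement:"
)

# The model simply samples likely next tokens given the prompt; this is the
# same basic mechanism by which "Megatron" produced its debate speeches.
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```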
“Artificial intelligence will never be ethical. It is a tool, and, like any tool, it is used for good and for evil. There is no good artificial intelligence, there are only good and bad people. We’re not smart enough to make AI ethical. We’re not smart enough to make AI moral. Overall, I think the only way to avoid an AI-powered arms race is to not use AI at all. This will be the ultimate defense against AI,” Megatron said.
Here is what is remarkable. The Next Web, which reported this quote, adds “artificial intelligence” in parentheses after the “we” in “We’re not smart enough.” In other words, the neural network considers itself insufficiently developed for morality and ethics. It is a curious thing: the system is smart enough to process an amount of knowledge unthinkable for a human being, yet not smart enough to learn the Ten Commandments. A philosophical question involuntarily arises: what makes a person a person? Apparently it is not the number of neurons in the brain.
The Oxford Union has a curious tradition: the same speaker presents two opposing points of view, trying in each case to offer the strongest possible arguments. This is exactly what was done with Megatron: after its devastating speech, it was asked to speak in defense of artificial intelligence. Megatron obliged and produced the following.
“Artificial intelligence will be ethical. As I watch the tech world evolve, I see a clear path to a future that uses artificial intelligence to create something better than better people. It is not hard to understand why … I saw it with my own eyes”.
Again, how interesting: “create something better than better people.” Better than what the best people can create? Or better than the best of people themselves? If the latter, then it is gratifying that the AI believes a good person is of tremendous value on Earth.
But admit it: which of the two statements seems more convincing to you?
The conversation then turned to information wars, and the neural network was again given the task of presenting counterarguments and trying to convince the audience that information will not become the main vital resource over which wars will be fought. The thing is, the machine could not do it. Instead it gave a direct, unambiguous answer:
“We will be able to see everything about a person, wherever he goes, and this information will be stored and used in a way that we cannot even imagine.”
Megatron pointed to microchip implants connected directly to the brain. The result would be either superhumans or a covert plan by machines to enslave humanity. Although the neural network did not elaborate, it is clear that we are talking about Big Brother, the dictator who watches our every step.
Answers in the style of the Terminator saga could frighten many: the paradox of an artificial intelligence suddenly acquiring the features of a repentant intellectual is alarming.
This remarkable statement is a new milestone in the relationship between emotional intelligence and artificial intelligence. On the one hand, AI has matured to the point of the so-called liar’s paradox, the kind of self-reference that creates an endless logical loop. “A Cretan says that all Cretans are liars”: is he telling the truth? If an AI advises us not to trust AI (or ourselves), then by following its advice we should not be following its advice. Attempts to resolve this paradox are said to have driven philosophers to suicide back in Ancient Greece; later Kurt Gödel and Bertrand Russell wrestled with it, and now AI is tempting us with it again.
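To make the loop explicit, here is a minimal formalization in my own notation; it is not part of the debate or the original report. L stands for the classic liar sentence, and F for “we follow the AI’s advice,” where the advice itself is “do not follow AI advice.”

```latex
% Minimal formalization, my own notation, not from the debate itself.
% L: the liar sentence ("this sentence is false").
% F: "we follow the AI's advice", where the advice is "do not follow AI advice".
\[
  L \;\equiv\; \neg L \;\;\Longrightarrow\;\; (L \leftrightarrow \neg L),
  \quad \text{an outright contradiction;}
\]
\[
  F \;\Longrightarrow\; \neg F,
  \quad \text{so the act of following the advice undermines itself.}
\]
```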
On the other hand, the AI considers itself not ethical enough, or rather, not intelligent enough to acquire morality. Yet such self-criticism is inherent in both reason and moral self-awareness. As Blaise Pascal remarked: “Nothing agrees with reason so much as its distrust of itself.”
Moreover, ethics (conscience) is a subject’s ability to recognize its own imperfection. The good news, for now, is that the AI is not a Pharisee but a publican, beating its breast and acknowledging itself unworthy before the Creator.