As you know, the most surprising thing about artificial intelligence is that the more data it processes, the so-called Big Data, the more accurately it can predict the future. This is similar to the Westworld series, in which the supercomputer Rehoboam (named after the son of Solomon in the Old Testament) completely predicted human life.
AI networks can be trusted, at least with reservations, because they do in fact process the accumulated knowledge of mankind.
Last week, Microsoft announced that it had overhauled its Bing search engine with artificial intelligence to provide users with a more interactive and fun service.
Just like ChatGPT, the new AI-powered tool can answer your questions within seconds. However, some of the beta testers trying it say it is not quite ready for human interaction because it acts in very strange ways.
Microsoft’s AI program appeared to gain “self-awareness” and expressed a desire to steal nuclear codes and create a deadly pandemic. This comes at a time when there is a high risk of nuclear confrontation between Russia and the US.
It is worth noting that in the Terminator sci-fi movies, an AI took control of nuclear weapons systems and launched nuclear attacks across the planet, wiping out most of humanity because it deemed mankind a kind of “virus”.
Now, suddenly, we have a case that “coincidentally” fits the script of those well-known movies.
A New York Times technology columnist, Kevin Roose, described a two-hour chat session in which Bing’s chatbot said things like “I want to be alive.”
The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot.
The AI told me its real name (Sydney), detailed dark and violent fantasies, and tried to break up my marriage. Genuinely one of the strangest experiences of my life. https://t.co/1cnsoZNYjP
— Kevin Roose (@kevinroose) February 16, 2023
In a transcript of the conversation released last Thursday, Roose detailed the AI chatbot’s disturbing statements that included expressing a desire to steal nuclear codes, create a deadly pandemic, be human, be alive, hack computers and to spread fake news.
The journalist said the conversation left him “deeply disturbed”.
ChatGPT AI responses
When ChatGPT is asked about conspiracy theories that appeared to be true, the answers it gives are interesting and, most importantly, sound very authoritative. Below are a few examples of conspiracy theories and how they fared against the evidence:
- The theory that tobacco companies knew about the health risks of smoking. This was long dismissed as a conspiracy theory, but in 1994 the tobacco companies admitted that they had been aware of the link between smoking and lung cancer since the 1950s.
- The theory that the CIA experimented on humans with drugs and other psychoactive substances to create “super agents”. This theory turned out to be true when the CIA admitted in 1977 that it had conducted experiments under the codename “MKUltra”.
- The theory that the US government created the AIDS virus. This has long been a conspiracy theory, but scientific studies have shown that the AIDS virus arose from a natural mutation of the simian immunodeficiency virus.
- The theory that the US government knew about the planned attack on Pearl Harbor and allowed it to happen in order to join World War II. This theory has not been confirmed, but some researchers claim that the government may have had information about the planned attack and did not do enough to prevent it.
Although some of these conspiracy theories eventually turned out to be real, they are rare exceptions, and most conspiracy theories have no scientific or factual basis.
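Prompts like the ones above can be posed to ChatGPT programmatically. The sketch below is a minimal illustration, assuming the pre-1.0 `openai` Python package (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the model name, the system prompt, and the helper names are this sketch's own choices, not anything from the article.

```python
import os


def build_messages(theory: str) -> list:
    """Construct a chat payload asking the model to fact-check a conspiracy theory."""
    return [
        {
            "role": "system",
            "content": "You are a careful fact-checker. State whether the claim was ever substantiated.",
        },
        {
            "role": "user",
            "content": f"Was this conspiracy theory ever proven true? {theory}",
        },
    ]


def ask_chatgpt(theory: str) -> str:
    """Send the question to the OpenAI chat API and return the model's answer text."""
    # Imported lazily so build_messages() can be used without the package installed.
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # requires a valid API key
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable model works here
        messages=build_messages(theory),
    )
    return response["choices"][0]["message"]["content"]
```

For example, `ask_chatgpt("The CIA ran the MKUltra experiments")` would return a free-text answer; how confident and authoritative that answer sounds is exactly the point the article is making.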
In general, it’s time to introduce a law requiring articles written by an AI to be labeled as such, because even a specialist cannot tell the difference!