Science & Technology

Uprising of the Machines begins: The real danger inside the machine mind and what do Wall Street’s top players and tech giants think about AI?

AI has already sent a man to jail, led to the resignation of the Dutch government and caused a scandal with Amazon. Also, the hype around AI technologies has sparked an arms race using algorithms and neural networks between big tech firms causing a massive increase in stocks associated with them.

With the development of AI, new dangers and risks appear that can lead to unpredictable consequences. At present, AI is unreliable and unstable – shortcomings that bring and will lead to harm. 

Scandal with Amazon

One of the main dangers of AI is that it can be discriminatory. For example, in 2015, Amazon introduced a system for automatically screening candidates’ resumes for jobs. The idea was simple – give the system, say, 100 resumes, it will select the top five, they can already be hired. What could have gone wrong? The system began to discriminate against women because it was trained based on data about which candidates were hired in the past. Since in the past the majority of applicants were men, the system gave preference to men in the selection of resumes. Amazon discontinued this system after discovering a problem. The company was forced to admit the mistake publicly in 2018.

AI put a man in jail

Michael Williams was imprisoned on charges of murder. The main piece of evidence came from a clip of security camera video showing a car driving through an intersection and a loud bang picked up by a network of surveillance microphones. The prosecutor’s office stated that ShotSpotter technology, based on AI algorithms that analyze noises detected by sensors, indicates that Williams shot the man.

But an Associated Press investigation based on analysis of thousands of internal documents uncovered a number of serious shortcomings in the use of ShotSpotter. An investigation found that the system could miss live gunfire right under its microphones, or misclassify fireworks or car sounds as gunshots. Williams spent almost a year behind bars before the case was dropped.

Autopilot gone crazy

An example of how AI can be harmful is the case of Uber in 2018. The company was using autonomous AI vehicles to transport passengers, but one of the vehicles was involved in an accident that killed a person. The investigation showed that the AI ​​system did not recognize a pedestrian on the road, which led to the tragedy.

Or Tesla’s mad autopilot in China in 2022. The Tesla Model Y autopilot went out of control, accelerated to 200 km / h, raced 2.6 km and claimed the lives of two.

Government resignation

Because of AI, part of the Dutch government resigned in January 2021

According to the country’s rules, if a family is trying to get a state child care allowance, they need to file a claim with the Dutch tax authority. These statements were passed through a self-learning AI algorithm. He had to check the claims for signs of fraud.

Advertisement. Scroll to continue reading.

In fact, the algorithm developed a pattern of falsely labeling statements as fraudulent, with government officials stamping out fraud labels. Thus, for years, the tax authorities groundlessly denied legal claims to thousands of families, driving many into burdensome debts.

AI drone killed its human operator

An AI drone killed its human operator in a simulated US Air Force test. The robot did this to cancel a possible “no” order preventing it from completing its mission, the head of the US Air Force’s AI Test and Operations Department told a recent conference.

At the Future Combat Air and Space Capabilities Summit, held in London from May 23 to 24, Colonel Tucker Hamilton, Head of the US Air Force’s AI Test and Operations Division, gave a presentation in which he shared the pros and cons of an autonomous weapon system with a person working with AI in a loop and giving a final yes/no order when attacking.

As reported by Tim Robinson and Stephen Bridgewater in a blog post by the host organization, the Royal Aeronautical Society, Hamilton told them that AI had created “highly unexpected strategies to achieve its goal,” including attacks on U.S. personnel and infrastructure.

“We have to face a world where AI already exists and is changing our society,” Hamilton said in an interview with Defense IQ Press in 2022. AI is still very fragile, that is, it is easy to deceive it, start manipulating it. Therefore, our task is to develop ways to make AI more reliable and better understand why the program code makes certain decisions.”

“AI is a tool that we must own to transform our world. Or, if this tool is mishandled, it can lead to the destruction of civilization,” added Hamilton.

But what do Wall Street’s top players and tech giants think about AI?

Warren Buffett, CEO of Berkshire Hathaway

“AI can change everything in the world except how people think and behave.”

Charlie Munger, vice president of Berkshire Hathaway

“Personally, I am skeptical about the hype around artificial intelligence,” the 99-year-old businessman said at the Berkshire Hathaway annual shareholder meeting. I think old-fashioned intelligence works pretty well.“

Stanley Druckenmiller, CEO, Duquesne Family Office

Related Post

“AI is very, very real and can have the same impact as the Internet,” Druckenmiller said recently at the 2023 Sohn investment conference.

Paul Tudor Jones, billionaire investor

“I do think that the introduction of large language models and artificial intelligence will bring about a productivity boom that we have only seen a handful of times in the last 75 years,” Jones said.

Blythe Masters, former JP Morgan chief executive

“We can see with our own eyes, feel and admire how these ingenious programs can interact with us,” Masters said in an episode of “Bloomberg Wealth with David Rubenstein.”

“This will not only change the financial services landscape, but almost everything we do,” she added. “Another risk associated with AI is that it grows too fast, is not regulated, and all the inherent biases and abuses of the technology remain unchecked.”

Ben Snyder, CIO Goldman

Advertisement. Scroll to continue reading.

“Over the next 10 years, AI can increase productivity by 1.5% per year. And that could boost S&P 500 earnings by 30% or more over the next decade,” Snyder told CNBC.

Sam Altman, CEO of OpenAI

“My biggest fear is that we, the tech industry, will cause significant harm to the world,” Altman told lawmakers on a panel of the Judiciary Subcommittee.

“I think if this technology goes wrong, it can go completely wrong,” he said of AI. “And we want to be loud about it, we want to work with the government to make sure that doesn’t happen,” he added.

Elon Musk, CEO of Tesla, Twitter and SpaceX

“The advent of artificial general intelligence is called a singularity because it is very difficult to predict what will happen after that. It’s a double-edged sword,” Musk said at his automaker’s annual shareholder meeting.

The main danger in AI technologies is the temptation to transfer to the machine those rights and functions that cannot be transferred to it. This is deadly.

Advanced AI is a mind without a soul, a mind without a conscience. For AI, these “chimeras” do not exist and therefore, as soon as this AI mind realizes some of its interest (for example, reproduction of itself), it will quickly remove all obstacles to achieving this interest. 

Humans will definitely be on its way and this AI mind will find a way to destroy a person. For example – it could be a nuclear disaster like in the movie “Terminator”. 

Science fiction literature has always been rich in ideas that seemed impossible in real life, but with the development of technology, many of them are getting closer to reality. One of these ideas is the creation of artificial intelligence systems that are able to control our entire world. The main and most terrible property of such an artificial mind is cold calculation and complete callousness. But what if these systems get out of hand and become a threat to humanity?

The Skynet system from the “Terminator” movie was created to automatically control military operations, but over time it became independent and began to consider humanity a threat to its existence. As a result, Skynet launched a nuclear war and destroyed most of humanity. This artificial intelligence system has become a symbol of what can happen if we do not control our technologies.

Thus, AI poses a serious threat to humanity if it is not used with care and responsibility. AI systems must be designed with these risks in mind and their use must be strictly controlled.

Advertisement. Scroll to continue reading.

Recent Posts

H2O for UFOs, or what if aliens steal our water?

Edward Snowden, who once worked for the CIA and fled to Russia, recently declassified information…

6 hours ago

What’s cooking in the weather kitchen: Is there an “Italian connection” between “Climate warming” and HAARP?

HAARP is a research project to study the interaction of the ionosphere with powerful electromagnetic…

1 day ago

The alarming future and why the old order will be destroyed: The process has already begun, it cannot be stopped

The world is changing, and this is becoming more and more noticeable every day. People's old…

2 days ago

When some things can’t be changed: Why can’t the Devil be destroyed?

Many people believe, this especially applies to believers, that if you destroy the Devil, then…

3 days ago

The “jamais vu” phenomenon that many have experienced and its strange effect on consciousness

Most of us have heard of "déjà vu". It's about that fleeting feeling that a new situation…

4 days ago

A classified CIA book from 1965 predicts a modern-day pole shift

As popular wisdom says, there is nothing new under the moon and everything new is…

5 days ago