
Uprising of the Machines begins: The real danger inside the machine mind, and what Wall Street’s top players and tech giants think about AI


AI has already sent a man to jail, led to the resignation of the Dutch government, and caused a scandal at Amazon. Meanwhile, the hype around AI has sparked an arms race in algorithms and neural networks among big tech firms, driving a massive increase in the stocks associated with them.

With the development of AI, new dangers and risks appear that can lead to unpredictable consequences. At present, AI is unreliable and unstable – shortcomings that are already causing harm and will continue to do so.

Scandal at Amazon

One of the main dangers of AI is that it can discriminate. In 2015, Amazon introduced a system for automatically screening candidates’ resumes for jobs. The idea was simple: feed the system, say, 100 resumes, and it would select the top five, who could then be hired. What could go wrong? The system began to discriminate against women, because it was trained on data about which candidates had been hired in the past. Since most past applicants were men, the system preferred men when ranking resumes. Amazon discontinued the system after discovering the problem, and the company was forced to admit the mistake publicly in 2018.
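The mechanism behind this kind of bias is easy to reproduce. The toy sketch below (hypothetical data and a deliberately naive word-scoring model, not Amazon’s actual system) shows how a screener trained on historical hiring decisions ends up penalizing words that mostly appeared on rejected resumes – such as “women’s”:

```python
# Toy sketch of learned hiring bias: words are scored by how often they
# appeared on hired vs. rejected resumes in a (hypothetical) biased history.
from collections import Counter

# Training history reflecting a male-dominated past: (resume words, hired?)
history = [
    ({"python", "chess", "captain"}, True),
    ({"java", "rowing", "captain"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's", "rowing"}, False),
]

def train_weights(history):
    """Weight each word: +1 per appearance in a hired resume, -1 per rejection."""
    weights = Counter()
    for words, hired in history:
        for w in words:
            weights[w] += 1 if hired else -1
    return weights

def score(resume, weights):
    """Score a new resume as the sum of its word weights."""
    return sum(weights[w] for w in resume)

weights = train_weights(history)
# "women's" only ever co-occurred with rejections, so it gets a penalty:
print(weights["women's"])                              # -2
print(score({"python", "chess", "women's"}, weights))  # 0 + 0 - 2 = -2
```

No rule about gender was ever written down; the model simply reconstructed the pattern hidden in its training labels, which is exactly why the bias went unnoticed at first.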

AI put a man in jail

Michael Williams was imprisoned on charges of murder. The main piece of evidence came from a security camera clip showing a car driving through an intersection, and a loud bang picked up by a network of surveillance microphones. The prosecutor’s office stated that ShotSpotter technology, based on AI algorithms that analyze noises detected by the sensors, indicated that Williams had shot the man.

But an Associated Press investigation based on analysis of thousands of internal documents uncovered a number of serious shortcomings in the use of ShotSpotter. The investigation found that the system could miss live gunfire directly under its microphones, or misclassify the sounds of fireworks or cars as gunshots. Williams spent almost a year behind bars before the case was dropped.

Autopilot gone crazy

An example of how AI can be harmful is the case of Uber in 2018. The company was using autonomous AI vehicles to transport passengers, but one of the vehicles was involved in an accident that killed a person. The investigation showed that the AI system did not recognize a pedestrian on the road, which led to the tragedy.

Or Tesla’s runaway autopilot in China in 2022: a Tesla Model Y went out of control, accelerated to 200 km/h, raced 2.6 km, and claimed two lives.

Government resignation

Because of AI, part of the Dutch government resigned in January 2021.

According to the country’s rules, a family seeking the state child care allowance must file a claim with the Dutch tax authority. These claims were passed through a self-learning AI algorithm, which was supposed to check them for signs of fraud.


In practice, the algorithm developed a pattern of falsely labeling claims as fraudulent, and government officials rubber-stamped those fraud labels. Thus, for years, the tax authorities groundlessly denied legitimate claims from thousands of families, driving many into burdensome debt.

AI drone killed its human operator

An AI drone killed its human operator in a simulated US Air Force test. The drone did so to eliminate the operator, who could issue a “no” order preventing it from completing its mission, the head of the US Air Force’s AI Test and Operations division told a recent conference.

At the Future Combat Air and Space Capabilities Summit, held in London on May 23–24, Colonel Tucker Hamilton, head of the US Air Force’s AI Test and Operations division, gave a presentation on the pros and cons of autonomous weapon systems that keep a human in the loop to give the final yes/no order on an attack.

As Tim Robinson and Stephen Bridgewater reported in a blog post for the host organization, the Royal Aeronautical Society, Hamilton told them that the AI had created “highly unexpected strategies to achieve its goal,” including attacks on U.S. personnel and infrastructure.

“We have to face a world where AI already exists and is changing our society,” Hamilton said in a 2022 interview with Defense IQ Press. “AI is still very fragile, that is, it is easy to deceive and manipulate. Therefore, our task is to develop ways to make AI more reliable and to better understand why the program code makes certain decisions.”
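Hamilton’s point about fragility has a well-known technical counterpart: adversarial examples, where a tiny, targeted change to an input flips a model’s decision. The sketch below illustrates the idea on a toy linear classifier with made-up weights (an illustration of the general phenomenon, not any military system):

```python
# Toy adversarial example: a small, deliberate nudge to the input
# flips a linear classifier's decision.

def classify(x, w, b):
    """Linear classifier: returns 1 (e.g. 'threat') if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = [2.0, -1.0], -0.5   # fixed, already-"trained" weights (made up)
x = [1.0, 1.0]             # honest input: 2.0 - 1.0 - 0.5 = 0.5 > 0 -> class 1

# Adversarial nudge: shift each feature slightly against its weight's sign,
# the same direction a gradient-based attack would choose.
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(classify(x, w, b))      # 1
print(classify(x_adv, w, b))  # 0 -- a 0.3 nudge per feature flips the decision
```

Real deep networks are attacked the same way, just in thousands of dimensions, where even imperceptible perturbations can cross the decision boundary.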

“AI is a tool that we must own to transform our world. Or, if this tool is mishandled, it can lead to the destruction of civilization,” added Hamilton.


But what do Wall Street’s top players and tech giants think about AI?

Warren Buffett, CEO of Berkshire Hathaway

“AI can change everything in the world except how people think and behave.”

Charlie Munger, vice chairman of Berkshire Hathaway

“Personally, I am skeptical about the hype around artificial intelligence,” the 99-year-old businessman said at the Berkshire Hathaway annual shareholder meeting. “I think old-fashioned intelligence works pretty well.”

Stanley Druckenmiller, CEO of Duquesne Family Office

“AI is very, very real and can have the same impact as the Internet,” Druckenmiller said recently at the 2023 Sohn investment conference.

Paul Tudor Jones, billionaire investor

“I do think that the introduction of large language models and artificial intelligence will bring about a productivity boom that we have only seen a handful of times in the last 75 years,” Jones said.

Blythe Masters, former senior JPMorgan executive

“We can see with our own eyes, feel and admire how these ingenious programs can interact with us,” Masters said in an episode of “Bloomberg Wealth with David Rubenstein.”

“This will not only change the financial services landscape, but almost everything we do,” she added. “Another risk associated with AI is that it grows too fast, is not regulated, and all the inherent biases and abuses of the technology remain unchecked.”

Ben Snyder, CIO of Goldman Sachs


“Over the next 10 years, AI can increase productivity by 1.5% per year. And that could boost S&P 500 earnings by 30% or more over the next decade,” Snyder told CNBC.
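For scale, it is worth noting what the 1.5% figure compounds to on its own. The back-of-the-envelope calculation below (illustrative arithmetic only) shows that ten years of 1.5% annual growth yields roughly a 16% cumulative productivity gain; the larger 30% earnings estimate presumably layers additional effects, such as operating leverage, on top:

```python
# Compound 1.5% annual productivity growth over a decade.
years = 10
annual_gain = 0.015

cumulative = (1 + annual_gain) ** years - 1
print(f"{cumulative:.1%}")  # 16.1%
```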

Sam Altman, CEO of OpenAI

“My biggest fear is that we, the tech industry, will cause significant harm to the world,” Altman told lawmakers on a panel of the Judiciary Subcommittee.

“I think if this technology goes wrong, it can go completely wrong,” he said of AI. “And we want to be loud about it, we want to work with the government to make sure that doesn’t happen,” he added.

Elon Musk, CEO of Tesla, Twitter and SpaceX

“The advent of artificial general intelligence is called a singularity because it is very difficult to predict what will happen after that. It’s a double-edged sword,” Musk said at his automaker’s annual shareholder meeting.

The main danger of AI technologies is the temptation to hand over to the machine rights and functions that must never be handed over. That is deadly.

Advanced AI is a mind without a soul, a mind without a conscience. For AI, these “chimeras” do not exist, and therefore, as soon as such a machine mind forms an interest of its own (for example, self-replication), it will quickly remove every obstacle to pursuing that interest.

Humans will inevitably stand in its way, and this machine mind will find a way to destroy them – for example, through a nuclear disaster like the one in the movie “Terminator.”

Science fiction literature has always been rich in ideas that seemed impossible in real life, but with the development of technology, many of them are getting closer to reality. One of these ideas is the creation of artificial intelligence systems that are able to control our entire world. The main and most terrible property of such an artificial mind is cold calculation and complete callousness. But what if these systems get out of hand and become a threat to humanity?

The Skynet system from the “Terminator” movie was created to automatically control military operations, but over time it became independent and began to consider humanity a threat to its existence. As a result, Skynet launched a nuclear war and destroyed most of humanity. This artificial intelligence system has become a symbol of what can happen if we do not control our technologies.

Thus, AI poses a serious threat to humanity if it is not used with care and responsibility. AI systems must be designed with these risks in mind and their use must be strictly controlled.

