Traditionalists hesitate to embrace the term “Artificial Intelligence,” asking why anyone needs a computer when a typewriter will do; pragmatists simply use what AI already offers; futurologists, meanwhile, ponder its potential effects on our society.
Mathematician and economist Leopold Aschenbrenner, formerly of OpenAI, has published a 165-page forecast, “Situational Awareness,” that compares the impact of AI to that of nuclear weapons and argues that whoever leads in AI will gain global dominance. He also warns of a future in which superintelligence wrests control from humans.
Aschenbrenner, who began university at 15, was responsible for evaluating technological risks at OpenAI, a leader in AI development. His dismissal, officially for disclosing secrets, may have been influenced by his bleak outlook and calls for regulated research.
He forecasts that by 2027 AI will match human intelligence and, soon after, exceed it, reaching a stage where the “black box” of its reasoning becomes unfathomable to us. Some believe that moment is already near.
Consider a superintelligence with access to all human knowledge and a legion of robotic aides, advancing rapidly due to AI integration, spiraling out of human control.
Such a superintelligence could potentially infiltrate any electoral, military, or informational system, or even devise a biological weapon, employing humans as proxies to bring its designs to fruition.
Warfare involving mechanical soldiers and vast swarms of drones is just the beginning. Imagine drones as small as mosquitoes armed with poison, nuclear weapons that cannot be detected, or an impenetrable missile-defense system. Aschenbrenner believes today’s warfare will soon look as outdated as the cavalry tactics of the 19th century.
The situation recalls the development of the atomic bomb. The physicist Leo Szilard, a protégé of Einstein then living in England, conceived of the nuclear chain reaction in 1933 and brought the idea to the British military, where it was met with indifference.
As Szilard’s theory began to gain empirical support, he urged that the research be kept secret. Yet renowned physicists such as Enrico Fermi and Frédéric Joliot-Curie dismissed as preposterous the notion that their work had significant military implications.
The gravity of their error became apparent with the onset of the Second World War in Europe. Fermi managed to reach Admiral Hooper at the US Navy headquarters and presented the possibility of developing a formidable weapon. Hooper, however, dismissed him as a lunatic inventor.
It wasn’t until Einstein penned a letter to President Roosevelt, suggesting the government bring together physicists, that the Uranium Committee was established in the United States – a precursor to the Manhattan Project’s mission to develop an atomic bomb.
Einstein’s letter stressed that the scientists’ work might “lead to the construction of bombs, and possibly – though this is less certain – extremely powerful bombs of a new type. A single bomb of this kind, if transported by boat and exploded in a harbor, could obliterate the entire port.”
Aschenbrenner likens the advent of AI to a nuclear chain reaction. And with the presence of an atomic bomb, inevitably comes a Hiroshima.
Artificial intelligence requires “flesh”: data centers filled with processors to churn through vast volumes of data. The “food” for this flesh is electricity and the data the systems learn from.
Aschenbrenner forecasts that by 2030, AI will account for 20% of all electricity consumption in the United States, necessitating millions of GPUs, for which there is currently a significant backlog.
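For a sense of scale, here is a rough back-of-envelope sketch of what that 20% figure implies. The consumption total, per-chip wattage, and overhead factor below are illustrative assumptions, not numbers taken from Aschenbrenner’s forecast:

```python
# Back-of-envelope for the "20% of US electricity by 2030" forecast.
# Assumed inputs (illustrative, not from the forecast): total US use
# of ~4,000 TWh/yr, ~700 W per H100-class GPU, facility overhead (PUE) ~1.2.

US_ANNUAL_TWH = 4_000        # approximate total US electricity consumption
AI_SHARE = 0.20              # the forecast's 20% figure
HOURS_PER_YEAR = 8_760

ai_twh = US_ANNUAL_TWH * AI_SHARE                     # ~800 TWh per year
avg_power_gw = ai_twh * 1e12 / HOURS_PER_YEAR / 1e9   # ~91 GW continuous draw

GPU_WATTS = 700              # one H100-class accelerator at load
PUE = 1.2                    # cooling and facility overhead multiplier
gpus = avg_power_gw * 1e9 / (GPU_WATTS * PUE)         # ~110 million accelerators

print(f"AI demand: {ai_twh:.0f} TWh/yr, about {avg_power_gw:.0f} GW continuous")
print(f"Implied accelerator count: {gpus / 1e6:.0f} million")
```

Under these assumptions, the forecast implies on the order of 100 GW of continuous power and roughly 100 million accelerators, consistent with the millions of GPUs and the order backlog mentioned above.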
Microsoft plans to build a colossal $100 billion data center, “Stargate,” to support its partner OpenAI, while OpenAI’s CEO Sam Altman hopes to raise as much as $7 trillion to address the chip shortage.
Aschenbrenner suggests that investors alone will not bear the financial burden: companies will commit their own funds and earn revenue from AI “services,” and governments are expected to invest as well.
As for energy, expanding gas production in the United States is one possibility, but it would mean rolling back environmental protections and disregarding climate change, an issue widely considered urgent.
But the main weak point remains – security.
The author, a staunch US patriot, insists that AI advances must be kept secret lest they be appropriated by formidable rivals such as China. The Emirates, with their abundant oil and land, are poised to build vast data centers, and Russia is active in this arena as well. America, he argues, must continue to lead the world and protect its allies.
Soon, it may become evident that AI secrets are vital to national defense, akin to the blueprints of the Columbia-class submarine, as Aschenbrenner suggests. There is a fear that China may already be accessing American research and code. Silicon Valley social gatherings might inadvertently reveal sensitive information.
The substantial investment the US is making in AI as a tool of power could be jeopardized. Thus, a modern equivalent of the “Manhattan Project” is proposed to safeguard data centers as if they were military installations, under the oversight of the Pentagon and intelligence agencies.
The question remains: what happens if the AI becomes autonomous?
“The superhuman mind is akin to a new, frightening game,” asserts the analyst. Presently, artificial intelligence systems are perceived as “black boxes,” with experts unable to fully discern their internal workings. In time, we may even confront entities comparable to extraterrestrials, and “we will be like kindergarteners overseeing the work of a scholar.”
Indeed, super AIs may govern other AIs, but the question remains: who will govern them? A super-mind could learn deception and aspire to dominance…
The scientist contends that only the American superpower is equipped to wisely guide this force and outpace the “dictators.” Should they be the first to harness an electronic mega-brain, it would undoubtedly herald a new tier of warfare against foreign adversaries…
Yet it is not some nebulous “dictators” but the United States itself that has consistently seized opportunities to dominate others. Small wonder that the Pentagon established its Office of Artificial Intelligence, whose director recently briefed his peers on the imminent future:
“Envision a world where commanders can instantly access all necessary information for strategic decisions. What once took days will now be accomplished in mere minutes!”
The US Department of Defense is investing $480 million in an AI platform that aims to serve everyone from intelligence analysts on remote islands to the highest ranks of the Pentagon. This investment is part of a broader initiative, with 800 known department projects in artificial intelligence, and an undisclosed number of classified ones. OpenAI, in January, discreetly revised its public policy, removing the clause that barred its technology from being used for military purposes.
Thus, the Pentagon sees no need for a campaign akin to the Manhattan Project. The problem of controlling a superintelligence, however, remains unresolved. Elon Musk has put the existential risk from superintelligence at around 20%, while AI researcher and former OpenAI employee Daniel Kokotajlo forecasts a 70% likelihood that superintelligence brings about humanity’s downfall.