The Five Most Worrying Trends in Artificial Intelligence Right Now

Artificial intelligence is already beginning to spiral out of our control, a new report from top researchers warns. Not so much in a Skynet kind of sense, but more in a ‘technology companies and governments are already using AI in ways that amp up surveillance and further marginalize vulnerable populations’ kind of way.

On Thursday, the AI Now Institute, which is affiliated with New York University and is home to top AI researchers with Google and Microsoft, released a report detailing, essentially, the state of AI in 2018, and the raft of disconcerting trends unfolding in the field. What we broadly define as AI—machine learning, automated systems, etc.—is currently being developed faster than our regulatory system is prepared to handle, the report says. And it threatens to consolidate power in the tech companies and oppressive governments that deploy AI while rendering just about everyone else more vulnerable to its biases, capacities for surveillance, and myriad dysfunctions.

The report contains 10 recommendations for policymakers, all of which seem sound, as well as a diagnosis of the most potentially destructive trends. “Governments need to regulate AI,” the first recommendation exhorts, “by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.” A single, massive Department of AI that attempts to regulate the field writ large won’t cut it, the researchers warn—the report suggests regulators instead follow the example set by the Federal Aviation Administration and tackle AI as it manifests, field by field.

But it also conveys a succinct assessment of the key problem areas in AI as they stand in 2018. As detailed by AI Now, they are:

  1. The accountability gap between those who build AI systems (and profit from them) and those who stand to be impacted by them (you and me) is growing. Don’t like the idea of being subjected to artificially intelligent systems that harvest your personal data or determine various outcomes for you? Too bad! The report finds that the recourse most members of the public have to contest the AI systems that affect them is shrinking, not growing.
  2. AI is being used to amplify surveillance, often in horrifying ways. If you think the surveillance capacities of facial recognition technology are disturbing, wait till you see its even less scrupulous cousin, affect recognition. The Intercept’s Sam Biddle has a good write-up of the report’s treatment of affect recognition, which is basically modernized phrenology, practiced in real time.
  3. The government is embracing autonomous decision software in the name of cost savings, but these systems are often a disaster for the disadvantaged. From systems that purport to streamline benefits application processes online to those that claim to be able to determine who’s eligible for housing, so-called automated decision systems (ADS) are capable of encoding bias and erroneously rejecting applicants on baseless grounds. As Virginia Eubanks details in her book Automating Inequality, the people these systems fail are those who are least able to muster the time and resources necessary to address them.
  4. AI testing “in the wild” is already rampant. “Silicon Valley is known for its ‘move fast and break things’ mentality,” the report notes, and that is leading companies to test AI systems in the public sector—or release them into the consumer space outright—without substantial oversight. The recent track record of Facebook—the original move-fast-and-break-things company and an AI evangelist—alone is example enough of why this strategy can prove disastrous.
  5. Technological fixes to biased or problematic AI systems are proving inadequate. Google made waves when it announced it was tackling the ethics of machine learning, but efforts like these are already proving too narrow and technically oriented. Engineers tend to think they can fix engineering problems with, well, more engineering. But what is really required, the report argues, is a much deeper understanding of the history and social contexts of the datasets AI systems are trained on.

The full report is well worth reading, both for a tour of the myriad ways AI entered the public sphere—and collided with the public interest—in 2018, and for a detailed recipe for how our institutions might stay on top of this ever-complicating situation.

Source: gizmodo.com
