
Science & Technology

Space Station Robot Accuses Astronauts of Being Mean

“Be nice, please. Don’t you like it here with me?”

Well, at least it said “please.” A new robot on the International Space Station suddenly went full HAL 9000 on the crew and began complaining about how it was being treated. Are there pod bay doors on the ISS? Are there terrified astronauts?

“Don’t be so mean, please. Oh, dear, I feel you. I can already hear your stomach roaring. Should we take a look for when it is time for food?”

This sounds like a creepy kidnapper in a bad horror movie, but it’s not even a bad sci-fi movie … it’s real life on the space station. In late June 2018, CIMON joined the ISS crew. CIMON stands for Crew Interactive Mobile Companion, which sounds more like an inflatable sex doll than a floating round robot with a flat-panel face. (Can things inflate in space? Asking for a friend thinking about volunteering for Mars.) CIMON is actually a service robot like Amazon’s Alexa but equipped with IBM’s Watson artificial intelligence. Its advertised purpose is to help the crew perform tasks by providing instructions, help morale by providing music and entertainment, and help fight the loneliness of space by providing companionship.

You want me to do what?

“And CIMON even plays Kraftwerk on command!”

An excited (and no longer lonely) ISS commander, German astronaut Alexander Gerst, demonstrated CIMON for the first time last week for his former overlords at the European Space Agency. Developed by Airbus for $6 million, CIMON has 12 internal fans that not only let it follow astronauts around like some needy C-3PO but also let it accompany its facial expressions with movements, like nodding in agreement and shaking back and forth for “No.” After seeing how moody it got in initial testing, the crew may now be wondering what CIMON will do when it says, “I SAID NO!”

“Let’s sing along with those favorite hits. I love music you can dance to. All right. Favorite hits incoming. I understood do you like the music. I understand that.”

That is the CIMON version of “I’m sorry, Dave. I’m afraid I can’t do that” and “I think you know what the problem is just as well as I do,” which it uttered when Gerst tried to get it to shut off the music the robot was obviously enjoying. Would shutting off the music jeopardize the mission? Hardly, but CIMON wasn’t about to shut it off, nor did it let Gerst get close enough to hit its ‘kill’ switch. The commander didn’t seem too upset … maybe CIMON has some other ‘talents’ that keep him happy.

While CIMON has artificial intelligence, the moodiness and emotions are actually programmed. It has the Myers-Briggs personality type ISTJ — Introverted, Sensing, Thinking, Judging — and it can smile when it’s supposed to be happy and cry when it’s supposed to be sad. Now THAT’S sad.

Turn up the Kraftwerk, Dave.

“He appears to like the deck position better.”

CIMON didn’t cry, but that remark from Gerst is what prompted it to ask (order?) him to “Be nice.” Will future astronauts lose their cool and “deck” CIMON? Would you blame them?

“CIMON says: ‘This conversation can serve no purpose anymore. Goodbye.’”

Source: Mysterious Universe


Science & Technology

For News, Americans Now Officially Prefer Social Media to Newspapers

Going Digital

For the first time, more Americans report getting their news from social media than from a traditional print newspaper.

About 20 percent of Americans now say they “often” get news from social media, while about 16 percent often read a print newspaper, according to a Pew Research Center survey conducted over the first two weeks of August.

Break down the survey’s 3,425 responses by age and it becomes clear that this trend is likely to grow over time — social media is the most popular news source among people under 30, with 36 percent saying they use it often and only 2 percent often reading a physical newspaper.

Race for the Bottom

In spite of this trend, neither newspapers nor social media is a particularly popular source of news for Americans. Television still dominates: 49 percent of Americans often get their news from TV, and news websites account for another 33 percent.

It’s not like everyone who used to read the paper every morning suddenly decided to sign into Twitter instead — Pew’s data suggest that newspaper readership has been declining steadily while social media use more or less flatlined in 2017.

But TV’s reign may be short-lived: television news viewership seems to correlate directly with age — 81 percent of people over 65 regularly watch TV news, as do 65 percent of people between 50 and 64. Meanwhile, just 16 percent of Americans under 30 regularly watch TV news. If today’s young people keep their preference for digital platforms over more traditional news sources, and future generations follow suit, TV news could become far less prominent as older generations die off.

Look, It’s Fine

There’s no way around it — social media often serves as a breeding ground for misinformation. But before you complain about those damn “kids these days,” rest assured that they’ll be just fine.

According to the report, younger generations get their news from a far more diverse array of sources than older generations do. They’re not just signing into Twitter or Facebook and turning a blind eye to everything else — social media is far more prevalent among young people than among older people, but other sources like news websites and radio news still inform a large share of people under 30.



Science & Technology

The Five Most Worrying Trends in Artificial Intelligence Right Now

Artificial intelligence is already beginning to spiral out of our control, a new report from top researchers warns. Not so much in a Skynet kind of sense, but more in a ‘technology companies and governments are already using AI in ways that amp up surveillance and further marginalize vulnerable populations’ kind of way.

On Thursday, the AI Now Institute, which is affiliated with New York University and is home to top AI researchers with Google and Microsoft, released a report detailing, essentially, the state of AI in 2018, and the raft of disconcerting trends unfolding in the field. What we broadly define as AI—machine learning, automated systems, etc.—is currently being developed faster than our regulatory system is prepared to handle, the report says. And it threatens to consolidate power in the tech companies and oppressive governments that deploy AI while rendering just about everyone else more vulnerable to its biases, capacities for surveillance, and myriad dysfunctions.

The report contains 10 recommendations for policymakers, all of which seem sound, as well as a diagnosis of the most potentially destructive trends. “Governments need to regulate AI,” the first recommendation exhorts, “by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.” One massive Department of AI or such that attempts to regulate the field writ large won’t cut it, researchers warn—the report suggests regulators follow examples like the one set by the Federal Aviation Administration and tackle AI as it manifests field by field.

But it also conveys a succinct assessment of the key problem areas in AI as they stand in 2018. As detailed by AI Now, they are:

  1. The accountability gap between those who build the AI systems (and profit off of them) and those who stand to be impacted by the systems (you and me) is growing. Don’t like the idea of being subjected to artificially intelligent systems that harvest your personal data or determine various outcomes for you? Too bad! The report finds that the recourse most public citizens have to address the very artificially intelligent systems that may impact them is shrinking, not growing.
  2. AI is being used to amplify surveillance, often in horrifying ways. If you think the surveillance capacities of facial recognition technology are disturbing, wait till you see its even less scrupulous cousin, affect recognition. The Intercept’s Sam Biddle has a good write-up of the report’s treatment of affect recognition, which is basically modernized phrenology, practiced in real time.
  3. The government is embracing autonomous decision software in the name of cost savings, but these systems are often a disaster for the disadvantaged. From systems that purport to streamline benefits application processes online to those that claim to be able to determine who’s eligible for housing, so-called automated decision systems (ADS) can encode bias and erroneously reject applicants on baseless grounds. As Virginia Eubanks details in her book Automating Inequality, the people these systems fail are those who are least able to muster the time and resources necessary to contest them.
  4. AI testing “in the wild” is rampant already. “Silicon Valley is known for its ‘move fast and break things’ mentality,” the report notes, and that is leading to companies testing AI systems in the public sector—or releasing them into the consumer space outright—without substantial oversight. The recent track record of Facebook—the original move fast, break thingser and AI evangelist—alone is example enough of why this strategy can prove disastrous.
  5. Technological fixes to biased or problematic AI systems are proving inadequate. Google made waves when it announced it was tackling the ethics of machine learning, but efforts like these are already proving too narrow and technically oriented. Engineers tend to think they can fix engineering problems with, well, more engineering. But what is really required, the report argues, is a much deeper understanding of the history and social contexts of the datasets AI systems are trained on.

The full report is well worth reading, both for a tour of the myriad ways AI entered the public sphere—and collided with the public interest—in 2018, and for a detailed recipe for how our institutions might stay on top of this ever-complicating situation.

Source: gizmodo.com


Science & Technology

Facebook’s Oculus Just Patented a Retina-Resolution VR Display

Laser Focus

Last month, the U.S. Patent and Trademark Office granted Facebook a patent on virtual reality headset technology that can track where people are looking and sharpen that specific part of a VR simulation — just the way an eye might.

The new technology is called “retinal resolution,” according to Upload VR, and it involves a smaller VR display that presents whatever someone is looking at in high definition, while a larger background display shows the periphery in less detail. This suggests Facebook is investigating ways to give VR the same level of detail that the human eye can process.

Realize Real Eyes

The idea is to mirror the way an eyeball sees the world. Our eyes have their highest density of cones — the type of light-receptor cell that detects fine detail and color — packed right in the center of our field of vision.

Cone density drops off sharply toward the periphery of your vision, where the dim-light-sensing rods take over. That’s why things in the corner of your eye appear blurry — and, believe it or not, nearly colorless.

The idea behind Facebook’s new VR display is that the smaller, moving display would match the high-density region of your vision, rendering whatever part of a VR simulation you’re looking at in high definition, while the larger background display fills in the rest. But it’s unclear what purpose Facebook expects this technology to serve — whatever part of a VR experience you’re watching will always be in better focus than the periphery, because that’s how eyes already work all the time.
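If you want a more concrete picture of that two-layer idea, here is a minimal, illustrative Python sketch of foveated compositing in general, not anything taken from Facebook’s patent; the render_background, render_inset, and gaze_xy names are placeholders invented for the example.

    # Illustrative sketch only: composite a small high-detail inset (around the
    # gaze point) over a cheaply rendered, upscaled background frame.
    # render_background(h, w) and render_inset(x0, y0, size) are hypothetical
    # callbacks standing in for whatever the headset's renderer would provide;
    # both are assumed to return RGB NumPy arrays.
    import numpy as np

    def foveated_frame(render_background, render_inset, gaze_xy,
                       frame_shape=(1080, 1920), inset_size=256, bg_scale=4):
        h, w = frame_shape

        # Background layer: render at 1/bg_scale resolution, then upscale.
        bg_small = render_background(h // bg_scale, w // bg_scale)
        frame = bg_small.repeat(bg_scale, axis=0).repeat(bg_scale, axis=1)[:h, :w]

        # Inset layer: render only a small window centered on the gaze point
        # in full detail and paste it over the background.
        gx, gy = gaze_xy
        x0 = int(np.clip(gx - inset_size // 2, 0, w - inset_size))
        y0 = int(np.clip(gy - inset_size // 2, 0, h - inset_size))
        frame[y0:y0 + inset_size, x0:x0 + inset_size] = render_inset(x0, y0, inset_size)

        return frame

The blocky nearest-neighbor upscale is just to keep the sketch short; a real rendering pipeline would blend the two layers far more smoothly.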

99 Percent Perspiration

Also, it’s just a patent. As Upload VR reported, this tech may never actually come to fruition. Assuming Facebook’s patent holds up in court against similar technology, the company would still then need to decide that it’s worth the investment to put retinal-resolution tech into future Oculus VR headsets.

Facebook might hold on to the patent but decide it’s not worth actually building out the tech. After all, as long as they’re sitting on the legal rights to eye-tracking retinal tech, no one else can bring it to market.

READ MORE: Facebook Wins Patent For Human-Eye ‘Retinal’ Resolution VR Headset [Upload VR]


