Science & Technology

Here’s What It’s Like to Fly Into Hurricanes to Get Forecast Data For Science

Before he heads to work, Jon Zawislak sometimes pops a ginger pill in his mouth to settle his stomach. He also prefers to stick to bland foods like pretzels and crackers before he gets to the office, because he wouldn’t want to hurl all over his desk.

Zawislak is a Hurricane Hunter.

He spends eight-hour days soaring 10,000 feet (3,000 metres) in the air, collecting data on the wind, temperature, pressure, humidity, and rain inside big storms, where hurricane-force winds top 74 miles per hour (119 kilometres per hour).

While others on the ground are figuring out the best ways to avoid the eyes of these dangerous storms, he flies right into them.

“Aircraft are still the single best platform that we have to measure the state of a storm,” Zawislak told Business Insider.

“When it comes to the windfield, or the central pressure of the storm, that kind of data can only really come from an aircraft, and the instruments on the aeroplane.”

In the past week, he’s travelled through both Tropical Storm Isaac and Hurricane Florence, collecting vital data that the National Hurricane Center uses to upgrade a storm’s category, or better track where it’s headed next.

What an eight-hour workday in the air is like

Hurricane hunting flights have been around for 75 years, ever since British fighter pilots essentially dared a US colonel to fly directly into a storm during WWII.

Today, Zawislak says there are two critical devices on the Lockheed WP-3D Orion he flies in for the National Oceanic and Atmospheric Administration (NOAA) that help inform National Hurricane Center forecasts.

First, there’s the plane’s on-board radar that measures wind and rain, and then there’s a little device that’s essentially a paper towel roll with a parachute on its back, called a dropsonde.

The dropsonde is a disposable instrument outfitted with a GPS receiver, as well as pressure, temperature, and humidity sensors. The throwaway package is dropped through a chute in the aircraft and whisked away from the plane.

Over the course of a typical eight-hour flight, a dropsonde operator might plop 20 of them down into a storm, everywhere from the eye to the very outer rim, to examine how the wind field changes at different locations and heights in the storm.

“It really allows us to profile the atmosphere, which is one of the most important things,” Zawislak said. “So we can see how the wind speed changes with height.”

All this information can dramatically shift how forecasters characterise a storm.

Take Zawislak’s Monday flight into Hurricane Florence, for instance. “It went from what looked like a category 2 hurricane, all the way to a category 4 hurricane, just because we had the aircraft,” he said.

Getting a job as a flying scientist

Zawislak, who holds a PhD in atmospheric science, has been working on both planes and unmanned drones that fly through hurricanes for roughly a decade.

As a Hurricane Field Program Director for NOAA, he is essentially in charge of a plane-sized research lab in the sky. He decides where the flight will go to collect the best data, and makes sure the instruments on board gather all the information needed to answer key research questions in flight.

One of the biggest unanswered questions Zawislak still has about hurricanes is how they get so fierce, so fast. It’s still not well understood how storms organise and gather strength, developing from uneven messes of rain and light wind to powerful, swirling hurricanes that can rip through homes and pummel the shore with water.

It’s an important research question for Zawislak, because if he can better understand why and how storms intensify, forecasts will improve.

Zawislak says he’s “not crazy,” he just wants to learn more about big storms

Zawislak tries to steer clear of greasy foods before he boards the plane, but he says that flying into a storm isn’t always a bumpy ride. In fact, inside the storm it can feel just like a commercial flight, with the seatbelt sign off and all.

The pilots Zawislak flies with (there are three of them in the cockpit) typically try to keep the plane level, for the sake of the instruments, and maintain a height of about 10,000 feet (3,000 metres).

“We have the best pilots, the best engineers, the best mechanics, this is the best-maintained aeroplane you can find,” he said.

Still, the turbulence inside the plane can be unnerving at times, even with a harness on.

“You have flights where you’re in moderate to severe turbulence for two to three hours,” Zawislak said.

Inside the eye of a big storm like Florence, things clear up. At its very inner core, a hurricane is a place of peace, surrounded by violent chaos. Hurricane hunters say it looks like a big stadium, clear and serene.

“It’s much bigger than any stadium you’ve been in,” Zawislak said. When he flew through the eye of Florence as a category 4 storm, the centre was more than 15 miles (24 kilometres) wide and took four minutes to fly through.

Despite the fact that Zawislak has to muscle his stomach through several long, bumpy rides every hurricane season, he still wants you to know that he’s not out of his mind for taking this job.

“We’re not crazy,” he said, before boarding another flight into Tropical Storm Isaac.

“We are playing a humongous role in getting the information to the National Hurricane Center, so that they can tell the public how strong the storm is.”

This article was originally published by Business Insider.


Science & Technology

For News, Americans Now Officially Prefer Social Media to Newspapers

Going Digital

For the first time, more Americans report getting their news from social media than from a traditional print newspaper.

About 20 percent of Americans now say they “often” get their news from social media, while about 16 percent often read a print newspaper, according to a Pew Research Center survey conducted over the first two weeks of August.

Break down the survey’s 3,425 responses by age and it becomes clear that this shift is likely to continue — social media is the most popular source of news for people under 30, with 36 percent saying they use it often, compared with only two percent who often read a physical newspaper.

Race for the Bottom

In spite of this trend, neither newspapers nor social media are a particularly popular source of news for Americans. Television still dominates, with 49 percent of Americans often getting their news there, and news websites account for another 33 percent.

It’s not like everyone who used to read the paper every morning suddenly decided to sign into Twitter instead — Pew’s data suggest that newspaper readership has been declining steadily while social media use more or less flatlined in 2017.

But TV’s reign may be short-lived: television news viewership correlates directly with age — 81 percent of people over 65 regularly watch TV news, as do 65 percent of people between 50 and 64. Meanwhile, just 16 percent of Americans under 30 do. If today’s young people keep their preference for digital platforms over more traditional news sources, TV news could become far less prominent as older generations die off.

Look, it’s Fine

There’s no way around it — social media often serves as a breeding ground for misinformation. But before you complain about those damn “kids these days,” rest assured that they’ll be just fine.

According to the report, younger generations get their news from a far more diverse array of sources than older generations do. They’re not just signing into Twitter or Facebook and turning a blind eye to everything else — social media is far more prevalent among young people than among older people, but other sources like news websites and radio still inform a large percentage of people under 30.


Science & Technology

The Five Most Worrying Trends in Artificial Intelligence Right Now

Artificial intelligence is already beginning to spiral out of our control, a new report from top researchers warns. Not so much in a Skynet kind of sense, but more in a ‘technology companies and governments are already using AI in ways that amp up surveillance and further marginalize vulnerable populations’ kind of way.

On Thursday, the AI Now Institute, which is affiliated with New York University and is home to top AI researchers with Google and Microsoft, released a report detailing, essentially, the state of AI in 2018, and the raft of disconcerting trends unfolding in the field. What we broadly define as AI—machine learning, automated systems, etc.—is currently being developed faster than our regulatory system is prepared to handle, the report says. And it threatens to consolidate power in the tech companies and oppressive governments that deploy AI while rendering just about everyone else more vulnerable to its biases, capacities for surveillance, and myriad dysfunctions.

The report contains 10 recommendations for policymakers, all of which seem sound, as well as a diagnosis of the most potentially destructive trends. “Governments need to regulate AI,” the first recommendation exhorts, “by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.” A single massive Department of AI that attempts to regulate the field writ large won’t cut it, researchers warn — the report suggests regulators follow the example set by the Federal Aviation Administration and tackle AI as it manifests, field by field.

But it also conveys a succinct assessment of the key problem areas in AI as they stand in 2018. As detailed by AI Now, they are:

  1. The accountability gap between those who build the AI systems (and profit off of them) and those who stand to be impacted by the systems (you and me) is growing. Don’t like the idea of being subjected to artificially intelligent systems that harvest your personal data or determine various outcomes for you? Too bad! The report finds that the recourse most citizens have to contest the artificially intelligent systems that may impact them is shrinking, not growing.
  2. AI is being used to amplify surveillance, often in horrifying ways. If you think the surveillance capacities of facial recognition technology are disturbing, wait till you see its even less scrupulous cousin, affect recognition. The Intercept’s Sam Biddle has a good write-up of the report’s treatment of affect recognition, which is basically modernized phrenology, practiced in real time.
  3. The government is embracing autonomous decision software in the name of cost-savings, but these systems are often a disaster for the disadvantaged. From systems that purport to streamline benefits application processes online to those that claim to be able to determine who’s eligible for housing, so-called ADS systems are capable of uploading bias and erroneously rejecting applicants on baseless grounds. As Virginia Eubanks details in her book Automating Inequality, the people these systems fail are those who are least able to muster the time and resources necessary to address them.
  4. AI testing “in the wild” is rampant already. “Silicon Valley is known for its ‘move fast and break things’ mentality,” the report notes, and that is leading to companies testing AI systems in the public sector—or releasing them into the consumer space outright—without substantial oversight. The recent track record of Facebook—the original move fast, break thingser and AI evangelist—alone is example enough of why this strategy can prove disastrous.
  5. Technological fixes to biased or problematic AI systems are proving inadequate. Google made waves when it announced it was tackling the ethics of machine learning, but efforts like these are already proving too narrow and technically oriented. Engineers tend to think they can fix engineering problems with, well, more engineering. But what is really required, the report argues, is a much deeper understanding of the history and social contexts of the datasets AI systems are trained on.

The full report is well worth reading, both for a tour of the myriad ways AI entered the public sphere—and collided with the public interest—in 2018, and for a detailed recipe for how our institutions might stay on top of this ever-complicating situation.


Science & Technology

Facebook’s Oculus Just Patented a Retina-Resolution VR Display

Laser Focus

Last month, the U.S. Patent Office granted Facebook a patent for virtual reality headset technology that can track people’s vision to focus on specific parts of a VR simulation — just as an eye might.

The new technology is called “retinal resolution,” according to Upload VR, and it involves a smaller VR display that presents whatever someone is looking at in high definition, while a larger background display shows the periphery in less detail. This suggests Facebook is investigating ways to give VR the same level of detail that the human eye can process.

Realize Real Eyes

The idea is to mirror the way an eyeball sees the world. Our eyes have the highest density of cones — the type of light receptor cell that detects fine detail and color — packed right in the center of our field of vision.

But cones become increasingly sparse as you travel out to the periphery of your vision, where rods, receptors tuned for dim light rather than color, take over. That’s why things in the corner of your eye appear blurry — and, believe it or not, nearly colorless.

The concept behind Facebook’s new display is that the smaller, moving display would match the high-density region of your vision, rendering whatever part of a VR simulation you’re looking at in high definition, while the larger background display fills in the rest. But it’s unclear what purpose Facebook expects this technology to serve — whatever part of a VR experience you’re looking at will always appear in better focus than the periphery, because that’s how eyes already work.

99 Percent Perspiration

Also, it’s just a patent. As Upload VR reported, this tech may never actually come to fruition. Assuming Facebook’s patent holds up in court against similar technology, the company would still need to then decide that it’s worth the investment to put retinal resolution tech into future Oculus VR headsets.

Facebook might hold on to the patent but decide it’s not worth actually building the tech. After all, as long as the company is sitting on the legal rights to eye-tracking retinal tech, no one else can bring it to market.

READ MORE: Facebook Wins Patent For Human-Eye ‘Retinal’ Resolution VR Headset [Upload VR]
