Moons are everywhere in our solar system — we have one, Mars has two, Jupiter and Saturn have dozens each — but we’ve never known whether moons are as common, or even exist at all, beyond our solar system.
That may have just changed.
On Wednesday, Columbia University researchers announced that they’d found the first evidence of an “exomoon” — a moon orbiting a planet beyond Earth’s solar system.
The discovery of the exomoon began with a survey of 284 transiting planets — meaning they pass between a star and an instrument we use to observe space (in this case NASA’s Kepler space telescope).
A planet passing in front of a star causes a noticeable dip in the star’s brightness, which astronomers can analyze to deduce information about the planet, such as its size and composition. In the case of Kepler-1625b, a Jupiter-sized planet about 4,000 light-years from Earth, the Kepler survey data looked a little different from that of a typical exoplanet, suggesting the planet may have a moon.
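The geometry behind that dip is simple: the planet blocks a fraction of the star’s disk equal to (Rp/Rs)², so the depth of the dip gives the planet’s size relative to its star. Here’s a minimal sketch of the idea on a synthetic light curve (the numbers and function names are illustrative, not Kepler’s actual pipeline):

```python
import numpy as np

def transit_depth(flux, out_of_transit_mask):
    """Estimate the fractional dip in brightness during a transit."""
    baseline = np.median(flux[out_of_transit_mask])
    dip = np.median(flux[~out_of_transit_mask])
    return (baseline - dip) / baseline

def planet_radius_ratio(depth):
    """Transit depth ~ (Rp/Rs)^2, so Rp/Rs = sqrt(depth)."""
    return np.sqrt(depth)

# Synthetic light curve: a flat star with a 1% dip while the planet transits.
time = np.linspace(0, 10, 1000)
flux = np.ones_like(time)
in_transit = (time > 4) & (time < 6)
flux[in_transit] *= 0.99

depth = transit_depth(flux, ~in_transit)
print(round(float(depth), 4))                       # → 0.01
print(round(float(planet_radius_ratio(depth)), 3))  # → 0.1
```

A 1 percent dip thus implies a planet about one-tenth the star’s radius, roughly Jupiter around a Sun-like star. A moon adds a second, much shallower dip, which is the extra signal the Columbia team went looking for.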
The Columbia team then used the more powerful Hubble telescope to study Kepler-1625b, looking for any additional dimming that would point to an exomoon, and for any sign that a moon’s gravity was tugging on the planet. They found both.
Despite that evidence, the researchers caution that their work is preliminary and will need to be confirmed by future studies.
“We are trying to be cautious with our claims at this point… we want to see a little more before we come out and say, ‘Yes, this thing is definitely there,’” researcher Alex Teachey told reporters during a press briefing. “In that sense, we are not cracking open champagne bottles just yet on this one.”
As of this week, though, it’s safe to say that many fresh pairs of eyes will set their sights on Kepler-1625b.
READ MORE: Thanks to Help From Hubble, the First Confirmed Exomoon? [EurekAlert]
More on exomoons: Do Exomoons Exist?
At 2:18 PM Eastern Time on Wednesday, every cell phone in America will buzz, beep, or maybe play a jaunty tune. Everyone will receive a special, handy-dandy emergency broadcast sent by the White House.
The test of the National Wireless Emergency Alert System, which was supposed to happen in September but was pushed back, will join the ranks of Amber Alerts for missing children and those flash flood warnings that you only seem to get on sunny days — yet another thing that makes your pocket buzz but typically doesn’t affect your day-to-day life all that much.
Granted, these alerts are actually from FEMA and are intended to warn us about natural disasters, cyberattacks, and acts of war (they are at the directive of the president but they don’t actually contain a message from President Trump). But the audacity of the federal government to contact us in a way that is so personal and immediate — let alone all of us at once — has left many wondering how to opt out.
The consensus among media reports, such as in WIRED’s oral history of the alert, is that we’re SOL.
But we’re not giving up. Here are the top three strategies we’ve come up with to stop your phone from buzzing on cue for today’s emergency alert.
ASK YOUR PHONE NICELY
If you go to your phone’s settings menu, you’ll have a number of ways to limit the notifications that your phone sends your way. On an Android phone, you’ll find an Emergency Alerts app that you can’t disable or uninstall, but you can hit the “force stop” button. Note that the app will start back up after a few minutes, so your best bet will be to strategically time the force stop before 2:18.
On an iPhone, you can turn off “government alerts” in your notifications settings, but that may be just as useless as hitting the “close doors” button on an elevator.
More extreme measures might be in order.
TURN YOUR PHONE OFF FOREVER
You can’t get the message if you’ve got nothing to receive it on, right? We’re betting that turning your phone back on at 2:19 will just delay rather than cancel the alert, so you really need to commit to this one. And if you want to avoid accidentally turning your phone back on in your pocket, we have one more idea.
Use a hammer.
The star system SS 433 is something of a celebrity in the world of astronomy.
It’s the first known example of a microquasar — a black hole that feeds off a nearby companion star and emits two powerful jets of material. Plus, at just 15,000 light-years away, it’s relatively close to us.
And now, an international team of researchers has discovered something new about SS 433: it emits a type of electromagnetic radiation known as high-energy gamma rays. This new insight could help astronomers understand what’s going on at the centers of galaxies, where huge quasars sometimes feed on many stars at once.
The team discovered the rays using the High-Altitude Water Cherenkov Gamma-Ray Observatory (HAWC) in Mexico.
HAWC features more than 300 water tanks, each about 24 feet across. When gamma rays reach Earth’s atmosphere from elsewhere in the universe, they cause showers of particles that hit the water in these tanks, causing shockwaves of light. Special cameras detect these, and from their recordings, researchers can pinpoint the source of the gamma rays.
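HAWC’s real reconstruction is far more involved, but the core geometric idea is that a shower front sweeping across the array reaches different tanks at different times, and those time offsets encode the arrival direction. A least-squares plane fit over a made-up tank layout and noiseless timing model illustrates the principle (this is a sketch, not HAWC’s actual algorithm):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def fit_shower_direction(xy, t):
    """Fit a plane shower front, t_i = t0 + (nx*x_i + ny*y_i)/c, by least squares."""
    A = np.column_stack([np.ones(len(t)), xy[:, 0] / C, xy[:, 1] / C])
    (t0, nx, ny), *_ = np.linalg.lstsq(A, t, rcond=None)
    nz = -np.sqrt(max(0.0, 1.0 - nx**2 - ny**2))  # front moves downward
    return np.array([nx, ny, nz])

# Synthetic event: 300 tanks scattered over a 150 m patch,
# shower arriving 20 degrees from zenith.
rng = np.random.default_rng(0)
xy = rng.uniform(-75, 75, size=(300, 2))
true_n = np.array([np.sin(np.radians(20)), 0.0, -np.cos(np.radians(20))])
t = (xy @ true_n[:2]) / C  # plane-front arrival times, t0 = 0

n = fit_shower_direction(xy, t)
print(np.allclose(n, true_n))  # → True
```

With noiseless timestamps the fit recovers the injected direction exactly; the real observatory has to contend with timing jitter, shower-front curvature, and backgrounds, which is why it needs hundreds of tanks and years of data.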
For their study, published Wednesday in Nature, the team examined 1,017 days’ worth of HAWC data and determined that SS 433 was a source of gamma rays. Perhaps even more remarkably, they figured out that the rays were coming from the ends of the microquasar’s jets — a source of gamma rays they’d never seen before.
“This new observation of high-energy gamma rays builds on almost 40 years of measurements of one of the weirdest objects in the Milky Way,” said study co-author Segev BenZvi in an emailed press release. “Every measurement gives us a different piece of the puzzle, and we hope to use our knowledge to learn about the quasar family as a whole.”
Of the roughly one dozen microquasars in the Milky Way, only two appear to emit high-energy gamma rays. The fact that the closest microquasar to Earth also emits these rays — and in a previously unknown way — could afford researchers a better way to study them, all while helping us get to know our favorite microquasar a little bit better.
At one point, it seemed like Tesla Model 3s were more in demand than tickets to a hypothetical Coachella co-headlined by the newly reunited Jay-Z and Kanye.
An estimated 500,000 people pre-ordered one of the $35,000 vehicles. In an effort to make all those cars, Tesla had to resort to moving assembly into a tent. A used Model 3 was even (briefly) listed for sale online, marked up to $150,000.
So the Model 3 is in demand — right? Then why are hundreds of Teslas now sitting dormant in parking lots across the nation?
EYES IN THE SKIES
The New York Times reports that a group calling itself the Shorty Air Force — that’s “shorty” as in “short selling” — has shared photos of large numbers of seemingly unsold Model 3s and other Teslas, parked together in lots and garages across the U.S. According to the Times, at least some of the members of the group think the stashed cars are evidence that Tesla is hiding poor sales.
Some of the photos, which the anonymous members of the Shorty Air Force take using drones and airplanes, feature hundreds of the vehicles, while others feature just a few dozen.
SUPPLY AND DEMAND
The company could be running low on delivery trailers, as Musk told one Twitter user. Maybe the photographed cars are in need of repairs before Tesla can deliver them to drivers — at least one photo showed a Tesla with a needed repair spelled out on its windshield. Or, you know, maybe the demand for Teslas simply isn’t as high as it seemed.
We may want to know the answer to the mystery of the extra Teslas even more than we want to know what changed Musk’s mind about settling that SEC lawsuit. And that’s something we really want to know.
READ MORE: Unraveling a Tesla Mystery: Lots (and Lots) of Parked Cars [The New York Times]
More on Tesla: A Used Tesla Model 3 Was Briefly Listed for $150,000
The Food and Drug Administration (FDA) just raided the headquarters of popular e-cigarette maker JUUL Labs in San Francisco.
Agents seized thousands of pages of documents about the vape maker’s sales and marketing strategies, according to the Wall Street Journal. The raid comes on the heels of a huge effort by the agency to stop e-cigarette makers from marketing to minors.
Juul holds considerable power in the e-cigarette market. According to a recent Wells Fargo analysis as reported by CNBC, Juul’s sales skyrocketed 783 percent in just one year; experts estimate the company controls 68 percent of the e-cigarette market, according to data compiled by Bloomberg.
The FDA got interested because adolescents are such huge fans. The National Institute on Drug Abuse found that 7 in 10 teens are exposed to e-cig ads. Even the New York Times and the Wall Street Journal have reported that vaping’s popularity has taken off among teens.
The jury is still out on whether using e-cigarettes is bad for your health, especially for teens who weren’t smokers before. Some research has found that vaping leaves toxic chemicals in the lungs, but there’s a lot we don’t know yet.
The unfortunate reality: teens do what they want, but they often don’t know what’s good or bad for them. It’s time for us to do a better job of parenting them, parents and governing bodies alike.
READ MORE: FDA Raids Vape Maker Juul, Seizes ‘Thousands’ of Documents [Gizmodo]
More on e-cigarettes: The FDA Just Threatened to Crack Down on E-Cig Companies Like Juul
The U.S. military’s research agency wants help navigating the subterranean world. And since it doesn’t look like a ragtag team of sewer-dwelling mutant turtles will be enlisting in the armed forces any time soon, the agency is turning to the next best option: engineers.
Last week, the Defense Advanced Research Projects Agency (DARPA) announced six teams of roboticists that will compete in its Subterranean Challenge, a multi-year competition with $2.75 million up for grabs. Each team will receive funding to create robots that can map, navigate, and search complex underground environments, such as man-made tunnels, natural cave systems, and subterranean urban infrastructure.
The teams can choose to compete in either of two tracks — systems or virtual — or both.
For the systems track, a team will create robots it can demonstrate on a physical course. Some of the spaces on this course may be barely large enough for a human to navigate, while others could be big enough for an all-terrain vehicle.
For the virtual track, a team will need to create the software for subterranean robots and demonstrate it in a simulated environment boasting a wider range of scenarios than in the physical tests.
In any case, the teams must be ready to compete in the first of DARPA’s tests in the fall of 2019. After a final competition in the fall of 2021, DARPA will award the winner of the systems track $2 million and the winner of the virtual track $750,000. The competition’s judges will place a premium on autonomy, since communicating with robots while they’re deep underground can be difficult.
Perhaps more important than earning the big payout, though, these teams of roboticists could help the military save lives if the U.S. ever finds itself in a situation like the daring rescue of children from a Thai cave earlier this year. We’d like to see mutant turtles manage that.
READ MORE: Modular Robots Being Developed to Navigate Tunnels and Caves [The Engineer]
More on cave rescues: Elon Musk Is Sending a Team of Engineers to Help Rescue Trapped Thai Boys
Ecovative, the startup that makes biodegradable packaging for furniture seller IKEA, says that the same mushroom roots it uses to pack up tables and chairs could be used to create the next generation of delicious lab-grown meats.
“This is the next natural step in this evolution to use natural products to make things,” said co-founder Eben Bayer, in an interview with Business Insider.
The problem Ecovative wants to solve, Bayer told Business Insider, is that while many lab-grown meat startups have succeeded in growing individual cells from livestock into sausages and burgers, they’ve struggled to recreate the complex anatomy of a chicken breast or a fatty steak.
That’s where Bayer thinks his company’s mycelium, or mushroom roots, could help. Using a formula similar to the mixture of mycelium and discarded farm materials it’s turned into green packaging for IKEA and Dell, he said that Ecovative has created a “scaffold” that lets meat cells grow into ropes of muscle and layers of succulent fat.
The carbon emissions of farm-grown meat are colossal. A tasty lab-grown meat with a low carbon footprint could be a game changer — not just for your dinner plate, but for the future of the planet.
Bayer didn’t say whether his company has locked down any industry partners, like Memphis Meats or New Age Meats. Unless Ecovative plans to market its own fake meat, that’ll be a key step to getting its food into hungry mouths.
READ MORE: A startup that turns mushrooms into IKEA packaging wants to become the backbone of the lab-grown meat industry [Business Insider]
More on lab-grown meat: Companies Are Betting on Lab-Grown Meat, but None Know How to Get You to Eat It
Getting humans to Mars could cost upwards of $1 trillion. But for the astronauts making the journey, the cost could be even higher — they might pay with their lives.
According to a new NASA-funded study conducted by researchers at Georgetown University Medical Center, exposure to galactic cosmic radiation during a trip to Mars could leave astronauts with permanent and potentially deadly damage to their gastrointestinal tissue.
On Earth and in its orbit, people and animals are protected from certain types of cosmic radiation by the planet’s magnetic field. For their study, published Monday in the Proceedings of the National Academy of Sciences, the researchers attempted to simulate the conditions found in deep space by blasting 10 male mice with doses of heavy ion radiation — the equivalent, they said, of what a human astronaut would be exposed to on a deep space journey that lasted several months.
The researchers then euthanized the mice and studied samples of their intestinal tissue. They found that the guts of irradiated mice hadn’t been absorbing nutrients properly, and had formed cancerous polyps. Even worse, the damage appears to be permanent — mice killed and dissected after a year still hadn’t recovered.
A GIANT LEAP
The findings are alarming because we don’t yet have a way to protect astronauts from cosmic radiation.
“With the current shielding technology, it is difficult to protect astronauts from the adverse effects of heavy ion radiation,” said researcher Kamal Datta in a press release. “Although there may be a way to use medicines to counter these effects, no such agent has been developed yet.”
Of course, mice aren’t the same as humans, so the actual effect of radiation on astronauts is still largely unknown. However, if we ever hope to send humans to Mars and beyond, nailing down the effects of deep space on astronaut health will need to be a top priority.
READ MORE: Animal Study Suggests Deep Space Travel May Significantly Damage GI Function in Astronauts [Georgetown University]
More on astronaut health: Traveling to Mars Will Blast Astronauts With Deadly Cosmic Radiation, New Data Shows
DON’T PROGRAM WHEN YOU’RE HUNGRY
Artificial intelligence and human intelligence aren’t the same, to be sure. But it seems we have one big thing in common: we spend our time fantasizing about dinner.
In a research paper published Friday on the preprint server arXiv, a team of AI researchers from Google DeepMind teamed up with a scientist from Heriot-Watt University to develop what they’re calling the largest, most advanced Generative Adversarial Network (GAN) ever. And to prove that it works better than any other, they had it create photorealistic images of landscapes, (very) good dogs and other animals, and of course some hot, juicy burgers.
WHAT’S THE BEEF
Generative Adversarial Networks are one of the more sophisticated types of AI algorithms out there. In short, one network, the generator, creates something — in this case an image — as realistically as possible. Meanwhile, another network, the discriminator, checks its work against examples of the real deal. This back-and-forth makes both networks gradually improve, to the point where the discriminator is very good at detecting AI-generated images, yet the generator can still fool it.
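That adversarial loop can be sketched end to end with a deliberately tiny example. This is nothing like DeepMind’s giant image model: here the generator is a one-line linear model that learns to mimic a one-dimensional Gaussian “dataset,” and the discriminator is a logistic classifier. All of the parameters and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Generator: x_fake = w*z + b.  Discriminator: D(x) = sigmoid(a*x + c).
w, b = 1.0, 0.0          # generator starts out producing N(0, 1)
a, c = 1.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    grad_a = -np.mean((1 - d_real) * real - d_fake * fake)
    grad_c = -np.mean((1 - d_real) - d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(a * fake + c)
    grad_w = -np.mean((1 - d_fake) * a * z)
    grad_b = -np.mean((1 - d_fake) * a)
    w -= lr * grad_w
    b -= lr * grad_b

samples = w * rng.normal(0.0, 1.0, 10_000) + b
print(np.mean(samples))  # should drift close to the real mean of 4
```

The same dynamic, scaled up to deep convolutional networks and millions of images, is what lets a GAN go from beefy blobs to near-photorealistic burgers.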
GANs are generally used to create media, whether it’s a new level of a video game or a 3D model. And though their ability to fool us presents a bit of a double-edged sword, their ability to discriminate algorithmic from human output can be used to find and fight misleading deepfakes.
This process is what allowed DeepMind’s burger-cooking GAN to go from creating, as Quartz reported, a weird, beefy blob in 2016 to what actually looks like an appetizing (albeit overcooked) slab of burg today.
Some argue that we need to move away from red meat, but AI isn’t ready to join us yet. While the burger-generating algorithm is in great shape, others aren’t quite there. For instance, the twists and turns of brass tubing that make up a French horn baffled the network. The butterfly image is just a little off, and any attempt to render a photo of a human results in a horrifying blob monster.
It’s an ugly reality we see in every corner of the web: racism, bigotry, misogyny, political extremism. Hate speech seems to thrive on the internet like a cancer.
It persists and flourishes on social media platforms like Facebook, Twitter, and Reddit — they certainly don’t claim to welcome it, but they’re having a hell of a time keeping it in check. No AI is yet sophisticated enough to flag all hate speech perfectly, so human moderators have to join the robots in the trenches. It’s an imperfect, time-consuming process.
As social media sites come under increasing scrutiny to root out their hate speech problem, they also come up against limits for how much they can (or will) do. So whose responsibility is it, anyway, to moderate hate speech? Is it up to online platforms themselves, or should the government intervene?
The British government seems to think the answer is both. The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) — the department responsible for regulating broadcasting and the internet — are drafting plans for regulation that would make platforms like Facebook and Twitter legally responsible for all the content they host, according to Buzzfeed News.
In a statement to Futurism, the DCMS says that it has “primarily encouraged internet companies to take action on a voluntary basis.” But progress has been too slow — and that’s why it plans to intervene with “statutory intervention.”
But is this kind of government intervention really the right way forward when it comes to hate speech online? Experts aren’t convinced it is. In fact, some think it may even do more harm than good.
Details about the DCMS’ plan are scant — it’s still early in development. What we do know so far is that the legislation, Buzzfeed reports, would have two parts. One: it would introduce “take down times” — timeframes within which online platforms have to take down hate speech, or face fines. Two: it would standardize age verification for Facebook, Twitter, and Instagram users. A white paper detailing these plans will allegedly be published later this year.
Why should the government intervene at all? Internet platforms are already trying to limit hate speech on their own. Facebook removed more than 2.5 million pieces of hate speech and “violent content” in the first quarter of 2018 alone, according to a Facebook blog post published back in May.
Indeed, these platforms have been dealing with hate speech for as long as they’ve existed. “There’s nothing new about hate speech on online platforms,” says Brett Frischmann, a professor in Law, Business and Economics at Villanova University. The worry is that the British government is rushing to legislate against hate speech too quickly to come up with anything that will work the way it’s supposed to.
Unfortunately, hate speech is a game of whack-a-mole that moves far faster than platforms can keep up with. As a result, a lot of it goes unmoderated. For instance, hate speech from far-right extremist groups in the U.K. often still falls through the cracks, fueling xenophobic beliefs. In extreme cases, that kind of hate speech can lead to physical violence and the radicalization of impressionable minds on the internet.
Jim Killock, executive director of the Open Rights Group in the U.K. — a non-profit committed to preserving and promoting citizens’ rights on the internet — thinks the legislation, were it to pass tomorrow, wouldn’t just be ineffective. It might even prove counterproductive.
The rampant hate speech online, Killock believes, is symptomatic of a much larger problem. “In some ways, Facebook is a mirror of our society,” he says. “This tidal wave of unpleasantness, like racism and many other things, has come on the back of [feeling] disquiet about powerlessness in society, people wanting someone to blame.”
Unfortunately, that kind of disillusionment with society won’t change overnight. And a policy that addresses only the symptoms of systemic injustice, rather than the underlying causes, risks backfiring: censoring people who already feel silenced only reinforces their grievances, even when those being censored are actively spreading hate speech themselves.
Plus, a law like the one DCMS has proposed would effectively make certain kinds of speech illegal, even if that’s not what the law says. Killock argues that while a lot of online material may be “unpleasant,” it often doesn’t violate any laws. And it shouldn’t be up to companies to decide where the line between the two lies, he adds. “If people are breaking the law, it frankly is the job of courts to set those boundaries.”
But there’s good reason to avoid redrawing the legal boundaries around online behavior that is unpleasant but technically legal: doing so could force the government to revisit far broader common law concerning freedom of speech. That is probably not going to happen.
The UK government’s plans are still in the development stage, but there are already plenty of reasons to be skeptical that the law would do what the government intends. Muddying the boundaries between illegal and legal behavior online sets a dangerous precedent, and that could have some undesirable consequences — like wrongfully flagging satirical content as hate speech, for instance.
The DCMS is setting itself up for failure: censoring content online will only embolden its critics while failing to address the root issues. It has to find a middle ground if it wants a real shot: too much censorship, and the mistrust of those who feel marginalized will keep building. Too little regulation, and internet platforms will continue to host speech that makes many users feel unwelcome, or that even incites violence.
The U.K. government has a few tactics it could try before it decides to regulate speech online. The government could incentivize companies to strengthen the appeal process for taking down harmful content. “If you make it really hard for people to use appeals, they may not use them at all,” Killock argues. For instance, the government could introduce legislation that would ensure each user has a standardized way of reporting problematic content online.
But it will take a much bigger shift before we are able to get rid of hate speech in a meaningful way. “Blaming Facebook or the horrendous people and opinions that exist in society is perhaps a little unfair,” Killock says. “If people really want to do and say these [hurtful] things, they will do it. And if you want them to stop, you have to persuade them that it’s a bad idea.”
What do those policies look like? Killock doesn’t have the answer yet. “The question we have really is, how do we make society feel better about itself?” says Killock. “And I’m not pretending that that’s a small thing at all.”
More on regulating speech online: Social Media Giants Need Regulation From a Government That’s Unsure How To Help
Imagine, for a second, that human life has been snuffed out like the flame of a candle. Wouldn’t it be nice if we’d archived the sum of our knowledge for whoever might come along next?
That’s the idea behind the Arch Mission Foundation, a nonprofit exploring ways to store vast amounts of information in formats that will last for “thousands to millions” of years. After all, spreading caches of information around the solar system is the ultimate backup.
Its latest project: encoding important books and crowdsourced images into synthetic DNA molecules, and storing them on the Moon.
To tackle the DNA project, Arch Mission is collaborating with Microsoft, the University of Washington, and the Twist Bioscience Corporation. The collaborators chose DNA, they wrote in a press release, because it can store information in an ultra-compact form.
“Using DNA as a building block you can write and store information in an extremely small volume,” said Arch Mission co-founder Nova Spivack in an interview with Scientific American. “A tiny liquid droplet could contain Amazon’s entire data center. You can then replicate it inexpensively to create literally billions of copies.”
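Twist’s actual chemistry and error-correction schemes are far more involved, but the basic density argument is easy to see: DNA has four bases, so each base can carry two bits, and a single byte fits in just four bases. A toy codec (purely illustrative, not the project’s real encoding):

```python
BASES = "ACGT"  # 2 bits per base: A=00, C=01, G=10, T=11

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases, most significant bits first."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Invert encode(): pack each run of four bases back into a byte."""
    lookup = {base: i for i, base in enumerate(BASES)}
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | lookup[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"Moon")
print(strand)          # → CATCCGTTCGTTCGTG
print(decode(strand))  # → b'Moon'
```

At roughly a cubic nanometer per base pair, that two-bits-per-base density is what makes claims like “a data center in a droplet” plausible; the hard parts are synthesis cost, read-out speed, and surviving errors over millennia.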
Once they’ve stored the data in the DNA — which will include 20 books selected by Project Gutenberg and 10,000 crowdsourced images of everyday life — the plan is to send it to the Moon on an Atlas V rocket in 2020, according to Scientific American.
The Moon project is just an early step in Arch Mission’s very ambitious plans, which include building a vast repository of human knowledge and placing copies on planets, asteroids, comets, and moons around the solar system.
It’s not clear, Spivack acknowledged in the Scientific American interview, how likely it is that future life forms would ever actually stumble across one of those archives. But the project, he argued, is also a “grand gesture that brings together our hopes and dreams about becoming a spacefaring civilization.”
READ MORE: Lunar library to include photos, books stored in DNA [University of Washington]