Tesla Seems to Be Producing Plenty of Cars. They Just Aren’t Ending up on the Road.


At one point, it seemed like Tesla Model 3s were more in demand than tickets to a hypothetical Coachella co-headlined by the newly reunited Jay-Z and Kanye.

An estimated 500,000 people pre-ordered one of the $35,000 vehicles. In an effort to make all those cars, Tesla had to resort to moving assembly into a tent. A used Model 3 was even (briefly) listed for sale online, marked up to $150,000.

The Model 3 is clearly in demand, then. So why are hundreds of Teslas now sitting dormant in parking lots across the nation?


The New York Times reports that a group calling itself the Shorty Air Force — that’s “shorty” as in “short selling” — has shared photos of large numbers of seemingly unsold Model 3s and other Teslas, parked together in lots and garages across the U.S. According to the Times, at least some of the members of the group think the stashed cars are evidence that Tesla is hiding poor sales.

Some of the photos, which the anonymous members of the Shorty Air Force take using drones and airplanes, feature hundreds of the vehicles, while others feature just a few dozen.


Poor sales aren’t the only possible explanation for why these cars are simply chilling in lots while drivers who pre-ordered Model 3s beg Elon Musk on Twitter to deliver their vehicles.

The company could be running low on delivery trailers, as Musk told one Twitter user. Maybe the photographed cars are in need of repairs before Tesla can deliver them to drivers — at least one photo showed a Tesla with a needed repair spelled out on its windshield. Or, you know, maybe the demand for Teslas simply isn’t as high as it seemed.

We may want to know the answer to the mystery of the extra Teslas even more than we want to know what changed Musk’s mind about settling that SEC lawsuit. And that’s something we really want to know.

READ MORE: Unraveling a Tesla Mystery: Lots (and Lots) of Parked Cars [The New York Times]

More on Tesla: A Used Tesla Model 3 Was Briefly Listed for $150,000

The FDA Just Raided the Headquarters of E-Cigarette Maker JUUL


The Food and Drug Administration (FDA) just raided the headquarters of popular e-cigarette maker JUUL Labs in San Francisco.

It seized thousands of pages of documents about the vape maker’s sales and marketing strategies, according to the Wall Street Journal. The raid comes on the heels of a huge effort by the agency to stop e-cigarette makers from marketing to minors.


The FDA has been on Juul’s case since April 2018, telling it off for targeting users under the age of 21. Earlier this month, the agency gave Juul 60 days to get its act together.

Juul holds considerable power in the e-cigarette market. According to a recent Wells Fargo analysis as reported by CNBC, Juul’s sales skyrocketed 783 percent in just one year; experts estimate the company controls 68 percent of the e-cigarette market, according to data compiled by Bloomberg.

The FDA got interested because adolescents are such huge fans. The National Institute on Drug Abuse found that 7 in 10 teens are exposed to e-cig ads. Even the New York Times and the Wall Street Journal have reported that vaping’s popularity has taken off among teens.


The jury is still out on whether using e-cigarettes is bad for your health, especially for teens who weren’t smokers before. Studies have found that vaping leaves toxic chemicals in the lungs. But there’s a lot we don’t know yet.

The unfortunate reality: teens do what they want, but they often don’t know what’s good or bad for them. It’s time for parents and governing bodies alike to do a better job of looking out for them.

READ MORE: FDA Raids Vape Maker Juul, Seizes ‘Thousands’ of Documents [Gizmodo]

More on e-cigarettes: The FDA Just Threatened to Crack Down on E-Cig Companies Like Juul

The Military Wants Robots That Can Explore Tunnels Barely Large Enough to Fit a Human


The U.S. military’s research agency wants help navigating the subterranean world. And since it doesn’t look like a ragtag team of sewer-dwelling mutant turtles will be enlisting in the armed forces any time soon, the agency is turning to the next best option: engineers.

Last week, the Defense Advanced Research Projects Agency (DARPA) announced six teams of roboticists that will compete in its Subterranean Challenge, a multi-year competition with $2.75 million up for grabs. Each team will receive funding to create robots that can map, navigate, and search complex underground environments, such as man-made tunnels, natural cave systems, and subterranean urban infrastructure.


The teams can choose to compete in one of two tracks — systems or virtual — or in both.

For the systems track, a team will create robots it can demonstrate on a physical course. Some of the spaces on this course may be barely large enough for a human to navigate, while others could be big enough for an all-terrain vehicle.

For the virtual track, a team will need to create the software for subterranean robots and demonstrate it in a simulated environment boasting a wider range of scenarios than in the physical tests.


In any case, the teams must be ready to compete in the first of DARPA’s tests in the fall of 2019. After a final competition in the fall of 2021, DARPA will award the winner of the systems track $2 million and the winner of the virtual track $750,000. The competition’s judges will place a premium on autonomy, since communicating with robots while they’re deep underground can be difficult.

Perhaps more important than earning the big payout, though, these teams of roboticists could help the military save lives if the U.S. ever finds itself in a situation like the daring rescue of children from a Thai cave earlier this year. We’d like to see mutant turtles manage that.

READ MORE: Modular Robots Being Developed to Navigate Tunnels and Caves [The Engineer]

More on cave rescues: Elon Musk Is Sending a Team of Engineers to Help Rescue Trapped Thai Boys

The Stuff in IKEA’s Biodegradable Packaging Will Now Make Tasty Lab-Grown Meats


Ecovative, the startup that makes biodegradable packaging for furniture seller IKEA, says that the same mushroom roots it uses to pack up tables and chairs could be used to create the next generation of delicious lab-grown meats.

“This is the next natural step in this evolution to use natural products to make things,” said co-founder Eben Bayer, in an interview with Business Insider.


The problem Ecovative wants to solve, Bayer told Business Insider, is that while many lab-grown meat startups have succeeded in growing individual cells from livestock into sausages and burgers, they’ve struggled to recreate the complex anatomy of a chicken breast or a fatty steak.

That’s where Bayer thinks his company’s mycelium, or mushroom roots, could help. Using a formula similar to the mixture of mycelium and discarded farm materials it’s turned into green packaging for IKEA and Dell, he said that Ecovative has created a “scaffold” that lets meat cells grow into ropes of muscle and layers of succulent fat.


The carbon emissions of farm-grown meat are colossal. A tasty lab-grown meat with a low carbon footprint could be a game changer — not just for your dinner plate, but for the future of the planet.

Bayer didn’t say whether his company has locked down any industry partners, like Memphis Meats or New Age Meats. Unless Ecovative plans to market its own fake meat, that’ll be a key step to getting its food into hungry mouths.

READ MORE: A startup that turns mushrooms into IKEA packaging wants to become the backbone of the lab-grown meat industry [Business Insider]

More on lab-grown meat: Companies Are Betting on Lab-Grown Meat, but None Know How to Get You to Eat It

Traveling to Mars Could Cause Life-Threatening Damage to Astronauts’ Guts, Says Study


Getting humans to Mars could cost upwards of $1 trillion. But for the astronauts making the journey, the cost could be even higher — they might pay with their lives.

According to a new NASA-funded study conducted by researchers at Georgetown University Medical Center, exposure to galactic cosmic radiation during a trip to Mars could leave astronauts with permanent and potentially deadly damage to their gastrointestinal tissue.


On Earth and in its orbit, people and animals are protected from certain types of cosmic radiation by the planet’s magnetic field. For their study, published Monday in the Proceedings of the National Academy of Sciences, the researchers attempted to simulate the conditions found in deep space by blasting 10 male mice with doses of heavy ion radiation — the equivalent, they said, of what a human astronaut would be exposed to on a deep space journey that lasted several months.

The researchers then euthanized the mice and studied samples of their intestinal tissue. They found that the guts of irradiated mice hadn’t been absorbing nutrients properly, and had formed cancerous polyps. Even worse, the damage appears to be permanent — mice killed and dissected after a year still hadn’t recovered.


The findings are alarming because we don’t yet have a way to protect astronauts from cosmic radiation.

“With the current shielding technology, it is difficult to protect astronauts from the adverse effects of heavy ion radiation,” said researcher Kamal Datta in a press release. “Although there may be a way to use medicines to counter these effects, no such agent has been developed yet.”

Of course, mice aren’t the same as humans, so the actual effect of radiation on astronauts is still largely unknown. However, if we ever hope to send humans to Mars and beyond, nailing down the effects of deep space on astronaut health will need to be a top priority.

READ MORE: Animal Study Suggests Deep Space Travel May Significantly Damage GI Function in Astronauts [Georgetown University]

More on astronaut health: Traveling to Mars Will Blast Astronauts With Deadly Cosmic Radiation, New Data Shows

Google Used Its Most Sophisticated AI yet to Create Pictures of Burgers


Artificial intelligence and human intelligence aren’t the same, to be sure. But it seems we have one big thing in common: we spend our time fantasizing about dinner.

In a research paper published Friday on the preprint server arXiv, a team of AI researchers from Google DeepMind teamed up with a scientist from Heriot-Watt University to develop what they’re calling the largest, most advanced Generative Adversarial Network (GAN) ever. And to prove that it works better than any other, they had it create photorealistic images of landscapes, (very) good dogs and other animals, and of course some hot, juicy burgers.


Generative Adversarial Networks are among the more sophisticated types of AI algorithms out there. In short, one network, the generator, creates something — in this case an image — as realistically as possible. Meanwhile, a second network, the discriminator, checks that work against examples of the real deal. This back and forth makes both networks gradually improve, until the discriminator is very good at detecting AI-generated images and yet the generator can still fool it.
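That adversarial loop is easier to see in miniature. Here’s a deliberately tiny sketch — nothing like DeepMind’s actual model — in which a one-parameter “generator” learns to imitate 1-D data while a logistic “discriminator” tries to tell real samples from fakes. All the names and numbers are our own illustration, not anything from the paper.

```python
import math
import random

random.seed(0)

# "Real" data the generator must imitate: noisy samples around 3.0.
REAL_MEAN, STD = 3.0, 0.5
BATCH = 64

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator G(z) = theta + z has one parameter; the discriminator
# D(x) = sigmoid(w*x + b) is a simple logistic classifier.
theta, w, b = 0.0, 0.1, 0.0

for step in range(500):
    # Several discriminator updates per generator update keep the
    # critic close to optimal, which damps the adversarial oscillation.
    for _ in range(5):
        real = [random.gauss(REAL_MEAN, STD) for _ in range(BATCH)]
        fake = [theta + random.gauss(0.0, STD) for _ in range(BATCH)]
        dw = db = 0.0
        for x in real:                 # push D(x) toward 1 on real data
            d = sigmoid(w * x + b)
            dw += (1 - d) * x
            db += (1 - d)
        for x in fake:                 # push D(x) toward 0 on fakes
            d = sigmoid(w * x + b)
            dw -= d * x
            db -= d
        w += 0.1 * dw / (2 * BATCH)
        b += 0.1 * db / (2 * BATCH)

    # Generator update: nudge theta so fakes fool the discriminator
    # (the standard non-saturating loss: maximize log D(fake)).
    fake = [theta + random.gauss(0.0, STD) for _ in range(BATCH)]
    grad = sum((1 - sigmoid(w * x + b)) * w for x in fake) / BATCH
    theta += 0.01 * grad

print(round(theta, 2))  # theta has drifted from 0.0 toward the real mean
```

Scaled up from one parameter to millions, and from scalars to images, this is the same tug-of-war that turned DeepMind’s beefy blobs into burgers.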

GANs are generally used to create media, whether it’s a new level of a video game or a 3D model. And though their ability to fool both us and each other is a bit of a double-edged sword, their ability to tell algorithmic output from human output can be used to find and fight misleading deepfakes.

This process is what allowed DeepMind’s burger-cooking GAN to go from creating, as Quartz reported, a weird, beefy blob in 2016 to what actually looks like an appetizing (albeit overcooked) slab of burg today.


Some argue that we need to move away from red meat, but AI isn’t ready to join us yet. While the burger-generating algorithm is in great shape, others aren’t quite there. For instance, the twists and turns of brass tubing that make up a French horn baffled the network. The butterfly image is just a little off, and any attempt to render a photo of a human results in a horrifying blob monster.

READ MORE: In just two years, AI has managed to make a slightly more appetizing cheeseburger [Quartz]

More on GANs: If DARPA Wants To Stop Deepfakes, They Should Talk To Facebook And Google

The UK Government Is Planning to Regulate Hate Speech Online

It’s an ugly reality we see in every corner of the web: racism, bigotry, misogyny, political extremism. Hate speech seems to thrive on the internet like a cancer.

It persists and flourishes on social media platforms like Facebook, Twitter, and Reddit — they certainly don’t claim to welcome it, but they’re having a hell of a time keeping it in check. No AI is yet sophisticated enough to flag all hate speech perfectly, so human moderators have to join the robots in the trenches. It’s an imperfect, time-consuming process.

As social media sites come under increasing scrutiny to root out their hate speech problem, they also come up against limits for how much they can (or will) do. So whose responsibility is it, anyway, to moderate hate speech? Is it up to online platforms themselves, or should the government intervene?

The British government seems to think the answer is both. The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) — the department responsible for regulating broadcasting and the internet — are drafting plans for regulation that would make platforms like Facebook and Twitter legally responsible for all the content they host, according to Buzzfeed News.

In a statement to Futurism, the DCMS says that it has “primarily encouraged internet companies to take action on a voluntary basis.” But progress has been too slow, it says, which is why it now plans “statutory intervention.”

But is this kind of government intervention really the right way forward when it comes to hate speech online? Experts aren’t convinced it is. In fact, some think it may even do more harm than good.

Details about the DCMS plan are scant — it’s still early in development. What we do know so far is that the legislation, Buzzfeed reports, would have two parts. One: it would introduce “take down times” — timeframes within which online platforms have to take down hate speech, or face fines. Two: it would standardize age verification for Facebook, Twitter, and Instagram users. A white paper detailing these plans will reportedly be published later this year.

Why should the government intervene at all? Internet platforms are already trying to limit hate speech on their own. Facebook removed more than 2.5 million pieces of hate speech and “violent content” in the first quarter of 2018 alone, according to a Facebook blog post published back in May.

Indeed, these platforms have been dealing with hate speech for as long as they’ve existed. “There’s nothing new about hate speech on online platforms,” says Brett Frischmann, a professor in Law, Business and Economics at Villanova University. The British government may be rushing its legislation through too quickly to come up with anything that will work the way it’s supposed to.

Unfortunately, hate speech is a game of whack-a-mole that moves far faster than the platforms can keep up with. As a result, a lot of it goes unmoderated. For instance, hate speech from far-right extremist groups in the U.K. often still falls through the cracks, fueling xenophobic beliefs. In extreme cases, that kind of hate speech can lead to physical violence and the radicalization of impressionable minds on the internet.


Jim Killock, executive director of the Open Rights Group in the U.K. — a non-profit committed to preserving and promoting citizens’ rights on the internet — thinks the legislation, were it to pass tomorrow, wouldn’t just be ineffective. It might even prove counterproductive.

The rampant hate speech online, Killock believes, is symptomatic of a much larger problem. “In some ways, Facebook is a mirror of our society,” he says. “This tidal wave of unpleasantness, like racism and many other things, has come on the back of [feeling] disquiet about powerlessness in society, people wanting someone to blame.”

Unfortunately, that kind of disillusionment with society won’t change overnight. And a policy that addresses only the symptoms of it, rather than the underlying causes, is a mistake: by censoring people who already feel silenced, the government risks reinforcing their grievances. That holds even when the people being censored really are spreading hate speech themselves.

Plus, a law like the one DCMS has proposed would effectively make certain kinds of speech illegal, even if that’s not what the law says. Killock argues that while a lot of online material may be “unpleasant,” it often doesn’t violate any laws. And it shouldn’t be up to companies to decide where the line between the two lies, he adds. “If people are breaking the law, it frankly is the job of courts to set those boundaries.”

But there’s good reason to avoid redrawing the legal boundaries of which online behavior gets policed, even when that behavior is technically legal: to do so, the government might have to adjust much more sweeping common law concerning freedom of speech. That is probably not going to happen.

The UK government’s plans are still in the development stage, but there are already plenty of reasons to be skeptical that the law would do what the government intends. Muddying the boundaries between illegal and non-illegal behavior online sets a dangerous precedent, and that could have some undesirable consequences — like wrongfully flagging satirical content as hate speech for instance.

The DCMS is setting itself up for failure: censoring content online will only embolden its critics, while failing to address the root issues. It has to find a middle ground if it wants a real shot: too much censorship, and the mistrust of those who feel marginalized will keep building. Too little regulation, and internet platforms will continue to host content that makes many users feel unwelcome, or that spills over into violence.

The U.K. government has a few tactics it could try before it decides to regulate speech online. The government could incentivize companies to strengthen the appeal process for taking down harmful content. “If you make it really hard for people to use appeals, they may not use them at all,” Killock argues. For instance, the government could introduce legislation that would ensure each user has a standardized way of reporting problematic content online.

But it will take a much bigger shift before we are able to get rid of hate speech in a meaningful way. “Blaming Facebook or the horrendous people and opinions that exist in society is perhaps a little unfair,” Killock says. “If people really want to do and say these [hurtful] things, they will do it. And if you want them to stop, you have to persuade them that it’s a bad idea.”

What do those policies look like? Killock doesn’t have the answer yet. “The question we have really is, how do we make society feel better about itself?” says Killock. “And I’m not pretending that that’s a small thing at all.”

More on regulating speech online: Social Media Giants Need Regulation From a Government That’s Unsure How To Help

A Nonprofit Plans to Encode Human Knowledge in DNA and Store It on the Moon


Imagine, for a second, that human life has been snuffed out like the flame of a candle. Wouldn’t it be nice if we’d archived the sum of our knowledge for whoever might come along next?

That’s the idea behind the Arch Mission Foundation, a nonprofit exploring ways to store vast amounts of information in formats that will last for “thousands to millions” of years. After all, spreading caches of information around the solar system is the ultimate backup.

Its latest project: encoding important books and crowdsourced images into synthetic DNA molecules, and storing them on the Moon.


To tackle the DNA project, Arch Mission is collaborating with Microsoft, the University of Washington, and the Twist Bioscience Corporation. The collaborators chose DNA, they wrote in a press release, because it can store information in an ultra-compact form.

“Using DNA as a building block you can write and store information in an extremely small volume,” said Arch Mission co-founder Nova Spivack in an interview with Scientific American. “A tiny liquid droplet could contain Amazon’s entire data center. You can then replicate it inexpensively to create literally billions of copies.”
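The reason DNA is so compact is that each of its four bases can carry two bits of data. The sketch below uses the simplest possible mapping to show the idea; it’s purely illustrative — real DNA storage pipelines, including the Microsoft and University of Washington work, layer error-correcting codes on top and avoid troublesome sequences like long repeats.

```python
# Illustrative 2-bits-per-base scheme: each pair of bits becomes one base.
BASE_FOR = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR = {base: bits for bits, base in BASE_FOR.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand."""
    bits = "".join(BITS_FOR[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Moon")
print(strand)  # CATCCGTTCGTTCGTG -- 16 bases for 4 bytes
```

At two bits per base, and with bases packed at molecular scale, the arithmetic behind Spivack’s “data center in a droplet” claim starts to look plausible.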


Once they’ve stored the data in the DNA — which will include 20 books selected by Project Gutenberg and 10,000 crowdsourced images of everyday life — the plan is to send it to the Moon on an Atlas V rocket in 2020, according to Scientific American. 

The Moon project is just an early step in Arch Mission’s very ambitious plans, which include building a vast repository of human knowledge and placing copies on planets, asteroids, comets, and moons around the solar system.

It’s not clear, Spivack acknowledged in the Scientific American interview, how likely it is that future life forms would ever actually stumble across one of those archives. But the project, he argued, is also a “grand gesture that brings together our hopes and dreams about becoming a spacefaring civilization.”

READ MORE: Lunar library to include photos, books stored in DNA [University of Washington]

More on the Moon: Why Are We Going to the Moon, Again? Oh Right, to Make It a “Gas Station for Outer Space”

An AI Took a Road Trip and Wrote a Terrible Novel About It


Have you ever dreamed of leaving it all behind, setting out on that cross-country road trip, and coming back with the manuscript you always swore you’d write? Too late. Now robots are even coming after your great American novel.

A bunch of interconnected neural networks wrote a novel called “1 the Road” during a road trip from Brooklyn to New Orleans. Programmer Ross Goodwin built the system to churn out bursts of prose based on inputs including GPS coordinates, Foursquare data, and a camera with image recognition software.

There’s not much in the way of a story in the novel — though to be fair, that’s seldom stopped mediocre human novelists. And what it lacks in a compelling narrative it makes up for by demonstrating the experimental, artsy potential of AI-generated text.


While The Atlantic compared the AI to Jack Kerouac and guessed at hidden meanings in the text, it’s important to remember that we’re talking about nonsense passages like:

The table is black to be seen, the bus crossed in a corner. A military apple breaks in. Part of a white line of stairs and a street light was standing in the street, and it was a deep parking lot.

If you’re having a hard time gleaning the deep, hidden meaning from that excerpt, it’s likely because there isn’t one.


Goodwin trained his neural nets with databases of poetry and literature. So while “1 the Road” might seem artsy compared to Siri or Alexa, that’s only because the AI that produced it had been forced to read more lyrical, provocative language than a typical chatbot.
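You can see how strongly output echoes training data with something far cruder than Goodwin’s neural networks. This toy character-level Markov chain (our own stand-in example, not his system) can only ever recombine fragments of its corpus — which is, loosely, why a net fed poetry sounds “artsy”:

```python
import random

random.seed(1)

# Toy character-level Markov chain: for each 3-character window seen in
# the corpus, remember which characters can follow it, then sample a chain.
text = "the road goes on and the night goes on and the road is long and "
order = 3
corpus = text + text[:order]  # wrap around so every state has a successor

model = {}
for i in range(len(corpus) - order):
    key = corpus[i:i + order]
    model.setdefault(key, []).append(corpus[i + order])

state = corpus[:order]
out = state
for _ in range(60):
    nxt = random.choice(model.get(state, [" "]))
    out += nxt
    state = out[-order:]

print(out)  # reads like the corpus, because that's all the model knows
```

Swap the one-line corpus for Kerouac and sensor data, and scale the lookup table up to an LSTM, and you have the rough shape of how “1 the Road” got written.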

And to say that “artificial intelligence wrote a novel” is a bit of a stretch, given that doing so usually involves crafting coherent sentences and stringing thoughts together, all things that artificial intelligence isn’t yet creative enough to do.

A page turner, this is not.

READ MORE: When an AI Goes Full Jack Kerouac [The Atlantic]

More on AI-generated literature: Artificial Intelligence Writes Bad Poems Just Like An Angsty Teen

Scientists Used CRISPR to Domesticate This Delicate Fruit

Years ago, the idea of genetic splicing was a fictional pipe dream that conjured images of bat-winged lions or bears with laser eyes. That’s not on the horizon quite yet, but scientists recently accomplished a feat of genetic manipulation that’s nearly as exciting.

They altered the genes of a fruit, the strawberry groundcherry (Physalis pruinosa), so that it can be readily cultivated and enjoyed outside its native region — Mexico as well as Central and South America — for the first time. They published their results Monday in the journal Nature Plants.

Groundcherries aren’t unheard of to American consumers, but they’re pretty hard to get. See, the plant is notoriously difficult to keep alive on farms or in gardens long enough to get the darn things to blossom in the first place. Like the cherry tomatoes to which they are closely related, groundcherries are particularly vulnerable to destruction by pests and cool temperatures.

The researchers used CRISPR-Cas9 gene editing tools to improve the size and rate of flower production for the strawberry groundcherry, which is known for its tropical vanilla flavor. They’ll use those techniques to help make the plant hardier so it can grow outside its native range.

“I firmly believe that with the right approach, the groundcherry could become a major berry crop,” said study author Zachary Lippman in a press release.

That’s exciting news, especially compared to what we normally hear from the world of CRISPR research: resilient corn and water-efficient tobacco, for example. These are important but overwhelmingly unsexy developments.

But this groundcherry research has a much greater chance of affecting everyday people. Sure, it’s just a berry that grows inside of a weird sack-like husk, but this research shows that we can domesticate a new crop in the lab over just a few years instead of a millennium on farms. If a more resilient, accessible groundcherry suddenly becomes available around the world, people can add a new type of healthy fruit to their diet.

This is all possible because scientists had already studied the tomato genome and experimented on it with CRISPR and other techniques. The groundcherry’s genes are similar enough that much of the work came down to fine-tuning existing tricks to a slightly different genetic code. As such, we may soon see CRISPR-altered versions of other hyper-local plants that have historically been hard to tame, alongside our new supply of groundcherries.

More on CRISPR-altered food: Scientists Used Genetic Modification to Create Low-Fat Pigs

The FBI Forced A Suspect To Unlock His iPhone With His Face


The advent of FaceID, a feature that lets iPhone owners log into their devices using facial recognition, means it’s never been easier to unlock a smartphone — all you’ve gotta do is look at the camera. But the technology also makes it easier for police to access data stored on suspects’ phones, raising thorny new legal questions.

Take the case of Ohio resident Grant Michalski. Forbes reports that the FBI raided Michalski’s home in August, on suspicion that he’d sent and received child pornography. Then investigators forced Michalski to unlock his personal iPhone X using FaceID, allowing them to access his chats and photos. It’s the first documented case of its kind.


The battle over whether law enforcement should be able to access suspects’ phones is hotly contested.

In 2016, the FBI tried to get into the iPhone of one of the shooters who perpetrated the 2015 terrorist attack in San Bernardino, California. The attackers were killed in a shootout with police, but officials wanted information from the shooter’s phone to find out if the couple had accomplices or had been planning further attacks. The phone was protected by a pass code that only allowed a limited number of login attempts. The FBI asked Apple for help, but the tech giant didn’t comply with its requests.

Also in 2016, the FBI tried to access the iPhone of an alleged gang member in California. This time, prosecutors wanted the suspect to provide a fingerprint to unlock the iPhone using TouchID, which a Los Angeles judge granted. But the next year, a federal judge in Chicago rejected a request to force suspects to provide their fingerprints to unlock a personal device.


We use phones to store some of the most intimate details of our lives. It might seem like a good, legal plan to allow the FBI to force a child porn suspect like Michalski to give up his data. But what if that decision weakens rights for the rest of us?

One thing is for sure: this case is bound to rekindle that discussion.

MORE ON THE FACEID UNLOCKING CASE: Feds Force Suspect To Unlock An Apple iPhone X With Their Face [Forbes]

Read More: When Can Law Enforcement Look at Your Devices? A Definitive List

Here’s What It’s like to Drive with Tesla’s New “Mad Max” Autopilot Mode


If the traffic is really bad, maybe you’ll need the spirit of action hero Mad Max to be your copilot.

In June, Tesla CEO Elon Musk tweeted that the Autopilot system on the company’s Semi truck would feature three settings for lane changes: “Standard,” “Aggressive,” and “Mad Max.”

Now it turns out the Semi isn’t the only Tesla vehicle getting a Mad Max option — the company has included it in the Autopilot Version 9 update it’s currently rolling out to all Tesla vehicles. One driver has posted two videos online of Mad Max mode in action on the freeway.


In two videos posted to YouTube, a Tesla driver going by the name Jasper Nuyens shared about 10 minutes of footage taken from behind the wheel of a Tesla with Mad Max mode enabled.

In the clips, you can see the Tesla autonomously navigate the mostly deserted freeway and overtake a truck. Aside from that, though, the clips mostly just feature the effusive narration of Nuyens and video of the Tesla plowing straight ahead.

It’s not exactly the adrenaline-pumping action you’d expect from a car bearing the Mad Max moniker, but Nuyens seems impressed in the clips, thanking Elon Musk for adding the feature.


Nuyens does note that Mad Max mode isn’t yet perfect. “I noticed that one time it wanted to change lanes to the non-existing lane — the security lane, basically — and to a closed off lane,” he said in the video.

Yeah, that doesn’t sound like a good thing.

Like any true Teslaphile, though, Nuyens doesn’t see this “viewing non-lanes as lanes” situation as a major problem, noting that drivers will just need to pay extra attention. So, while “Mad Max” mode won’t channel Charlize Theron as Furiosa quite yet, it just might keep you alert while you’re behind the wheel.

READ MORE: Watch Tesla’s New Autopilot on ‘Mad Max’ Mode at Work [Electrek]

More on Mad Max mode: Elon Musk Says Autopilot Is Getting A “Mad Max” Option