But have you noticed just how many Elon Musks there are on Twitter? A lot.
The controversial billionaire has been the target of countless scams and schemes that often use bots to rope not-so-diligent users into believing it’s him. Fake Elon Musks have been roaming Twitter for years now, some of them with handles that only slightly deviate from the real thing by a letter (like @elonmsuk), while others don’t try nearly as hard (they use Elon’s photo and user name, but with a completely different handle).
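Handles that deviate from @elonmusk by a single letter are exactly the kind of thing edit distance catches. As a minimal sketch (the `levenshtein` helper and the idea of thresholding on it are illustrative, not anything Twitter is known to use), a handle within one or two edits of a verified account's handle is a reasonable candidate for review:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# A transposed pair of letters counts as two substitutions under plain
# Levenshtein -- still close enough to flag for review.
print(levenshtein("elonmusk", "elonmsuk"))   # prints 2
print(levenshtein("elonmusk", "spacexfan"))  # much larger: unrelated handle
```

A real impersonation filter would also need to weigh display name and avatar, since many scam accounts don't bother with a close handle at all.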
The problem is that a lot of users peruse the replies, often extremely long threads, under the verified Elon Musk’s tweets. Chances are that many of them will fall for “official ETH and BTC giveaways,” a phishing scam that works like this:
Twitter users are reeled in by the pitch: a tiny “donation” (usually an infinitesimally small amount in an obscure cryptocurrency) in exchange for a huge payout? Yes, please.
That “donation” comes with sensitive information like credentials for cryptocurrency wallets.
The “giveaway” they get in return for that donation will often include coin mining malware that installs itself on the victim’s computer.
In some instances, hackers are able to steal cryptocurrencies from the victim’s wallets outright.
The scheme is so common, it’s even caught the attention of the real Elon himself.
I want to know who is running the Etherium scambots! Mad skillz …
Twitter decided to crack down on these scammers. Back in July, it started locking any account that used the display name “Elon Musk.” But the scammers are staying a step ahead, finding new ways to evade the crackdown, sometimes with amusing results. Their latest ruse involves altering the Elon Musk avatar in bizarre (and hilarious) ways to circumvent Twitter’s algorithms, as Twitter user @vogon points out.
lmfao the ethereum scammers are using some sort of content-aware scale thing to mess up elon musk’s avatar a bit to keep from getting caught now
Granted, we don’t actually know if Twitter is using algorithms like this to scan for duplicates and fake accounts. The scammers are likely trying to come out ahead, and preempt Twitter’s future actions. It also wouldn’t be in Twitter’s best interest to announce its actions to the world if it is scanning avatars for fake accounts.
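To see why warping an avatar might matter, consider a simple perceptual fingerprint like average hashing, one common technique for spotting near-duplicate images (we don't know what, if anything, Twitter actually uses). A small brightness tweak leaves the fingerprint unchanged, while a heavy geometric distortion of the kind @vogon spotted scrambles enough bits to defeat matching. A toy sketch over an 8×8 grayscale grid (all names and example images here are illustrative):

```python
def average_hash(pixels):
    """aHash over an 8x8 grayscale grid: one bit per pixel, set when the
    pixel is brighter than the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits between two fingerprints."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 8x8 "avatar": a simple brightness gradient.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]

# A slightly brightened copy produces the same fingerprint,
# so a duplicate detector would still match it.
brightened = [[min(255, p + 10) for p in row] for row in original]

# A heavy geometric warp (a transpose, standing in for a
# content-aware rescale) scrambles half the bits.
warped = [[original[c][r] for c in range(8)] for r in range(8)]

print(hamming(average_hash(original), average_hash(brightened)))  # 0
print(hamming(average_hash(original), average_hash(warped)))      # 32
```

The scammers' bet, in effect, is that any automated matcher has a distance threshold, and a weird enough warp puts their avatar past it.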
For now, it seems, the scammers are still kicking around. To find some, just (carefully) go through any replies to the real Elon Musk’s recent tweets.
Dissatisfied with Twitter’s ineffective attempts to ban the scammers, Musk took matters into his own hands. As Coin Telegraph reports, he even asked Dogecoin creator Jackson Palmer to “help get rid of the annoying scam spammers.” Notably absent from this conversation: Twitter itself.
In truth, Ethereum scammers are small fish for Twitter, yet it doesn’t seem able to get rid of them. How can a platform with a third of a billion users be trusted to delete misleading information if it can’t even address that?
This doesn’t bode well for the impending midterm elections. Once equipped with targeted misinformation and countless fake Twitter accounts, foreign forces such as the Russian Internet Research Agency will likely have an incredibly easy time spreading politically charged “fake news” on Twitter.
The freedom that comes with owning a car is almost as old as the American dream itself.
But the safety risks associated with automobiles are immense: in 2017, more than 37,000 people died in motor vehicle accidents in the U.S., according to the U.S. Department of Transportation (DOT).
One solution: accelerate the development of autonomous vehicles.
The DOT released a report Wednesday that outlines sweeping changes that would allow automated cars on public roads.
To get started, the DOT wants to “adapt the definitions of ‘driver’ to include ‘automated systems.’” It also wants to allow automated vehicles with no “traditional cabin and control features of human-operated vehicles.”
In other words, the law would no longer require vehicles to have a steering wheel or gas pedal.
There are still plenty of skeptics that believe it’s too early to make the switch to fully autonomous cars. As repeated accidents show, car makers still have a lot of kinks to iron out.
But with a solid regulatory foundation from the federal government, maybe the U.S. has a shot at becoming a world leader in autonomous vehicle tech.
Did you know some of IKEA’s biodegradable packaging is made out of mushrooms? No? Well, would you eat lab-grown meat out of the stuff? That’s what one startup wants to happen.
In case we haven’t completely ruined your appetite, get ready for a new fruit to be on your plate: scientists are working to domesticate groundcherries — a sweet fruit that’s in the tomato family — using cutting-edge gene-editing techniques.
Read on to find out what else has been captivating minds in science over the last seven days.
The Apollo Lunar Module weighed less than 11,000 pounds and was just 22 feet tall, even with its legs extended. And that’s after years of space- and weight-saving considerations.
Almost 50 years later, landers are getting bigger, not smaller (the way so many of our devices do). Bigger landers mean more room for astronauts, more fuel, and longer trips.
Take Lockheed Martin’s newest design. The aerospace corporation, which is already working on NASA’s Orion spacecraft, just proposed a towering lunar lander that’s twice the height of the Apollo module and weighs more than four times as much. The lander, should it ever be sent to the lunar surface, would be an impressive piece of engineering that could also help us envision what kind of craft we would need to send humans to Mars.
Lockheed’s lander would carry four astronauts to the moon and keep them there for a two-week mission. The company estimates that the lander’s four engines could last for five to ten flights.
It gets better: the proposal includes plans for a freakin’ “lift elevator platform to get the crew down from the cabin to the surface,” according to Lockheed Martin principal space exploration architect Tim Cichan in an interview with Ars Technica.
When exactly the lander will materialize or launch is still up in the air; according to recent documents, NASA won’t start deciding how it plans to send humans to the moon until 2024.
For now, Lockheed Martin’s plans are just renders. But it’s an exciting look at what future lunar (or Martian) landings could look like.
Amazon is the second largest employer in the United States. Its CEO just overtook Bill Gates as the richest man on the planet.
It’s hard to square that reality with the working conditions of the 575,000 Americans Amazon employs. Reports claim employees have to pee in bottles to keep up with the relentless pace at Amazon’s warehouses. The Economist found that Amazon warehouse openings actually caused wages to drop at other warehouses in the same regions.
But the days of sub-par working conditions are behind us, right? On Monday, the online retailer announced it would raise its minimum wage to $15 an hour for some 250,000 workers. It’s no small gesture: some of those employees will see significant boosts in income. Amazon even announced it will advocate in Congress for a higher federal minimum wage.
But what drove the company to announce this now?
Compensating Amazon workers fairly is an inherently good thing, but the timing of such an announcement is a little suspect.
Here are a few reasons Amazon might have agreed to the wage hike:
1. Because Bezos Actually Cares
You can bet that Amazon didn’t land at #324 on Forbes’ America’s Best Employers list by offering outstanding benefits that cost the company money. But maybe Bezos has a heart after all. What brand would want to be associated with atrocious working conditions?
2. To Prevent Competitors from Poaching Workers
Here’s one for the skeptics out there: maybe raising the minimum wage of Amazon workers was a play to keep employees from leaving for smaller competitors, or even to poach workers from those competitors, as an opinion piece in the Wall Street Journal suggests.
To take the argument further, lobbying Congress to raise the national minimum wage will also raise costs for Amazon’s competitors. And not every small business will be able to afford to compete with Amazon’s wages.
3. To Get Ahead of the “Blue Wave”

Bezos might be aligning his company with the left in the U.S. to get ahead of the “blue wave,” the anticipated wave of Democratic wins for House and Senate seats in the upcoming midterm elections.
Sen. Bernie Sanders seemed satisfied by the $15 minimum wage announcement and urged other companies to follow Bezos’ lead. “You cannot continue to pay your workers starvation wages,” Sanders told CNN’s Wolf Blitzer in an interview. “Learn from what Bezos has done. He has done the right thing. You have got to do it as well.” The rest of the appalling working conditions? Well, hopefully Amazon will figure those out too.
4. To Appease Restless Employees
Amazon has been suppressing all of its employees’ efforts to unionize ever since the company was founded in 1994, The Guardian reports.
A higher minimum wage often reduces employees’ desire to unionize to fight for better working conditions (at least a little bit). “And now, amid growing labour unrest and intense anti-union activity on Amazon’s part, a conveniently-timed wage hike,” as Motherboard notes.
Amazon has made the right decision to hold itself accountable for paying (at least more of) its half a million employees a fair, livable wage, and fighting for more companies to do the same.
Bezos is not a hero, and it would be a mistake to call him that. Money talks — and that goes not only for all the minimum wage earners out there, but especially for the richest man on the planet.
The Food and Drug Administration (FDA) just raided the headquarters of popular e-cigarette maker JUUL Labs in San Francisco.
It seized thousands of pages of documents about the vape maker’s sales and marketing strategies, according to the Wall Street Journal. The raid comes on the tail of a huge effort by the agency to stop e-cigarette makers from marketing to minors.
Juul holds considerable power in the e-cigarette market. According to a recent Wells Fargo analysis as reported by CNBC, Juul’s sales skyrocketed 783 percent in just one year; experts estimate the company controls 68 percent of the e-cigarette market, according to data compiled by Bloomberg.
The jury is still out on whether using e-cigarettes is bad for your health, especially for teens who weren’t smokers before. Vaping has been found to leave toxic chemicals in the lungs, but there’s a lot we don’t know yet.
The unfortunate reality: teens do what they want, but they often don’t know what’s good or bad for them, either. It’s time for us to do a better job at parenting them, parents and governing bodies alike.
It’s an ugly reality we see in every corner of the web: racism, bigotry, misogyny, political extremism. Hate speech seems to thrive on the internet like a cancer.
It persists and flourishes on social media platforms like Facebook, Twitter, and Reddit — they certainly don’t claim to welcome it, but they’re having a hell of a time keeping it in check. No AI is yet sophisticated enough to flag all hate speech perfectly, so human moderators have to join the robots in the trenches. It’s an imperfect, time-consuming process.
As social media sites come under increasing scrutiny to root out their hate speech problem, they also come up against limits for how much they can (or will) do. So whose responsibility is it, anyway, to mediate hate speech? Is it up to online platforms themselves, or should the government intervene?
The British government seems to think the answer is both. The Home Office and the Department for Digital, Culture, Media and Sport (DCMS), the department responsible for regulating broadcasting and the internet, are drafting plans for regulation that would make platforms like Facebook and Twitter legally responsible for all the content they host, according to Buzzfeed News.
In a statement to Futurism, the DCMS says that it has “primarily encouraged internet companies to take action on a voluntary basis.” But progress has been too slow — and that’s why it plans to intervene with “statutory intervention.”
But is this kind of government intervention really the right way forward when it comes to hate speech online? Experts aren’t convinced it is. In fact, some think it may even do more harm than good.
Details about the DCMS’ plan are scant; it’s still early in development. What we do know so far is that the legislation, Buzzfeed reports, would have two parts. One: it would introduce “take down times,” timeframes within which online platforms have to take down hate speech or face fines. Two: it would standardize age verification for Facebook, Twitter, and Instagram users. A white paper detailing these plans will allegedly be published later this year.
Why should the government intervene at all? Internet platforms are already trying to limit hate speech on their own. Facebook removed more than 2.5 million pieces of hate speech and “violent content” in the first quarter of 2018 alone, according to a Facebook blog post published back in May.
Indeed, these platforms have been dealing with hate speech for as long as they’ve existed. “There’s nothing new about hate speech on online platforms,” says Brett Frischmann, a professor in Law, Business and Economics at Villanova University. In rushing to legislate against it, the British government may not come up with anything that works the way it’s supposed to.
Unfortunately, hate speech is a whack-a-mole that moves far faster than publishers seem to be able to. As a result, a lot of it goes unmediated. For instance, hate speech from far right extremist groups in the U.K. often still falls through the cracks, fueling xenophobic beliefs. In extreme cases, that kind of hate speech can lead to physical violence and the radicalization of impressionable minds on the internet.
Jim Killock, executive director for the Open Rights Group in the U.K. — a non-profit committed to preserving and promoting citizens’ rights on the internet — thinks the legislation, were it to pass tomorrow, wouldn’t be just ineffective. It might even prove to be counterproductive.
The rampant hate speech online, Killock believes, is symptomatic of a much larger problem. “In some ways, Facebook is a mirror of our society,” he says. “This tidal wave of unpleasantness, like racism and many other things, has come on the back of [feeling] disquiet about powerlessness in society, people wanting someone to blame.”
Unfortunately, that kind of disillusionment with society won’t change overnight. And a policy that addresses only the symptoms of systemic injustice, rather than the underlying issues, risks backfiring: by censoring people who already feel silenced, the government reinforces their sense of grievance, even when those being censored are actively spreading hate speech themselves.
Plus, a law like the one DCMS has proposed would effectively make certain kinds of speech illegal, even if that’s not what the law says. Killock argues that while a lot of online material may be “unpleasant,” it often doesn’t violate any laws. And it shouldn’t be up to companies to decide where the line between the two lies, he adds. “If people are breaking the law, it frankly is the job of courts to set those boundaries.”
But there’s good reason to avoid redrawing the legal boundaries of which online behavior should be punished (even when it is technically not illegal): doing so might require the government to amend much broader common law concerning freedom of speech, and that is unlikely to happen.
The UK government’s plans are still in the development stage, but there are already plenty of reasons to be skeptical that the law would do what the government intends. Muddying the boundaries between illegal and non-illegal behavior online sets a dangerous precedent that could have undesirable consequences, like satirical content being wrongfully flagged as hate speech.
The DCMS is setting itself up for failure: censoring content online will only embolden its critics while failing to address the root issues. It has to find a middle ground if it wants a real shot: too much censorship, and the mistrust of those who feel marginalized will keep building; too little regulation, and internet platforms will continue to make many users feel unwelcome, or even enable violence.
The U.K. government has a few tactics it could try before it decides to regulate speech online. The government could incentivize companies to strengthen the appeal process for taking down harmful content. “If you make it really hard for people to use appeals, they may not use them at all,” Killock argues. For instance, the government could introduce legislation that would ensure each user has a standardized way of reporting problematic content online.
But it will take a much bigger shift before we are able to get rid of hate speech in a meaningful way. “Blaming Facebook or the horrendous people and opinions that exist in society is perhaps a little unfair,” Killock says. “If people really want to do and say these [hurtful] things, they will do it. And if you want them to stop, you have to persuade them that it’s a bad idea.”
What do those policies look like? Killock doesn’t have the answer yet. “The question we have really is, how do we make society feel better about itself?” says Killock. “And I’m not pretending that that’s a small thing at all.”
The advent of FaceID, a feature that lets iPhone owners log into their devices using facial recognition, means it’s never been easier to unlock a smartphone — all you’ve gotta do is look at the camera. But the technology also makes it easier for police to access data stored on suspects’ phones, raising thorny new legal questions.
Take the case of Ohio resident Grant Michalski. Forbes reports that the FBI raided Michalski’s home in August on suspicion that he’d sent and received child pornography. Investigators then forced Michalski to unlock his personal iPhone X using FaceID, allowing them to access his chats and photos. It’s the first documented case of its kind.
The battle over whether law enforcement should be able to access suspects’ phones is hotly contested.
In 2016, the FBI tried to get into the iPhone of one of the shooters who perpetrated the 2015 terrorist attack in San Bernardino, California. The attackers were killed in a shootout with police, but officials wanted information from one suspect’s phone to find out whether the couple had accomplices or had been planning further attacks. The phone was protected by a passcode that allowed only a limited number of login attempts. The FBI asked Apple for help, but the tech giant didn’t comply with its requests.
Also in 2016, the FBI tried to access the iPhone of an alleged gang member in California. This time, prosecutors wanted the suspect to provide a fingerprint to unlock the iPhone using TouchID, which a Los Angeles judge granted. But the next year, a federal judge in Chicago rejected a request to force suspects to provide their fingerprints to unlock a personal device.
We use phones to store some of the most intimate details of our lives. It might seem like a good, legal plan to allow the FBI to force a child porn suspect like Michalski to give up his data. But what if that decision weakens rights for the rest of us?
One thing is for sure: this case is bound to rekindle that discussion.
It’s hard to think Elon Musk knew the full extent of what he was doing when he tweeted that he was taking Tesla private.
And the consequences are substantial, even for someone who uses Twitter as recklessly as Musk. The Securities and Exchange Commission (SEC) slapped him with a subpoena and a lawsuit. On Saturday, Tesla and Musk settled with the SEC. They’ll have to cough up $20 million each, Musk will have to relinquish his seat as chairman for three years, and lawyers will have to oversee his communications.
Musk could have avoided much of the outrage and stock price instability (Tesla’s stock dropped 37 points overnight when the SEC first announced its lawsuit, then bounced back even higher than before by Monday morning) if he hadn’t turned down a much tamer settlement last Thursday, the Wall Street Journal and New York Times report.
We may never know what convinced Musk to change his mind and settle with the SEC just two days after the lawsuit (uncharacteristically, he didn’t offer any tweeted insight into his thought process).
But from where we sit now, this could be a great move for the company — and for the future of electric vehicles. The company will get all of the benefits Musk offers, with fewer of the liabilities that come with his leadership.
You’ve likely heard about some of these liabilities. From lashing out at short sellers to making weed jokes (allegedly to impress his girlfriend Grimes) to getting sued for libel by a British diver he’d accused of being a pedophile, Musk’s antics have pulled Tesla into financial uncertainty. The SEC settlement seems like the culmination of years of shenanigans.
But there are perks of Musk’s leadership, too. He’s charismatic and ambitious with a devoted following. Tesla’s stock has probably only done as well as it has because many people regard Musk as a visionary and savior of the electric car. “Historically, Tesla has had easy access to capital markets, largely due to the public’s perception of Musk as a visionary,” UBS analyst Colin Langan said in a research note, as quoted by Forbes. Kicking him out completely probably wouldn’t be great for Tesla’s business.
The settlement means Tesla gets the best of both worlds. As CEO, Elon still has quite a bit of power over the company, while a new chairman will take control of the board. That means Musk will have to answer to someone new. And the requirement for a lawyer who oversees Musk’s communications (a.k.a. his tweets) will help, too. Maybe an intermediary who can rein in a hotblooded Musk could put an end to the weed jokes and potshots aimed at short sellers.
The result, if Tesla’s lucky, will be a more predictable Musk, which will mean more stability for shareholders.
The dust is still settling over at Tesla; Musk has 45 days until he has to step down as chairman. We still don’t know how shareholders will react to a future chairman of the board. What will a Tesla without Musk as chairman look like? Whether Musk will relinquish some of that decision-making power to a new chairman (and who that chairman will be) is difficult to say. And will that chairman be able to keep Musk in line when he decides to tweet himself into a corner again?
That’s also hard to say, but one thing is clear: Musk made the right decision in stepping down as chairman. A company like Tesla, with such a bold vision of the future, is too important to be put at risk by its CEO’s impulsive tweets.