Twitter Says It Is Ready for the Midterms, but Rogue Accounts Aren’t Letting Up

SAN FRANCISCO — Twitter created a team to automatically seek out suspicious activity, like thousands of messages intended to suppress the vote. It began coordinating closely with the Department of Homeland Security. And it recently introduced a tool to help people more easily report misleading tweets.

Ahead of the midterm elections on Tuesday, “we are more prepared than we have ever been,” said Del Harvey, Twitter’s head of trust and safety.

Yet over the past few months, Twitter has also grappled with a profusion of accounts masquerading as state Republican officials, and accounts pushing memes that falsely claimed immigration officials would be patrolling polling stations. Last week, researchers at Oxford University said Twitter now had 5 percent more false content than it did during the 2016 American presidential election.

“Never has it actually reached this threshold that we’ve seen now,” said Lisa-Maria Neudert, one of the Oxford researchers.

With Americans going to the polls on Tuesday, it is down to the wire for social media companies to show that they have clamped down on disinformation and foreign interference through their sites. The companies want to prove that the midterm elections will not be a repeat of 2016, when Russian operatives used Facebook, Twitter and YouTube to spread divisive messages in an attempt to influence how the American electorate voted.

Facebook, which has borne the brunt of scrutiny over election interference, has introduced measures to limit who can buy political ads, has hired more people to monitor what gets posted, and has constructed a “war room” to root out false information and stop it from spreading. Last week, Facebook’s chief executive, Mark Zuckerberg, told investors that the company was getting “better and better” at detecting election interference but that “there are going to be things that our systems miss no matter how well tuned we are.”

A close look at Twitter also suggests that disinformation and election interference are far from under control. Even as the company has taken steps to reduce problems, new instances of misinformation campaigns continue to surface.

“Our work on this issue is not done, nor will it ever be,” Jack Dorsey, Twitter’s chief executive, told Congress in September. He said that the company had learned valuable lessons since the 2016 election and that it was now removing 214 percent more accounts a year for violating its policies against manipulation.

[Here’s a guide to everything you need to know about the November elections.]

Twitter began looking more closely at election interference after the 2016 presidential vote. Late that year, it assembled a data science team to use technology to detect malicious and misleading behavior on its service. In particular, the team tries to identify oddities, such as clusters of accounts registered with the same email address or phone number, or accounts engaging in spammy behavior, like tweeting constantly at high-profile accounts to amplify their posts.
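To make the idea concrete, the kind of signal described here can be approximated by grouping accounts on shared registration details. The sketch below is a minimal, hypothetical illustration of that heuristic; the sample data, field names and threshold are invented and do not represent Twitter’s actual detection system.

```python
# Illustrative sketch only: group accounts that share a registration email or
# phone number, the kind of "cluster" signal described in the article.
# All handles, emails and phone numbers here are hypothetical.
from collections import defaultdict

accounts = [
    {"handle": "@vote_update_01", "email": "ops@example.com", "phone": "555-0100"},
    {"handle": "@vote_update_02", "email": "ops@example.com", "phone": "555-0101"},
    {"handle": "@local_news_fan", "email": "reader@example.net", "phone": "555-0199"},
]

def cluster_by_registration(accounts, keys=("email", "phone")):
    """Group account handles by each shared registration identifier."""
    clusters = defaultdict(list)
    for account in accounts:
        for key in keys:
            clusters[(key, account[key])].append(account["handle"])
    # Under this toy heuristic, only identifiers shared by more than one
    # account are treated as suspicious.
    return {k: v for k, v in clusters.items() if len(v) > 1}

print(cluster_by_registration(accounts))
# {('email', 'ops@example.com'): ['@vote_update_01', '@vote_update_02']}
```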

In the fall of 2017, the data science team led a study of 2016 election interference campaigns on Twitter. The findings, released in November 2017, included 50,000 Russia-linked accounts that were automated and tweeting election-related content. Twitter said it used the discoveries to home in on how to prepare for the midterms, such as improving its ability to find a high volume of automated tweets.

[Image: Jack Dorsey, Twitter’s chief executive, has said of manipulation of the service: “Our work on this issue is not done, nor will it ever be.” Credit: Tom Brenner for The New York Times]

“Unless we really build out our enforcement capacity and have teams that are focused on identifying and patterning these behavior models, we’re not going to be successful,” Ms. Harvey said.

Twitter said it also began asking for more help. In 2016, the company was not in regular contact with the Homeland Security Department or other government agencies, Ms. Harvey said. Over the summer, it began coordinating more with Homeland Security and is now also in regular contact with the F.B.I. and secretaries of state for various states, as well as Democratic and Republican campaign committees and nonprofits that track misinformation, Ms. Harvey said.

The F.B.I. and Homeland Security Department did not respond to requests for comment.

Despite these steps, Twitter’s issues have not subsided. In August, it found 50 accounts that were posing as state Republican officials, which it pulled down. In September and October, Twitter removed over 10,000 accounts masquerading as Democrats that were posting messages to discourage voting. One of the memes tweeted by the accounts falsely claimed that the Democratic National Committee was urging men not to vote in order to give women more sway over the midterms.

Over the past week, another series of memes on Twitter falsely said that Immigration and Customs Enforcement would be patrolling voting stations on Tuesday. The company removed the posts.

Last week, the Oxford researchers released their study, which concluded that political news from conservative outlets and right-wing sites such as Breitbart, Gateway Pundit and The Daily Caller was circulating on Twitter more widely than articles from traditional sources. The researchers said they classified stories from those outlets as “junk news.” In addition, the researchers said, an increasing number of liberal groups were also spreading false news.

Twitter does not categorize some of the outlets mentioned in the study as “junk news” because it said those publications “reflect views within American society.” The company also said the researchers’ methodology might have involved looking at a wide pool of tweets that would not have shown up to regular users. As a result, the research could contain “a staggering margin of error,” Yoel Roth, Twitter’s head of site integrity, tweeted last week.

This weekend, Guardians A.I., a consortium of technologists and academics focused on protecting pro-democracy groups from misinformation and cyberattacks, said it had found a troubling spike in tweets that amplified false messages about voter fraud.

The researchers, who had been tracking 200 accounts that promoted the hashtag #VoterFraud, said those accounts posted a surge of tweets calling for violence and civil war that garnered more than 112 million impressions from Friday through Sunday, compared with about 37 million impressions in the previous seven days. There may be more such misinformation campaigns, the researchers added.

“We’ve seen a proliferation of a lot of voter fraud conspiracies that all spun up in the last 24 hours,” said Brett Horvath, a co-founder of Guardians A.I.

A Twitter spokesman said that because researchers pull data en masse from the service, they sometimes collect tweets that are caught by the company’s automated filters — meaning that some examples of misleading content are never viewed by regular users.

On election night, Twitter intends to follow a template it developed during recent international elections: It plans to monitor its service for automated activity and will rely on partners like the Homeland Security Department as well as users to report misleading content. The company also said it planned to have hundreds of employees around the world working to make sure nothing goes wrong.

“We’ll be in various places looking at our computers, mostly,” Ms. Harvey said. “The team is always on and working on this.”
