9 things you need to know about the Russian social media election ads


Russian interference in the U.S. election is no longer just speculation: Special counsel Robert Mueller brought criminal charges against 13 people and three businesses on February 16. His indictment charges the defendants with conducting “information warfare” against the U.S. — predominantly through social media. Digging into what happened, and working to prevent a similar situation, is an ongoing effort — on April 3, Facebook announced hundreds more Internet Research Agency accounts were removed from the company’s platforms.

Facebook, Twitter, and Google testified before Congress on multiple occasions in November regarding Russia-based accounts that purchased politically motivated ads in the lead-up to the 2016 presidential election. Since then, social media networks have released some of those ads and shared data on how many users were affected — most recently Tumblr. While the investigation into just how much impact those Russian social media election ads may have had continues, the companies have already uncovered a number of facts about them — here's what social media users need to know about the data.

Criminal charges have been filed — but the investigation doesn’t say if the fakes made an impact

Special counsel Mueller filed charges for conspiracy and identity theft, along with failing to register as foreign agents. The charges also include violations of laws that prohibit foreign funds from being used in U.S. elections. According to the indictment, the Russian nationals and businesses pretended to be Americans during the 2016 presidential campaign in activities ranging from social media posts to rallies. The indictment also claims that the Internet Research Agency (IRA) and other Russian-origin posts discouraged minorities from even going to the polls.

While social media posts that are part of the investigation supported President Donald Trump, the indictment doesn’t determine if those social media posts made an actual impact on the election, according to USA Today. Along with using bots and stolen identities, the foreign agents also used groups or social media pages, including one pretending to be associated with Black Lives Matter. The investigation also uncovered an email from one of those 13 nationals, Irina Kaverzina, which says, “I created all these pictures and posts, and the Americans believed it was written by their people.”

The interference crosses multiple platforms with a wider reach than first estimated

Facebook, Instagram, Twitter, and YouTube all have data indicating some political ads leading up to the election were purchased by Russian organizations. (Tumblr has now shared that 84 accounts were also used on the platform, but the network believes only organic posts, not ads, on the platform had an IRA link). Facebook ads had the largest audience, with sponsored posts reaching as many as 126 million Americans (based on estimates from November). The Facebook-owned Instagram also had 120,000 posts with Russian links, though it’s unclear how many users saw those posts.

The reach of Russian trolling ads is also much wider than originally thought. Facebook initially said 3,000 ads were purchased by Russian trolls, with a reach of around 10 million people, but that number has since grown to 80,000 posts and a 126-million reach, though the new figure also encompasses non-paid posts, images, and events.

On Twitter, at least 2,752 accounts and over 36,000 bots sharing political posts were connected to Russia, the platform shared in November. Twitter says, however, that only 0.74 percent of election-related Tweets came from those accounts, getting just 0.33 percent of impressions out of all the political Tweets between September 1 and November 15, 2016.

Google says that one group spent $4,700 on search and display ads during the election, though none of those ads were targeted toward specific states or political interests. YouTube had 1,108 English-language videos from 18 Russian trolling accounts, though not all of those were political and only three percent saw upward of 5,000 views. The company didn’t find any related Google+ ads in English, though there were some written in Russian.

The IRA isn't quitting its efforts either — but social media platforms are getting better at detecting them. Facebook recently removed another set of accounts related to the organization: 70 Facebook accounts, 138 Facebook Pages, and 65 Instagram accounts associated with the IRA. Facebook says the pages were removed solely because of who controlled them — not because of their content. Ninety-five percent of those accounts, Facebook says, posted in Russian. In total, the Pages had more than 1 million followers, while the Instagram accounts had nearly 500,000.

“The IRA has consistently used inauthentic accounts to deceive and manipulate people,” the company wrote in a blog post. “It’s why we remove every account we find that is linked to the organization — whether linked to activity in the U.S., Russia or elsewhere. We know that the IRA — and other bad actors seeking to abuse Facebook — are always changing their tactics to hide from our security team. We expect we will find more, and if we do we will take them down too.”

Platforms are already making changes as a result of the interference

While the impact of the ads isn't yet fully understood, they have already sparked changes that users will begin to see rolling out on social media platforms. Facebook and Twitter will soon start labeling political posts, including who paid for them. While ads on TV, radio, and in print are required to carry that "paid for by" disclaimer, ads online and on social media don't fall under the same regulations. The bipartisan Honest Ads Act aims to bring online ads up to the same standard, but it still needs to make it through the law-making process.

While the “paid for by” label will be easy to see, social media companies are also making changes behind the scenes. During its quarterly conference call with investors, Facebook CEO Mark Zuckerberg said that the platform’s efforts to enhance security will cut into the company’s profitability. The company will be doubling the 10,000 employees handling safety and security, along with expanding AI programs for automatically flagging suspicious activity — though that change isn’t just for spotting inauthentic political ads.

Many ads were difficult to pick up as fakes

Many of the ads had few clues indicating that they came from outside the United States. The names used were often misleading, and the largest group behind those ads is innocuously called the Internet Research Agency. On Twitter, where usernames can be pretty much anything, one account linked to Russian trolls pretended to represent Tennessee Republicans using the handle @Ten_GOP — and even members of the Trump administration retweeted some of its posts. Handles on the list of known troll accounts also included ordinary names and misspelled celebrity names like "ashleysimpsn."


Not all of the ads targeted the 2016 election directly, but they could have had an effect nonetheless. As the New York Times points out, the Internet Research Agency created Back the Badge and Blacktivists, two groups on opposite sides of issues surrounding the Black Lives Matter movement. These groups weren't necessarily election-related but could have been designed to spark chaos and unrest, the Times suggests. Others called for immigration reform and support for the Second Amendment.

Spending was comparatively low, but reach was still wide

The Internet Research Agency spent $46,000 on Facebook ads, while Trump and Clinton together spent $81 million on the platform. While it’s unclear if other Russian groups were behind some of those other ads, spending by Russian trolls was significantly less than U.S. political groups and candidates.

Advertising reach doesn’t always translate into an exact number of impressions for the same amount spent. When an ad gets more engagements in the form of likes and comments, that ad will reach more people than an ad with the same budget but fewer interactions.

While it may be impossible to determine if ads had a direct effect, they did spark actions in real life

Establishing a count and an estimated reach is one thing, but there is no way to determine whether (or how many of) those ads actually swayed voters to change their candidate. While the full outcome of the Russian interference may never be uncovered, some of those ads have been shown to have prompted real-world reactions. Republican Senator Richard Burr of North Carolina shared during the November 1 hearing that ads from the Russian-backed Heart of Texas and United Muslims of America pages sparked a protest in Houston, Texas, with about 12 anti-Muslim protesters and over 50 people countering that protest.

Ads are only part of the problem

While ads and sponsored posts are a main focus of the investigation, organic reach played a role as well. The November hearing revealed that Russian-backed Facebook pages used non-paid posts to spread misinformation about a number of hot-topic issues. On Twitter, bots helped spread the reach of organic tweets.

On March 23, Tumblr confirmed that the platform had identified and removed 84 accounts linked to the Internet Research Agency. Unlike the other platforms, however, Tumblr's investigation turned up only organic posts, not ads. The platform noted that the organization has over 1,000 staff members using phony social media accounts, which means the problem isn't limited to just bots and ads. Tumblr said users that interacted with an IRA-linked account will be notified by email, and a public record of the IRA-linked accounts will also be created.

Uncovering, then banning troll accounts isn’t so cut and dried

Social media is often forced into a tight spot between preventing abuse and hindering free speech. RT (formerly called Russia Today), a Russian international TV network with a YouTube channel, is still an active account despite being named the Kremlin’s “principal international propaganda outlet.” Google says RT doesn’t violate any community guidelines and that the content is also available on cable and satellite.

The 80,000 posts Facebook removed were taken down not because of what was in them, but because the account creators misrepresented who they were. After its first presentation to Congress, Facebook said that its advertising guidelines help prevent abuse without inhibiting free speech.

Spotting abuse on a platform with 2.1 billion monthly active users is also a challenge, as evidenced not only by the political ads but also by inappropriate ad targeting, for which Facebook recently apologized. Facebook uses a mix of A.I. algorithms and human reviewers, and says it will be expanding both to improve effectiveness.

There could be more to uncover

While a few dozen Russian-backed ads have already been shared with the public, the House intelligence committee says it is working to release all the findings after removing personal data from them. With the full release of the information yet to come, and social media companies continuing to work on solutions, social media users can expect to see more changes stemming from the investigation as well as legislation like the Honest Ads Act.

Updated on April 6: Added Facebook’s latest round of deleting IRA accounts.


Published at Fri, 06 Apr 2018 21:13:00 +0000

