It’s not just third-party apps getting the ax from Facebook — it’s fake accounts, too. On Tuesday, May 15, Facebook published its first-ever Community Standards Enforcement Report, part of an ongoing effort to restore public faith in the social network as it battles fake news and privacy scandals.
And as it turns out, when it comes to fighting the fake, there’s a lot to contend with. In fact, the company’s vice president of product management, Guy Rosen, revealed that Facebook disabled around 583 million fake accounts in the first three months of 2018 alone. For context, that’s about a quarter of the social network’s entire user base.
On average, around 6.5 million fake accounts were created every day between the beginning of 2018 and March 31. Luckily, Rosen notes that the majority of these spam accounts were disabled within just minutes of registration, largely thanks to Facebook’s artificial intelligence tools, which relieve humans of the burden of combing through the site to find the bots. That said, while A.I. is obviously useful, it’s not entirely foolproof. Facebook still estimates that between 3 and 4 percent of accounts on the platform are not real. With roughly 2.2 billion users, that works out to somewhere between 66 million and 88 million fake accounts.
Moreover, Facebook managed to find and delete 837 million spam posts in the first quarter of 2018, the vast majority of which were removed before users got the chance to report them. “The key to fighting spam is taking down the fake accounts that spread it,” Rosen noted. And this, of course, is an ongoing effort within Facebook, which also recently revealed that it is blocking access for more than 200 third-party apps found to be in violation of its data policies.
While Facebook has been quite effective at taking down instances of adult nudity and sexual activity, as well as graphic violence, the team admits that its technology “still doesn’t work that well” when it comes to hate speech. While Facebook ultimately removed 2.5 million pieces of hate speech in the first three months of the year, only 38 percent of it was flagged by A.I. tools.
“As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse,” Rosen noted. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important.”
That said, Facebook says that it is “investing heavily in more people and better technology to make Facebook safer for everyone,” and is also dedicated to transparency. “We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly, too,” Rosen concluded. “This is the same data we use to measure our progress internally — and you can now see it to judge our progress for yourselves. We look forward to your feedback.”