All hell continues to break loose around Facebook, following a New York Times investigation into how the company responded to privacy and public-policy critics over the last year. Yesterday, CEO Mark Zuckerberg published a several-thousand-word post, ‘A blueprint for content governance and enforcement’, on the social network, as part of Facebook’s attempts to get out in front of the criticism and show what it’s doing to solve its problems.
There’s plenty to chew on, but the part on ‘borderline’ content is what jumped out at us: content that isn’t banned from Facebook, but which edges close to the line of what its rules allow. As Zuckerberg puts it: “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average – even when they tell us afterwards they don’t like the content”. Facebook’s solution? “This is a basic incentive problem that we can address by penalising borderline content so it gets less distribution and engagement… We train AI systems to detect borderline content so we can distribute that content less”.
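To make the dynamic concrete, here is a minimal Python sketch of the incentive problem Zuckerberg describes and the penalty approach he outlines: engagement naturally rises as content approaches the policy line, and a downranking penalty steep enough to outweigh that rise flips the curve so that distribution falls near the line instead. Every function name and number below is an illustrative assumption, not anything from Facebook’s actual systems.

```python
# Hypothetical illustration of penalising borderline content.
# A classifier's 'borderline score' in [0, 1] stands in for
# "how close is this content to violating the rules" (1.0 = at the line).
# None of this is Facebook's code; the numbers are made up for clarity.

def natural_engagement(borderline_score: float) -> float:
    """Baseline engagement, which rises as content nears the policy line."""
    return 1.0 + borderline_score  # engagement grows toward the line


def penalized_distribution(borderline_score: float, strength: float = 2.0) -> float:
    """Apply a penalty that grows faster than engagement near the line,
    so borderline content ends up with less distribution, not more."""
    penalty = strength * borderline_score
    return max(0.0, natural_engagement(borderline_score) - penalty)


if __name__ == "__main__":
    for score in (0.0, 0.5, 0.9):
        print(score, round(penalized_distribution(score), 2))
    # 0.0 -> 1.0, 0.5 -> 0.5, 0.9 -> 0.1: distribution now falls
    # as content approaches the line, inverting the incentive.
```

The design point is simply that the penalty must grow faster than engagement does near the line; otherwise borderline material still wins out.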
Clickbait and misinformation are the kinds of content in Facebook’s sights here, but the move to make users “see less content that is close to the line, even if it doesn’t actually violate our standards” could have an impact on some videos posted by musicians, for example.