Facebook starts policing deepfakes – with room for improvement

Facebook has “strengthened” its policy on manipulated media to combat misinformation, adding removal criteria for deepfakes and similar content.

According to the company’s vice president of global policy management, Monika Bickert, Facebook will now remove media which “has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”

The same is true if the media is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic” and wasn’t made for satirical or parody purposes.

This passage is clearly meant to keep so-called “deepfakes” at bay. Deepfakes are generated with artificial neural networks: a model is trained on, say, the key features of a person’s face, which can then be superimposed onto the footage of another video. Detection systems mostly rely on similar neural-network approaches, trained instead to spot the slight inconsistencies such manipulation leaves behind in the material.
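To make the detection idea concrete, below is a minimal sketch of the frame-level approach many published detectors take: a binary real/fake classifier scores face crops sampled from a video, and the video is flagged if enough frames look manipulated. Everything here is an assumption for illustration – the ResNet-18 backbone, the two-class head and the simple vote – not Facebook’s actual pipeline, and a real detector would first have to be trained on labelled real and fake footage.

```python
# Minimal, illustrative sketch of frame-level deepfake detection -- not
# Facebook's actual system. A binary real/fake classifier scores face
# crops sampled from a video; the video is flagged if enough frames
# look manipulated.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Assumption: a ResNet-18 backbone with a two-class (real/fake) head.
# In practice this would be fine-tuned on labelled deepfake datasets.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_video(face_crops, frame_threshold=0.5, video_threshold=0.5):
    """face_crops: PIL images of faces, one per sampled frame (assumed to
    come from an upstream face detector). Returns the fraction of frames
    judged fake and a boolean verdict for the whole video."""
    fake_votes = 0
    with torch.no_grad():
        for crop in face_crops:
            logits = model(preprocess(crop).unsqueeze(0))
            p_fake = torch.softmax(logits, dim=1)[0, 1]  # class 1 = "fake"
            fake_votes += int(p_fake > frame_threshold)
    fake_ratio = fake_votes / max(len(face_crops), 1)
    return fake_ratio, fake_ratio > video_threshold
```

Production systems typically go beyond single-frame votes and pool temporal cues as well, such as unnatural blinking, head-pose jitter and compression artefacts across frames.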

While Bickert mentions in her post that this form of AI-generated content is still a rarity, usage is picking up. In light of the controversies around voter manipulation across a range of countries and elections over the last couple of years, getting policies in place sooner rather than later seems like a wise move.

The new policy is the result of conversations with experts from “technical, policy, media, legal, civic and academic backgrounds”. To enforce it, Facebook makes use of independent third-party fact-checkers, without going into detail on, among other things, how they are meant to tell real from fake. Identified fakes will be labelled as such and distributed less widely in News Feed. If fake content is submitted to run as an ad, the company says it will be rejected.

Since spotting AI-generated or manipulated media is tricky, Facebook launched the Deepfake Detection Challenge in 2019, which is meant to produce more deepfake research and detection tools. It also partnered with news organisation Reuters to develop and release online courses that train newsrooms in the art of spotting tampered material.

However, anyone who has edited audio or video material before will know that omitting words or changing their order can indeed alter the meaning of a sentence. Material doctored in this way can be used to misinform just as well, which should classify it as misleading media.
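A toy example shows how little it takes (the quote below is invented for illustration): deleting a single word reverses what was said, with no AI involved at all.

```python
# Toy illustration: cutting one word out of a transcript flips its meaning.
# The quote is invented for demonstration purposes.
original = "I did not authorise the payment"
doctored = original.replace(" not", "")  # simulate the cut

print(original)  # -> I did not authorise the payment
print(doctored)  # -> I did authorise the payment
```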

Yet, reading through Bickert’s post, it becomes clear that this is exactly the kind of material the new rules don’t cover. There’s no question that determining whether someone cut out a superfluous “hmm” or a meaning-altering “not” is no easy task, but that doesn’t make the result any less misleading. Maybe that’s something Facebook’s global experts can bring up in the next discussion. Or create an AI for.