Microsoft is the latest company to announce its efforts to stop the mass spread of misinformation ahead of the 2020 presidential election. While fake news stories have been increasing in volume, another major issue is the overwhelming amount of deepfake videos on the internet.
Deepfakes manipulate a video to make it appear that a person said or did something they never actually said or did.
To combat the spread of deepfakes, Microsoft announced a new tool, the Microsoft Video Authenticator, which will “analyze a still photo or video” and give it a “confidence score,” a blog post read. The tool will identify “the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
Microsoft is not the only company taking action. “While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Monika Bickert, Facebook’s vice president of global policy management, wrote in an announcement.
Facebook will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Facebook has previously made efforts to cut back on the “fake news” that littered the platform during the 2016 election cycle, and in an effort to avoid repeating history, it has made multiple changes.
Twitter has also adopted a similar policy as social media services look to combat fake news at a crucial time.