Twitter has been working on multiple policies to enforce its terms of service, minimize exposure to fake news, and cool an overheated political climate on the platform. In the past month alone, Twitter has announced that it will stop running political ads ahead of the 2020 election cycle and that it will restrict user interaction with world leaders who violate its rules.
Now, Twitter is aiming to combat another serious problem that has grown over the past year: deepfake videos.
Twitter has drafted a policy to fight deepfakes and “manipulated media,” and the company has published the draft to gather feedback from users.
While the proposal wouldn’t immediately remove deepfake videos, Twitter would flag manipulated media and warn users that what they are viewing has been altered. Here is the proposal from the company:
- place a notice next to Tweets that share synthetic or manipulated media;
- warn people before they share or like Tweets with synthetic or manipulated media; or
- add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
In addition, if a Tweet including synthetic or manipulated media is misleading and could threaten someone’s physical safety or lead to other serious harm, we may remove it.
The feedback period will close on Wednesday, Nov. 27 at 11:59 p.m. GMT.
Once the feedback is received, Twitter says it will “review the input we’ve received, make adjustments, and begin the process of incorporating the policy into the Twitter Rules, as well as train our enforcement teams on how to handle this content.”
After reviewing the feedback and fine-tuning the new policy, Twitter will announce the updated rules at least 30 days before putting them into full effect.