Facebook has recently been bombarded with criticism from social media users for not doing enough to keep extremist content off its platforms. In response, Facebook will introduce artificial intelligence in collaboration with human moderators, who will review content on a case-by-case basis. According to Monika Bickert, the head of Global Policy Management at Facebook, the company hopes the system will expand over time.
A blog post published Thursday illustrated how Facebook’s artificial intelligence would teach itself to identify key phrases previously flagged for being used to bolster a known terrorist group. The system could also learn to detect Facebook users who belong to a cluster of groups tied to extremist activities and movements, and to monitor and expose those who create fake accounts to spread such content across the web.
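The two signals described above — matching previously flagged phrases and measuring how concentrated a user's group memberships are in known extremist groups — can be sketched in a few lines of Python. This is purely illustrative: the phrase list, function names, and scoring are invented here and are not Facebook's actual system, which relies on machine learning rather than simple string matching.

```python
# Hypothetical sketch of the two signals described in the article.
# The phrase set and group names below are invented placeholders.

FLAGGED_PHRASES = {"example flagged phrase"}  # invented; real systems learn these

def flag_post(text: str) -> bool:
    """Return True if the post contains any previously flagged phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

def cluster_score(user_groups: set, extremist_groups: set) -> float:
    """Fraction of a user's group memberships that are known extremist groups.

    A score near 1.0 suggests the user sits inside a cluster of
    extremist-related groups; 0.0 means no overlap.
    """
    if not user_groups:
        return 0.0
    return len(user_groups & extremist_groups) / len(user_groups)
```

In practice a score like this would only be one input among many, with human moderators making the final call, as the article notes.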
Although the technology is still under development and not yet fully deployed, Brian Fishman, Facebook’s lead policy manager for counterterrorism, said the company already has a team of 150 specialists working in 30 languages to conduct such reviews.
Part of the impetus for Facebook’s new proposal was criticism of the “safe spaces” allowed on the web. Prime Minister Theresa May of Britain challenged Facebook and other social media platforms to monitor suspicious activity, including possible terrorist attacks.
“We cannot allow this ideology the safe space it needs to breed,” May said after the bombing of a concert in Manchester that killed 22 people. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”
The controversial question is what, exactly, qualifies as extremist content.
What is your take? Will Facebook’s artificial intelligence technology discourage people from joining terrorist groups or stop them from posting about terrorism?