
Finding Extremist Posts: Facebook Launches Artificial Intelligence


Patricia De Melo Moreira/Agence France-Presse — Getty Images

Recently, Facebook has been bombarded with criticism from social media users for not doing enough to keep extremist content off its platform. In response, Facebook will introduce artificial intelligence working in collaboration with human moderators, who will review content on a case-by-case basis. According to Monika Bickert, the head of Global Policy Management at Facebook, the company hopes that the system will expand over time.

One of the prime applications of this technology is identifying content that clearly violates Facebook’s terms of use, including videos or images of beheadings and other violent or disturbing material. The system prevents users from uploading, or re-uploading, such material to the site.

A blog post published Thursday illustrated how Facebook’s artificial intelligence would teach itself to identify key phrases that had previously been flagged for being used to bolster a known terrorist group. The system could also learn to detect Facebook users who belong to clusters of groups tied to extremist activities and movements, and to monitor and expose those who create fake accounts to spread such content across the web.
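The phrase-matching idea described above can be illustrated with a minimal sketch. The phrases, function names, and threshold here are hypothetical placeholders, not Facebook's actual system: new posts are scored against a set of previously flagged phrases, and posts that match are routed to human moderators, mirroring the AI-plus-human-review workflow the article describes.

```python
# Hypothetical placeholders standing in for phrases human moderators
# have previously flagged; Facebook's real system is far more complex.
FLAGGED_PHRASES = {"flagged phrase one", "flagged phrase two"}


def score_post(text: str, flagged: set[str] = FLAGGED_PHRASES) -> int:
    """Count how many known flagged phrases appear in a post."""
    lowered = text.lower()
    return sum(1 for phrase in flagged if phrase in lowered)


def needs_review(text: str, threshold: int = 1) -> bool:
    """Route a post to human moderators if it meets the match threshold."""
    return score_post(text) >= threshold
```

In practice a system like the one described would use learned text classifiers rather than literal substring matching, but the flow is the same: an automated score decides which posts reach a human reviewer.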

Even though the technology is still under development and not yet fully deployed, Brian Fishman, Facebook’s lead policy manager for counterterrorism, said the company already had a team of 150 specialists working in 30 languages to conduct such reviews.

Part of the reason behind Facebook’s new proposal was criticism of the “safe spaces” allowed on the web. Prime Minister Theresa May of Britain challenged Facebook and other social media platforms to monitor suspicious activity, including the planning of terrorist attacks.

“We cannot allow this ideology the safe space it needs to breed,” May said after the bombing of a concert in Manchester that killed 22 people. “Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”

The controversial question is what, exactly, qualifies content as extremist.

Facebook is hopeful that with the implementation of artificial intelligence, any form of extremism that violates its terms of use can be prevented. For now, though, the search for dangerous content remains narrowly focused.

What is your take? Will Facebook discourage people from joining terrorist groups or stop them from posting about terrorism through its artificial intelligence technology?
