
Facebook Inc has offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.

In a blog post, Facebook’s director of global policy management, Monika Bickert, and counter-terrorism policy manager Brian Fishman explained that Facebook has ramped up its use of artificial intelligence (AI), such as image matching and language understanding, to identify and remove content quickly.

Facebook, the world’s largest social media network with 1.9 billion users, has not always been so open about its operations, and its statement was met with scepticism by some who have criticised US technology companies for moving slowly.

These efforts include the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place.

Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other providers of social media such as Google and Twitter to do more to remove militant content and hate speech.

Government officials have threatened to fine Facebook and strip the broad legal protections it enjoys against liability for the content posted by its users.

Facebook uses artificial intelligence for image matching that allows the company to see if a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates.

YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
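The shared database described above amounts to a fingerprint lookup: hash known militant content once, then check each new upload against the stored hashes. A minimal sketch of that flow in Python (the function names and database are illustrative stand-ins; real systems use perceptual hashes that tolerate re-encoding, not the exact cryptographic hash used here):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash: an exact SHA-256 digest,
    # used here only to illustrate the lookup flow.
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of known militant content.
known_fingerprints: set[str] = set()

def register_known_content(content: bytes) -> None:
    """Add a previously identified photo/video to the shared database."""
    known_fingerprints.add(fingerprint(content))

def is_known_match(upload: bytes) -> bool:
    """Check whether an upload matches any fingerprint in the database."""
    return fingerprint(upload) in known_fingerprints

register_known_content(b"previously identified video bytes")
print(is_known_match(b"previously identified video bytes"))  # True
print(is_known_match(b"some unrelated upload"))              # False
```

Because only fingerprints are shared, companies can flag matching content on their own platforms without exchanging the underlying photos or videos.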

Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organisations to develop text-based signals for such propaganda.
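One simple way to derive text-based signals from previously removed posts is to weight terms by how much more often they appear in removed content than in benign content, then score new text against those weights. A toy sketch of that idea (an assumed approach for illustration, not Facebook's actual model; all data here is made up):

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

# Illustrative corpora: posts removed for praising militant groups vs benign posts.
removed_posts = [
    "join the fighters and support the cause",
    "praise the fighters of the group",
]
benign_posts = [
    "join us for a community picnic",
    "support your local sports group",
]

def term_weights(flagged: list[str], benign: list[str]) -> dict[str, float]:
    """Weight each term by the fraction of its occurrences in removed posts."""
    f = Counter(t for p in flagged for t in tokenize(p))
    b = Counter(t for p in benign for t in tokenize(p))
    return {t: f[t] / (f[t] + b.get(t, 0)) for t in f}

weights = term_weights(removed_posts, benign_posts)

def score(text: str) -> float:
    """Average per-token weight: higher means closer to removed content."""
    toks = tokenize(text)
    return sum(weights.get(t, 0.0) for t in toks) / max(len(toks), 1)

print(score("praise the fighters") > score("community picnic"))  # True
```

Production systems would use far larger corpora and learned classifiers rather than raw term ratios, but the pipeline is the same: removed text trains the signal, and the signal scores new uploads.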

Bickert said more than half of the accounts removed for terrorism are found by Facebook staff themselves, adding that the company wants its community to know it is committed to making Facebook a hostile environment for terrorists.