Facebook’s War On Terrorism
As the largest social network, Facebook faces the unenviable task of removing hateful messages and the promotion of terrorism from its service while also maintaining free speech and privacy. It’s a problem that all social networks, large and small, encounter, but as the biggest platform Facebook is at the forefront of this battle. It carries social responsibilities and holds great sway over large numbers of people.
Facebook has been at the centre of several high-profile investigations and events. In 2015 it removed the profile of one of the San Bernardino shooters because it contained pro-ISIS content. The profile was identified by an internal content-monitoring system; Facebook has other tools as well, but they are far from perfect. The families of three of the victims of the San Bernardino attack sued Facebook, arguing that the platform did not do enough to prevent the tragedy. Earlier in 2017, a journalist from the British newspaper The Times set up a fake terrorist profile, and Facebook did not remove it quickly enough.
Part of the responsibility lies with users, who need to report offensive profiles. However, as Facebook has discovered, this simply isn’t enough: users tend to click away from offensive material rather than complain about it. Instead, Facebook has had to adopt different tactics, and in early 2016 it put together a team “focused on terrorist content and is helping promote ‘counter speech,’ or posts that aim to discredit militant groups like Islamic State”.
Human intervention is one thing, but Facebook is also turning to artificial intelligence (AI) systems, which have the advantage of being able to scan significantly more material than humans can. Facebook uses AI to identify videos and pictures of terrorism by matching imagery against a database of known content, stopping that material before it spreads. It is refining and developing this AI service by analysing the text of posts and comments that support terrorism. The AI flags suspicious posts so that further action may be taken, and the technology will be refined over time. This includes the ability to identify related material and use it to uncover wider circles of potentially pro-terrorist users.
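The matching step described above, comparing an uploaded image against a database of known terrorist content, can be sketched in outline. The function and data below are hypothetical illustrations: production systems use perceptual hashes, which tolerate re-encoding and resizing, whereas this sketch uses an exact-match cryptographic hash purely to show the flow.

```python
import hashlib

# Hypothetical database of hashes of already-identified banned images.
# Real systems store perceptual hashes shared between platforms; a plain
# SHA-256 digest stands in here only to illustrate the matching idea.
known_banned_hashes = {
    hashlib.sha256(b"example-banned-image-bytes").hexdigest(),
}

def is_known_banned(image_bytes: bytes) -> bool:
    """Return True if the uploaded image matches the database of known content."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_banned_hashes

# An upload that matches the database would be blocked before it spreads.
print(is_known_banned(b"example-banned-image-bytes"))  # True
print(is_known_banned(b"harmless-holiday-photo"))      # False
```

The design point is that matching happens at upload time, so previously identified material never reaches other users’ feeds.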
On a similar note, Facebook is also working on a means of identifying banned users who create new accounts. The idea is that Facebook can stop these users before they are able to resume spreading the same message.
Facebook is not alone in this battle. The US government has engaged technology companies in discussions about what they can do: Apple, Facebook, Google, and Twitter have met with the White House, with the intention that the companies collaborate. Facebook is also working with Microsoft, Twitter, and YouTube to build a collective database to help detect terrorist organisations. As large as these individual companies are, it seems that combining efforts is the answer.
SOURCE [The Next Web]