For all the good that has come from the internet, the online world has also served as a powerful tool for recruiting terrorists and spreading their propaganda.
Facebook, Microsoft, Twitter and YouTube today announced they would cooperate on a plan to help limit the spread of terrorist content online. The companies said that together they will create a shared industry database that will be used to identify this content, including what they describe as the “most extreme and egregious terrorist images and videos” that have been removed from their respective services.
The group plans to create a kind of shared digital database, “fingerprinting” all of the terrorist content that is flagged. By collectively tracking that information, the companies said they could make sure a video posted on Twitter, for instance, did not appear later on Facebook.
How It Will Be Done:
Facebook describes how this database will work in an announcement in its newsroom. Each piece of content will be hashed into a unique digital fingerprint, which allows the companies' computer systems and algorithms to identify and remove it more easily and efficiently.
A database of hashed images is the same approach organizations use to keep child pornography off their services. Essentially, a piece of content is assigned a unique identifier; any copy of that file, when analyzed, produces the same hash value. Similar systems are also used to identify copyright-protected files.
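To make the idea concrete, here is a minimal sketch of how a shared fingerprint database could work. The companies have not disclosed their actual hashing scheme (which likely uses perceptual hashes that survive re-encoding, rather than exact cryptographic hashes); this sketch uses SHA-256 purely as a stand-in, and all function names are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in fingerprint: SHA-256 of the raw bytes. The real systems
    # likely use perceptual hashing robust to cropping and re-encoding.
    return hashlib.sha256(data).hexdigest()

# The shared industry database is, at its simplest, a set of known
# fingerprints contributed by the participating companies.
shared_db: set[str] = set()

def flag(content: bytes) -> None:
    # One company removes a piece of content and shares its fingerprint,
    # without sharing the content itself.
    shared_db.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    # Another company checks an upload against the shared database.
    return fingerprint(content) in shared_db

flag(b"bytes of a removed video")
print(is_known(b"bytes of a removed video"))  # True: an exact copy matches
print(is_known(b"re-encoded variant bytes"))  # False: exact hashing misses edits
```

The last line illustrates the limitation of exact hashing, and why production systems favor perceptual fingerprints: any modification to the file changes its cryptographic hash entirely.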
The effort follows a year of intense scrutiny of technology companies and the role they play in unintentionally aiding the rise of recruitment efforts by groups like the Islamic State or in the spread of terrorist messages after mass shootings. Facebook, Twitter and Google’s YouTube have repeatedly been criticized for engaging in what is essentially a game of whack-a-mole, as terrorist accounts are created just as fast as they are deleted from the services.
While the effort is beginning with the top social networks, the larger goal is to make this database available to other companies in the future, Facebook says.
Given the recent discussions about the spread of fake news on social media, one hopes this new collaboration could pave the way for the companies to work together on other initiatives going forward.
The problem of false news also damages all of social media, and has raised questions about what role the companies should play in battling that content. Some would claim these companies have no business being arbiters of the news or of what's right and wrong, and the companies themselves would be glad to remain "dumb" platforms in order to escape responsibility in the matter.