Terrorist? There’s An Algorithm For That
By Francis West on 23rd February 2017
Facebook is reportedly planning to create Artificial Intelligence (AI) based algorithms that can spot posts relating to terrorism and bullying, thereby making the platform safer for users and deterring those who support or export violence and prejudice from using Facebook.
What’s The Problem?
The vast number of Facebook users, and the ease of signing up and sharing widely across the platform, mean that Facebook has become a place where posts reflect all views in society, from the very good to the very bad. It has also famously been a platform that defends the privacy of its users (and the use of encryption), as demonstrated by Mark Zuckerberg’s public support of Apple’s decision not to help the FBI hack an iPhone used by one of the San Bernardino shooters.
Outlined in a Letter From Mark Zuckerberg
Now, in a 5,500-word letter discussing the future of Facebook, Mark Zuckerberg has admitted that, due to the sheer scale and popularity of the platform, it is now impossible to police and censor its more subversive and criminal uses by conventional, existing means.
Facebook has also faced criticism in recent times over how one of Fusilier Lee Rigby’s killers spoke on the platform about murdering a soldier months before the attack, and over the live streaming on Facebook of an attack on a Chicago Donald Trump supporter in the so-called ‘Black Lives Matter kidnapping’.
How Will The Algorithm Work?
According to Mr Zuckerberg’s letter, the intention is to create an AI algorithm that can read text and look at photos and videos in order to judge whether anything dangerous may be happening, and to tell the difference between news stories about terrorism and actual terrorist propaganda. The aim is an algorithm that allows Facebook users to post (largely) whatever they like, within the law, because it will enable users to filter their own news feeds.
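The letter gives no technical detail, so as a purely hypothetical illustration, the user-controlled filtering idea can be sketched as a scoring step followed by a threshold the user sets. Everything here (the flagged-term list, the scoring rule, the threshold) is an invented placeholder; a real system would rely on trained models over text, images and video rather than keyword matching.

```python
# Hypothetical, vastly simplified sketch of user-controlled feed filtering.
# A post is scored by the fraction of its words that match a flagged-term
# list (an illustrative stand-in for a trained classifier's output), and
# the user chooses the threshold above which posts are hidden.

FLAGGED_TERMS = {"propaganda", "attack_plan"}  # placeholder list, not real data

def risk_score(text: str) -> float:
    """Return the fraction of words in the post that match flagged terms."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in FLAGGED_TERMS)
    return hits / len(words)

def filter_feed(posts: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only posts whose score is below the user-chosen threshold."""
    return [post for post in posts if risk_score(post) < threshold]

feed = [
    "local news report on terrorism trial",   # news coverage: stays
    "propaganda attack_plan propaganda",      # flagged content: filtered
]
print(filter_feed(feed, threshold=0.5))
# → ['local news report on terrorism trial']
```

The key design point the letter hints at is that the threshold belongs to the user, not the platform: filtering strictness becomes a personal setting rather than a single global censorship rule.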
When?
The complexity of developing an AI algorithm of this kind means that it may not be fully ready for several years, although Mark Zuckerberg has stated that Facebook intends to start dealing with some of the cases this year.
What Does This Mean For Your Business?
An algorithm of this kind could lead to a much better and safer Facebook experience for all users. The ability of users to customise and take greater control over their own online experiences is attractive, although it is important that the algorithm is not so restrictive as to amount to heavy censorship.
If the algorithm makes the platform more attractive to more users, this would be good news for business advertisers, although it may mean that businesses have to be more careful about the content of their promotional videos, photos and messages in order to remain compliant and avoid being screened out.