The politically charged atmosphere that pervaded 2016 prompted much debate about the growing clout of viral online content. Internet powerhouses such as Google and Facebook, which have long claimed to be socially neutral technology companies, have come under pressure to accept responsibility for damaging content spread on their platforms, and to act on it. Much of the public ire has been directed at ‘fake news’: false, usually inflammatory stories intentionally written to capitalize on a current, highly contentious political event. These stories were seen to push truly pertinent coverage out of the social media spotlight and so present the public with a false picture of reality.
Since the end of the US presidential election, both Google and Facebook have acknowledged that this is a problem and have committed to fighting the spread of misinformation on their platforms, albeit in small ways.
Bang! contacted researchers at the Oxford Internet Institute (OII) to ask for their views on this crisis. Dr. Victoria Nash, Deputy Director of the OII, said that “there are conflicting opinions at the OII about fake news. One view says to completely clamp down on it: platforms should be persuaded to step in and moderate their content. Another view is less strict and asks whether fake news [stories] have been truly as destructive as everyone makes them out to be.” Nonetheless, she agrees that misinformation is worth fighting and, in light of Mark Zuckerberg’s recent interview with Sheryl Sandberg, it seems Facebook is ready to do its part.
“I think that what we are seeing right now is that there is a bit of anarchy because social media is so young. I know that they seem old now because they’ve been around for ten years but in terms of human history, centuries of perspective, they were born yesterday and so we’re still adapting to this new medium. Facebook’s announcement to provide tools for fact-checking is obviously a move towards more responsibility, better information online and healthier information habits,” says Professor Luciano Floridi, Director of Research at the OII, who has written about fake news for The Guardian. He insists that more supervision should be happening on what gets shared on social media, given its importance in mediating public discourse. “I think there is a wide gap between anything goes, which is not freedom, it’s total anarchy, and censorship. There’s an enormous amount of space in-between which is decent, civilized communication online and which I believe we’re going to see growing in the following years.”
Dr. Nash is more sceptical about whether we should jump to implementing regulations. “The conclusion of some wide-ranging research is that people who use social media are exposed to a far more varied pool of news sources than those who don’t. So before regulating social media we should make sure we don’t do anything that harms this diversity.”
As for what can be done, Prof. Floridi points out that frameworks for regulating content online already exist and, even though fake news is a more intricate and harder-to-define phenomenon than, say, violent images, its regulation could be guided by these existing frameworks. “The answer to the question of who is the best chess champion is not the human or the computer, it’s the human with the computer. If we combine human intelligence with technological solutions, that will be unbeatable.” Dr. Nash agrees, arguing that labelling fake news would be too delicate a task for artificial intelligence alone, and impractical if only human editors were used. “These platforms are already finding it hard to deal with things like controversial images; they have to outsource the work to developing countries. And since the expertise and training required to conclusively delineate fake from real news stories would be hard to attain, it’s hard to see them being able to cope.”
The problem with regulating fake news also lies in the fact that the ‘fakeness’ of a story is not a binary property. “I’m actually more worried about things that sound plausible than outright fake news stories. Why wouldn’t campaign machines, why wouldn’t the alt-right, why wouldn’t the moderate left be less careful about some of the social media news articles that they put on their platforms, using wrong statistics, or having an outright lack of statistics or sources?” says Dr. Nash.
Facebook has tried using human editors to supervise which stories get promoted on its ‘Trending Topics’ tab before, a practice that brought it much negative publicity last year amid accusations of liberal bias. This prompted the company to discontinue the practice and completely automate the process of choosing which stories made it to this highly coveted spot of digital real estate. Predictably, fabricated stories soon began trending.
Dr. Nash speculates that the most workable solution to content supervision will be some kind of flagging system like the one YouTube already has in place. However, the engineers of such a system would find it hard to protect it from being exploited. “Algorithmically it would be hard to implement fake news nets that would also block people from systematically flagging content they simply don’t like or consider untrue due to their biases, rather than the stories’ inherent fakeness.” This could create an environment in which minority views and unpopular opinions are easily silenced.
As a report on BuzzFeed showed, the creation of fake news is not always a politically motivated attempt to sabotage public discourse; its makers often simply craft sensationalist messages to draw as many people to their sites as possible and reap the ad revenue their visits bring. “There’s the incentive, the financial incentive to show the worst of human nature to millions of people online technologically speaking. The big Facebook, the big Google, the big X that are out there, they can do it, they can make this way less profitable for those people,” says Prof. Floridi.
Key in the fight against fake news, according to Prof. Floridi, is making sure that it is more profitable for tech companies to deal with this crisis than to let misinformation fester on their platforms. “Social media are not utilities, they are social actors, and as such they have responsibilities. Simply saying that this is a scandal, urging them to do something about it, letting them know that this is not good for business would help. Also governments, the European Union, say, can say, ‘Look, within Europe we don’t like this.’”
Prof. Floridi is confident that successful past projects like Wikipedia prove the internet can be a haven where truth is championed, and that the crisis of fake news will be dealt with in due time. “I think there is hope for the internet as a medium. We just have to work a bit harder on how we design the mechanisms to make sure that there is neither censorship, nor abuse in terms of anything goes. Shifting into censorship is very dangerous, but just because it’s tricky doesn’t mean that it shouldn’t be done or it couldn’t be done.”
(featured image courtesy of Niau33, Wikimedia Commons, CC BY-SA 4.0)