PARIS – At the heart of the spread of fake news are the algorithms used by search engines, websites and social media, which are often accused of pushing false or manipulated information regardless of the consequences.
– What are algorithms? –
They are the invisible but essential computer programmes and formulas that increasingly run modern life, designed to solve recurring problems or to make decisions on their own.
Their ability to filter and seek out links in gigantic databases means it would be impossible to run global markets without them, but they can also be refined to produce personalised quotes on everything from mortgages to plane tickets.
They also run our Google searches and our Facebook newsfeeds, recommend articles or videos to us, and sometimes censor questionable content that may contain violence, pornography or racist language.
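As a deliberately simple, hypothetical illustration of what such a formula can look like (the rules and numbers below are invented, not any airline’s real pricing), a few fixed steps are enough to turn details about a traveller into a personalised plane-ticket quote:

```python
# A deliberately simple, hypothetical "algorithm": a fixed recipe that turns
# details about a traveller into a personalised plane-ticket quote.
def personalised_quote(base_price, days_until_departure, is_frequent_flyer):
    price = base_price
    if days_until_departure < 7:      # last-minute bookings cost more
        price *= 1.5
    if is_frequent_flyer:             # loyal customers get a small discount
        price *= 0.9
    return round(price, 2)

print(personalised_quote(200.0, days_until_departure=3, is_frequent_flyer=True))
# 270.0 -- the same recipe runs for every customer, with no human in the loop.
```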
Other algorithms charged with the most complex and sensitive tasks can be opaque “black boxes” which develop their own artificial intelligence based on our data.
– A skewed view of the world? –
“Algorithms can help us find our way through the huge amount of information on the internet,” said Margrethe Vestager, the European commissioner for competition.
“But the problem is that we only see what these algorithms — and the companies that use them — choose to show us,” she added.
In organising our online content, algorithms also tend to create “filter bubbles”, insulating us from opposing points of view.
During the US presidential election in 2016, Facebook was accused of helping Donald Trump by allowing often false information about his rival Hillary Clinton to circulate online, sealing people inside a news bubble.
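A rough, hypothetical sketch of how a filter bubble can emerge (not any platform’s actual code) shows the mechanism: if a feed only surfaces items resembling what a user has already clicked, opposing viewpoints are steadily crowded out.

```python
# Hypothetical illustration of a "filter bubble": the feed only surfaces
# items whose topic matches what the user has clicked on before.
def build_feed(candidate_items, click_history, feed_size=2):
    # Count how often the user clicked each topic in the past.
    topic_counts = {}
    for item in click_history:
        topic_counts[item["topic"]] = topic_counts.get(item["topic"], 0) + 1

    # Rank new items purely by how familiar their topic is; unfamiliar or
    # opposing viewpoints score zero and rarely make the cut.
    ranked = sorted(candidate_items,
                    key=lambda item: topic_counts.get(item["topic"], 0),
                    reverse=True)
    return ranked[:feed_size]

history = [{"topic": "candidate_A"}] * 20 + [{"topic": "sport"}] * 2
candidates = [{"topic": "candidate_A", "title": "Flattering story"},
              {"topic": "candidate_B", "title": "Opposing view"},
              {"topic": "sport", "title": "Match report"}]
print([item["title"] for item in build_feed(candidates, history)])
# ['Flattering story', 'Match report'] -- the opposing view never appears.
```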
Algorithms also tend to make extreme opinions “and fringe views more visible than ever”, according to Berlin-based Lorena Jaume-Palasí, founder of the AlgorithmWatch group.
However, their effects can be difficult to measure, she warned, saying that algorithms alone are not to blame for the rise in nationalism in Europe.
– Spreading fake news? –
Social media algorithms tend to push the most-viewed content without checking whether it is true, which is why they magnify the impact of fake news.
On YouTube in particular, conspiracy theory videos get a great deal more traffic than accurate and properly sourced ones, said Guillaume Chaslot, a former engineer at the Google-owned platform.
These videos, which may claim that the moon landings or climate change are lies, get far more views and comments, keeping users on the platform longer and undermining credible, traditional media, Chaslot insisted.
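A crude, hypothetical sketch of that kind of engagement-driven ranking (invented for illustration, not YouTube’s actual system) makes the point: accuracy never enters the score.

```python
# Hypothetical illustration of engagement-driven ranking: videos are ordered
# by views and watch time alone; whether they are true never enters the score.
def rank_by_engagement(videos):
    def engagement_score(video):
        # More views and longer total watch time push a video up the feed,
        # regardless of whether it is well sourced or a conspiracy theory.
        return video["views"] + 10 * video["watch_minutes"]
    return sorted(videos, key=engagement_score, reverse=True)

videos = [
    {"title": "Properly sourced report", "views": 5_000, "watch_minutes": 200},
    {"title": "Moon landing 'hoax' video", "views": 80_000, "watch_minutes": 9_000},
]
print([v["title"] for v in rank_by_engagement(videos)])
# The conspiracy video comes first because it keeps users watching longer.
```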
– More ethical algorithms? –
Some observers believe that algorithms could be programmed “to serve human freedom”, with many non-governmental groups demanding far more transparency.
“Coca-Cola doesn’t reveal its formula but its products are tested for their effect on our health,” Jaume-Palasí argued, insisting on the need for clear regulation.
The French privacy protection body, the CNIL, last year recommended state oversight of algorithms and a real push to educate people “so they understand the cogs of the (information technology) machine”.
New European data protection rules also allow people to contest the decision of an algorithm and “demand a human intervention” in case of conflict.
Some internet giants have themselves begun to act to some degree: Facebook has started an effort to automatically label suspicious posts, while YouTube is reinforcing its “human controls” on videos aimed at children.
However, former Silicon Valley insiders who make up the Center for Humane Technology, which was set up to combat tech’s excesses, have warned that “we can’t expect attention-extraction companies like YouTube, Facebook, Snapchat, or Twitter to change, because it’s against their business model.”