How Facebook Breeds Civil Conflict and Hatred Around the World
Technology Policy Brief #66 | By: Stephan Lherisson | November 28, 2021
Header photo taken from: The New York Times
Photo taken from: USA Today
Frances Haugen, a former Facebook employee, leaked internal Facebook documents to the press, federal regulators, and Congress. The documents showed how the social media platform uses potentially damaging algorithms to drive up user engagement while disregarding their negative effects, including polarization and divisiveness. Such attitudes have contributed to violence in places like Myanmar, Ethiopia, Sri Lanka, and India.
According to Haugen, Facebook’s algorithms populate your news feed with content driven by clicks: the more clicks a post gets, the more likely it is to appear. The problem is that content provoking angry responses is more likely to draw clicks, and Facebook’s algorithms are programmed to promote content that has been clicked on. In countries divided along ethnic lines, for example, inflammatory but clickable content targeting one group can become widespread, deepen divisions, and even lead to acts of violence.
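The click-driven feedback loop described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual ranking code; the post fields, scoring function, and the `anger_weight` parameter are all assumptions made for the example.

```python
# Illustrative sketch (NOT Facebook's real system): a minimal
# engagement-driven feed ranker. Posts that draw more clicks and
# angry reactions score higher and therefore surface more often,
# which is the feedback loop the brief describes.

def engagement_score(post, anger_weight=1.5):
    """Score a post by clicks plus weighted angry reactions.

    anger_weight > 1 is a hypothetical stand-in for the claim
    that anger-provoking content attracts outsized engagement.
    """
    return post["clicks"] + anger_weight * post["angry_reactions"]

def rank_feed(posts):
    """Return posts ordered from most to least 'engaging'."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_news", "clicks": 100, "angry_reactions": 2},
    {"id": "divisive_rumor", "clicks": 80, "angry_reactions": 60},
]

feed = rank_feed(posts)
# The divisive post outranks the calmer one despite fewer clicks,
# because angry reactions are weighted into its score.
```

Under this toy scoring, the divisive post scores 80 + 1.5 × 60 = 170 versus 103 for the calmer one, so it ranks first even with fewer raw clicks.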
For example, the Rohingya are a Muslim minority ethnic group found mostly in Rakhine State, Myanmar, where there have long been tensions between the Rohingya and the country’s Buddhist majority. In August 2017, attacks by Myanmar’s military drove hundreds of thousands of Rohingya into neighboring Bangladesh.
Buddhists burned down Rohingya villages and carried out murders and rapes; a report from UN investigators described these actions as genocidal.
Hate speech against the Rohingya was posted on Facebook, and Facebook commissioned an investigation into the matter. After the results were revealed, Facebook admitted its platform had been used to incite violence against the Rohingya people. Without dismissing the gravity of Facebook’s actions around the world, in Myanmar Facebook has acted as the internet for the entire country: it is how people access the internet and get information about things like COVID. Facebook has also condemned the military coup in the country and has been banned by the military as a result.
In Ethiopia, Facebook’s platform encouraged violence against ethnic minority populations in the midst of a civil war between forces loyal to Abiy Ahmed, Ethiopia’s Prime Minister, and the Tigray People’s Liberation Front (TPLF), the country’s former rulers, who were deposed by a public movement. Facebook was used to spread hate speech against Tigrayans.
In 2018, Sri Lanka saw riots fueled by anti-Muslim attitudes, following hate speech against Muslims and rumors spread on Facebook. The events prompted the Sri Lankan government to block access to the platform. In this case, a probe commissioned by Facebook found that content on the site may have led to violence against Muslims.
In the northeastern Indian state of Assam, an Assamese-speaking Hindu majority lives alongside a Bengali-speaking Muslim minority. Facebook posts calling Bengali Muslims parasites, rats, and rapists have been viewed 5.4 million times.
Facebook has also had trouble with hate speech closer to home. George Floyd was a Black man killed by a police officer who held his knee against Floyd’s neck as Floyd suffocated. Hours after Floyd’s death, there was a spike in offensive posts on Facebook.
Facebook increases profit by keeping users on the site for as long as possible, and anger-inducing posts help with that goal. According to Haugen, producers of this negative content are incentivized to keep generating clickable material. She also warned that Facebook’s policies were fanning the flames of hatred in foreign countries.
Part of the issue is that Facebook does not police its content in foreign countries the way it does in the United States. Yet even in the US, where Facebook says it has been successful in detecting and erasing hate speech, employees such as Haugen have warned that the platform removes only a small percentage of the hate speech posted.
The social media company says it is attempting to reduce the number of hate filled posts that can be found on the site using artificial intelligence (AI).
Photo taken from: Time Magazine
One problem is that AI systems have difficulty evaluating a person’s intent. In addition, Facebook’s systems do not understand the local languages of many of the countries the platform reaches, and this language gap makes it difficult to judge what is being said in its local context.
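The language-coverage gap can be illustrated with a deliberately naive example. This is a toy keyword filter invented for this brief, not Facebook's actual classifier; the blocklist terms and function names are hypothetical.

```python
# Toy illustration (NOT Facebook's real classifier): a keyword-based
# hate-speech filter built only from English terms. The same slur
# in another language, or even a plural variant, slips through,
# showing why limited language coverage weakens moderation.

ENGLISH_BLOCKLIST = {"parasite", "vermin"}  # hypothetical terms

def flags_post(text):
    """Return True if any blocklisted English keyword appears."""
    words = text.lower().split()
    return any(word in ENGLISH_BLOCKLIST for word in words)

print(flags_post("they are a parasite"))     # English term is caught
print(flags_post("ils sont des parasites"))  # French form is missed
```

A real system would use trained multilingual models rather than keyword lists, but the failure mode is the same: content in languages the system was not built for goes undetected.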
Lawmakers in the US and Europe are crafting legislation to mitigate the harmful effects of platforms like Facebook.
For example, the Filter Bubble Transparency Act would require internet platforms to offer an algorithm-free version of their services, and the proposed Digital Services Act in the E.U. would curb the misuse of algorithms to spread disinformation.