Facebook AI

Social media platform Facebook has put itself in a difficult spot again because of its artificial intelligence (AI) algorithms. According to a report by The New York Times, a video featured clips of Black men in disputes with white civilians and police officers. Viewers received an automated prompt, generated by Facebook’s AI, asking if they wanted to “keep seeing videos about Primates.” The video had no connection to monkeys or primates.

On Friday, 3rd September, the social media platform apologized for the decision its AI had seemingly made.

Darci Groves, an ex-content design manager at Facebook, said that a friend had recently sent her a screenshot of the prompt. She then reported it to a product feedback forum for current and former Facebook employees. In response, a product manager for Facebook Watch, the company’s video service, called it “unacceptable” and said the company was “looking into the source of origin.” Groves said the prompt was horrifying and terrible.

On Thursday, 2nd September, Groves had posted the screenshot on Twitter and called on the company to fix the egregious error. Facebook apologized for what it described as an “unacceptable error” and said it was exploring how to stop this from happening again. But the company’s AI failure and its belated apology fit a familiar pattern among tech companies confronted with humiliating flaws in their technologies: they promise fixes and apologize, without fully reckoning with the biases, racism, and sexism built into their algorithms in the first place.

Growing AI biases 

Technology companies like Amazon and Google have also had problems with the subtle ways biases are injected into their algorithms.

Google Photos: The New York Times pointed to the 2015 Google Photos incident, in which Google apologized after photos of Black people were labeled as gorillas. To address the disgraceful problem, Google simply eliminated the labels for gorillas, chimps, and monkeys.

Amazon Facial Recognition: Before last year’s nationwide protests over the killing of George Floyd, Amazon profited from its facial recognition software by selling it to police departments, even though research has shown that facial recognition programs misidentify people of color more often than white people, and that police use of the technology can lead to unjust arrests that disproportionately affect Black people. Amazon suspended sales of its facial recognition software to police departments last June. Computer engineers, meanwhile, have grappled with the industry’s historical use of racially loaded coding terms such as “master” and “slave,” and some have pushed for more neutral language.

How does AI bias take place? 

AI outputs often contain anomalies that arise from algorithmic bias. The use of AI is spreading across industries, from automotive, healthcare, and manufacturing to criminal justice and hiring, and this spread has given rise to a debate about AI bias and fairness. AI bias stems from flawed assumptions made during the algorithm development process and from biased training data.

The success of any AI implementation is tied to its training data. It is not enough to have the right volume or quality of data; organizations and companies must also ensure that AI engineers do not build their own biases into their creations. When engineers allow their biases and assumptions to influence data sets, the AI systems that depend on those data sets become biased, inaccurate, and of little use. Biased data sets restrict the data supply to certain focal points and demographics, as the sketch below illustrates.
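To make the mechanism concrete, here is a minimal synthetic sketch in Python. Everything in it is hypothetical: the two groups, the features, and the 50:1 imbalance are invented for illustration and say nothing about Facebook’s actual models or data. A classifier trained on a data set dominated by one group learns that group’s pattern and performs far worse on the underrepresented group.

```python
# Minimal synthetic sketch of data-set bias (hypothetical groups and
# numbers, not any real system): group A dominates training, so the
# model learns A's label rule and underperforms on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Draw n samples whose true label follows a group-specific rule."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights > 0).astype(int)
    return X, y

# Group A's labels depend on feature 0; group B's depend on feature 1.
# The 50:1 imbalance stands in for an unrepresentative data supply.
Xa, ya = make_group(5000, np.array([1.0, 0.0]))
Xb, yb = make_group(100, np.array([0.0, 1.0]))

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group separately.
Xa_test, ya_test = make_group(1000, np.array([1.0, 0.0]))
Xb_test, yb_test = make_group(1000, np.array([0.0, 1.0]))
print("group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
# Group B's accuracy hovers near chance: the model never saw enough of
# B's pattern to learn it, even though B's labels are fully learnable.
```

Collecting more representative data for the underrepresented group, or reweighting its samples during training, narrows the gap; the broader point is that a model faithfully reproduces whatever skew its data supply contains.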

This is not the first time Facebook has struggled to fight these biases on its platforms. The New York Times reported that the social media company and Instagram failed to curb the racist abuse directed at three Black English soccer players after they missed penalty kicks in the shootout of the Euro 2020 final. Bukayo Saka, one of the players involved, condemned the social media companies’ lax responses to combating racist abuse.