Already grappling with bias, AI algorithms are now entangled in government censorship
The world of technology already faces enough crises over racial, gender and social issues. Adding to the trouble is government censorship, which blurs the narrow line between algorithms and government ideologies. Not every government opens its doors to free speech and expression, even if it claims to. And the artificial intelligence algorithms trained in these environments become victims of those restrictions.
We are already at a point where people are demanding their rights and equality, and many countries have seen protests, and sometimes violence, over these causes. Censorship of technology, though, stands apart from those struggles, because we usually picture artificial intelligence as the technology of tomorrow. As a futuristic technology, AI still has a long way to go in dealing with governments and their censorship norms across the globe. The fight against bias, and the effort to feed AI algorithms neutral content, is already a handful: researchers are working intensely to bend the discrimination curve, and even historically loaded words like 'Nazi' and 'Negro' are treated as radioactive on the tech radar.

Censorship is a complicated issue, and deciding how we deal with it as a society is incredibly difficult, especially when information is censored before AI algorithms ever see it. Authoritarian governments frequently limit people's access to information to control narratives and prevent dissent. This is not about North Korea; that country is in a league of its own. Let us focus instead on less authoritarian countries that allow technological liberty, but with limitations.
AI algorithms are under the government censorship spell
Margaret Roberts, a political science professor at UC San Diego, and Eddie Yang, a PhD student there, wanted to explore how government censorship seeps into AI algorithms. Their research shows how censorship in training data affects AI algorithms and influences the applications built on those models. To see the impact, the researchers trained AI language algorithms on two sources: the Chinese-language version of Wikipedia, which is blocked within China, and Baidu Baike, a similar site operated by China's dominant search engine and subject to the Chinese government's censorship. Each algorithm learned word associations from the large quantity of text it was fed.
As a result, the researchers found a key difference between the final AI algorithms, one that directly reflected the information censored in China. The algorithm trained on Chinese Wikipedia placed 'democracy' near positive words such as 'stability,' and related terms like 'election' and 'freedom' also carried positive associations. The algorithm trained on Baidu Baike behaved differently: it assigned more positive scores to headlines featuring 'surveillance,' 'social control' and 'CCP.'
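The effect the researchers measured can be illustrated with a toy sketch: build simple co-occurrence embeddings from two small corpora and compare which words end up closest to a target term. Everything here, including the two tiny corpora and the `embed` and `cosine` helpers, is invented for illustration; the actual study used full word-embedding models trained on real Wikipedia and Baidu Baike text.

```python
import math
from collections import Counter, defaultdict

def embed(sentences, window=2):
    """Build a bag-of-context co-occurrence vector for every word."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two toy corpora standing in for the uncensored vs. censored sources.
corpus_a = ["democracy brings stability and freedom",
            "free election supports democracy and stability"]
corpus_b = ["democracy causes chaos and unrest",
            "unrest follows democracy and chaos"]

va, vb = embed(corpus_a), embed(corpus_b)
# The same query word picks up different neighbours from each corpus:
# 'democracy' sits near 'stability' in one and near 'chaos' in the other.
print(cosine(va["democracy"], va["stability"]))
print(cosine(vb["democracy"], vb["chaos"]))
```

The point of the sketch is that the model itself is identical in both runs; only the training text differs, and that alone is enough to flip which associations the embedding learns.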
Breaking the censorship parameters
It is unclear whether it is even a crime for AI algorithms to break government censorship. Researchers, however, have managed to evade internet censorship in the past. Initially they had to search manually for ways to circumvent censors, a process that takes considerable time. A project led by University of Maryland computer scientists shifted the balance of the censorship race: the researchers built a tool called 'Geneva' that automatically learns to circumvent censorship by exploiting gaps in censors' logic and finding bugs that, the researchers say, would have been virtually impossible for humans to find manually. Geneva is a biologically inspired AI that combines small pieces of code to experiment with sophisticated evasion strategies for breaking up, arranging and sending the data packets that travel across the internet.
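Geneva's approach, evolving packet-manipulation strategies with a genetic algorithm, can be sketched in miniature. The action names, the mock censor, and the `fitness` function below are all invented stand-ins for illustration; the real tool mutates actual TCP/IP manipulations and scores them against a live censor on the network.

```python
import random

# Toy stand-ins for Geneva-style packet actions.
ACTIONS = ["drop", "duplicate", "tamper-ttl", "fragment", "noop"]

def fitness(strategy):
    """Hypothetical score against a mock censor that is only evaded
    when packets are both fragmented and TTL-tampered."""
    score = 0.0
    if "fragment" in strategy:
        score += 1
    if "tamper-ttl" in strategy:
        score += 1
    score -= 0.1 * len(strategy)  # prefer shorter strategies
    return score

def mutate(strategy):
    """Randomly add, remove, or swap one action."""
    s = list(strategy)
    op = random.choice(["add", "remove", "swap"])
    if op == "add" or not s:
        s.insert(random.randrange(len(s) + 1), random.choice(ACTIONS))
    elif op == "remove":
        s.pop(random.randrange(len(s)))
    else:
        s[random.randrange(len(s))] = random.choice(ACTIONS)
    return s

def evolve(generations=200, pop_size=20, seed=0):
    """Keep the fittest half each generation, refill with mutants."""
    random.seed(seed)
    pop = [[random.choice(ACTIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges toward a short strategy using both useful actions
```

The design mirrors the article's description: instead of a human reasoning about the censor's logic, random mutation plus selection discovers whatever combination of packet tricks happens to slip through.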