Artificial intelligence has become a widespread technology, with adoption increasing rapidly across industries. Enterprises in almost every sector are harnessing it to run their businesses more efficiently. However, as companies accelerate their AI efforts, certain impediments can slow adoption. The first and foremost barrier is bias, which can erode trust between humans and machines. Many AI systems are trained on biased data. Trained on good data, these systems perform well; trained on bad data, they can absorb implicit racial, ideological, and gender biases.
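To make the point about biased training data concrete, here is a minimal, hypothetical sketch: a toy hiring dataset in which one group is over-represented among positive outcomes. Any model that learns from historical frequencies inherits that skew. The data, group names, and field names are invented for illustration; this is not drawn from any real system.

```python
# Hypothetical historical hiring records: group "A" was hired far more
# often than group "B". A naive model trained on outcome frequencies
# would reproduce this disparity rather than correct it.
training = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

def hire_rate(group):
    """Positive-label rate for one group in the training data."""
    rows = [r for r in training if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

# A simple disparity check: compare positive-label rates across groups.
print(hire_rate("A"))  # 0.75
print(hire_rate("B"))  # 0.25
```

Auditing label rates like this is one of the simplest ways to surface bias in a dataset before a model is ever trained on it.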
In a recent experiment, even a state-of-the-art AI vision system could look at a picture of a person's face and produce a racial slur, a gender stereotype, or a term that impugns his or her character. The scientists who taught these machines to see have since removed some of the human prejudice lurking in the data they used during training. According to them, the changes help AI see things more fairly. However, the effort shows that removing bias from AI systems remains difficult, as they still rely on humans to train them.
Moreover, the use of artificial intelligence for deepfake videos and audio, misinformation, government surveillance, and security, along with the technology's failures to accurately identify objects, has raised concern about its long-term future. A recent report from Pega shows that consumers don't trust AI: only 25 percent of consumers said they would trust a decision made by an AI system.
Another report, commissioned by KPMG International, found that just 35 percent of the 2,200 global information technology and business decision-makers surveyed had a high level of trust in their own organisation's analytics.
The growing use of AI in sensitive areas, such as hiring, criminal justice, and healthcare, has provoked a debate about bias and fairness in the technology. For instance, in 2012, a project called ImageNet played a key role in unleashing the potential of AI by providing developers with a vast library for training computers to classify visual concepts, everything from flowers to snowboarders. Scientists from Stanford, Princeton, and the University of North Carolina paid Mechanical Turkers small sums to label over 14 million images, gradually assembling a large data set that they released for free.
Fed to a large neural network, the data set yielded an image recognition system capable of identifying objects with startling accuracy. The algorithm learned from many examples to identify the patterns underlying high-level concepts, such as the pixels that constitute the texture and shape of a cat.
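The idea of learning a class's characteristic pattern from labeled examples can be sketched in miniature. The toy below stands in for the real pipeline: each "image" is a two-dimensional feature vector, and "training" just computes a per-class average pattern (a nearest-centroid classifier), which is far simpler than the deep networks trained on ImageNet but illustrates the same supervised-learning principle. All data and class labels here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class(center, n=50):
    # Synthetic "images": noisy feature vectors clustered around a
    # class-specific pattern.
    return center + 0.1 * rng.standard_normal((n, 2))

# Two visual concepts (labels 0 and 1), each with 50 labeled examples.
X = np.vstack([make_class(np.array([0.0, 0.0])),
               make_class(np.array([1.0, 1.0]))])
y = np.array([0] * 50 + [1] * 50)

# "Training": learn each class's pattern as the mean of its examples.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Assign an input to the class whose learned pattern is nearest.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([0.05, -0.02])))  # 0 (near class 0's pattern)
print(predict(np.array([0.95, 1.10])))   # 1 (near class 1's pattern)
```

The same mechanism is also why biased labels matter: whatever patterns the annotators encoded, deliberately or not, are exactly what the model learns to reproduce.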
Concerns about AI have also flared over its use in facial recognition systems, driven by considerable growth in the installation of surveillance cameras around the world. According to a report by IHS Markit, China leads the world with 349 million surveillance cameras, while the U.S. has 70 million.