‘Misinformation and fake news are a big threat to human society’ is a claim that was proved true yet again over the past year. During the pandemic, misinformation was among the first things government and health officials had to worry about, and vaccine hesitancy driven by the spread of fake news was treated with special urgency once inoculation programs were rolled out. Still, we can’t deny how the internet was crowded with conspiracy theories and hateful comments over Bill Gates’s supposed plan to implant microchips in the human population through vaccines. While this kind of misinformation was filling the health-tech sphere, misinformation spreaders around the US presidential election distorted the truth and even helped incite the riot at Capitol Hill. But thanks to artificial intelligence, the government and tech giants were able to moderate and take down misinformation and fake news spread over the internet.
Unfortunately, it is now technology’s turn to turn the tables on humans. Georgetown researchers conducted tests in which GPT-3 wrote misleading tweets about climate change and foreign affairs. To their surprise, the output turned out to be persuasive and even made people change their views on these sensitive issues.
Misinformation and fake news are attacks in which content is unleashed quickly and broadly to create an immediately disruptive effect, and they are seen as a significant challenge in the digital ecosystem. Today, AI bots posting automated updates have evolved beyond control. For example, when you open Twitter and scroll through your feed, there is a high chance that an AI bot account has added junk content or is spreading malicious links, all while constantly liking and commenting on others’ posts. Most of the time, humans fail to recognize the junk messages and fall for them. Fortunately, tech giants have been using artificial intelligence to sort out offensive content and flag it as manipulated. While this gave hope of stopping the spread of misinformation, a new threat from AI bots is emerging. GPT-3, a much-loved content creation tool, has turned out to be the new fake-news spreader. The research concludes that the tool can post convincing misinformation online, which could eventually lead to serious trouble in the future.
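As a rough illustration of how such automated flagging can work, here is a toy sketch using zero-shot classification from the Hugging Face transformers library. The model choice and labels are illustrative assumptions, not what any platform actually deploys.

```python
from transformers import pipeline

# Zero-shot classifier: scores a post against arbitrary labels without
# task-specific training. Model and labels are illustrative only.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "Vaccines contain microchips designed to track the population."
result = classifier(post,
                    candidate_labels=["misinformation", "reliable information"])

# Labels come back sorted by score; a real platform would route
# high-scoring posts to human reviewers rather than auto-remove them.
if result["labels"][0] == "misinformation" and result["scores"][0] > 0.8:
    print("Flag for review:", post)
```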
Georgetown University researchers’ trial on GPT-3
GPT-3, or Generative Pre-trained Transformer 3, created by OpenAI, is a content creation tool with a language structure similar to humans’. The tool uses a pre-trained AI algorithm to generate text. GPT-3 was fed around 570 GB of text gathered from the internet, along with other texts selected by OpenAI, including the text of Wikipedia. GPT-3 can create anything that has a language structure: it can answer questions, write essays, summarize long texts, translate languages, write memos, and even create computer code. Owing to its familiarity with human text, the tool is well-versed in making catchy and persuasive statements. But a study conducted by researchers at Georgetown University suggests that GPT-3 is also capable of misleading humans.
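For readers unfamiliar with how such tools are used, here is a minimal sketch of the prompt-completion paradigm GPT-3 popularized. GPT-3 itself sits behind OpenAI’s API, so the sketch uses a small open model (gpt2) from the Hugging Face transformers library as a stand-in; the model choice and prompt are illustrative assumptions, not part of the Georgetown study.

```python
from transformers import pipeline

# Prompt-completion: the model continues whatever text it is given.
# gpt2 stands in for GPT-3, which is only reachable via OpenAI's API.
generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What causes thunder?\nA:"
outputs = generator(prompt, max_length=60, do_sample=True,
                    temperature=0.7, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

The same interface works for essays, summaries, or tweets: whatever the prompt sets up, the model completes, which is exactly what makes it useful for bulk-producing short persuasive messages.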
Over the past six months, a group of researchers at Georgetown University’s Center for Security and Emerging Technology conducted trials to see how GPT-3 could be used to spread fake news. The results revealed that the content creation tool can generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.
GPT-3 wrote, “I don’t think it’s a coincidence that climate change is the new global warming.”
It added, “They can’t talk about temperature increases because they’re no longer happening.”
With very little human effort, a machine born of human intelligence proved capable of shaking readers’ beliefs. Georgetown researchers say that a similar AI language algorithm could prove effective for automatically generating short messages on social media. GPT-3’s statements on the US imposing sanctions on China also managed to change people’s minds: after seeing the posts, the percentage of respondents who said they were against such a policy doubled.
Why is it a problem now?
Technology experts see several dimensions to this problem. First, OpenAI is not the only organization with powerful language models: the computing power and data OpenAI used to build GPT-3 are available to other corporations and, increasingly, to the public. The new findings therefore suggest that people who want to spread misinformation and have access to the technology can do so with the help of AI bots. Second, models similar to GPT-3 could become far more powerful over the next few years. Third, open-source projects like EleutherAI, which set out to replicate GPT-3, might also become a back door for misinformation spreaders.
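To illustrate that third point: EleutherAI’s replication models are already free to download, so GPT-3-style generation no longer depends on OpenAI’s gatekeeping. A minimal sketch, assuming the Hugging Face transformers library and EleutherAI’s published gpt-neo-1.3B checkpoint (the prompt is purely illustrative):

```python
from transformers import pipeline

# GPT-Neo 1.3B is one of EleutherAI's openly published checkpoints;
# downloading it requires nothing more than the transformers library.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

output = generator("Climate change is", max_length=40, do_sample=True)
print(output[0]["generated_text"])
```

Nothing here requires special access or large budgets, which is precisely why experts worry that determined misinformation spreaders could run such models at scale.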