OpenAI’s Viral AI Chatbot ChatGPT Can Be Misused, Says the CTO


According to Mira Murati, CTO of OpenAI, AI can be abused and “used by bad actors.” She believes that it is not too late for various stakeholders to get involved and that some regulations may be required. This article looks at why she thinks ChatGPT can be misused and should be regulated.

The popularity of the viral AI chatbot ChatGPT demonstrates the power of generative AI (artificial intelligence) and its impact on society. While some argue that AI-powered platforms can save a significant amount of time and resources, others counter that they may threaten many jobs in the future. Its creator’s concerns, however, are quite different, and far more cynical. Still, if proper checks and balances are in place, things may not get any worse.

Time reports that in an interview, Mira Murati, CTO (chief technology officer) of OpenAI, the company behind ChatGPT and DALL-E, said she fears AI will be misused and “used by bad actors.” She went on to say that it is not too late for various stakeholders to get involved and that some regulations may be required.

Speaking about the super-popular AI-powered chatbot, Murati said the company was pleased with the response, but that it still faces challenges. She stated, “We weren’t expecting this level of joy from bringing our child into the world… ChatGPT is essentially a large conversational model—a large neural net trained to predict the next word—and its challenges are similar to those of the base large language models: it may invent facts.”

OpenAI has also acknowledged that ChatGPT can give incorrect answers in multiple instances, and the official website notes that it may even produce harmful instructions or biased content. This is a problem with generative language models in general, not just ChatGPT.

Interestingly, a former Google employee expressed a similar concern about LaMDA, Google’s ChatGPT rival. According to reports, the model occasionally generated stereotypical content, which some may consider racist and sexist.

Concerning regulations, Murati stated that it is critical for companies such as OpenAI to ensure that their tools are controlled and responsible. “We’re a small group of people, and we need a lot more input in this system, input that goes beyond the technologies – definitely regulators, governments, and everyone else,” she added.

One of the many advantages of ChatGPT is that it can even review code. It may not always be accurate, but many people have said that the chatbot is a good starting point. As hackers are already using the platform to create malicious applications, regulations of some kind may become necessary in the future. Such malware can be used to gain access to sensitive information and even steal money from users.

When asked whether the company was surprised that ChatGPT has been banned in schools, Murati responded that it is surprising what people with “different backgrounds and domain expertise” can do with the technology, both positively and negatively. OpenAI recently released a tool to help examiners determine whether a piece of writing was produced with the assistance of generative AI.
