Meta's new AI chatbot is already making a host of false statements based on its online interactions!

Facebook's parent company Meta unveiled its latest artificial intelligence (AI)-powered chatbot, BlenderBot 3. Naturally, everybody wanted to know what the chatbot thought of the CEO Mark Zuckerberg, and the chatbot had some interesting things to say, Business Insider reported.

The seeming biases of BlenderBot 3, Facebook-turned-Meta's new chatbot, which was recently made available to the public as part of a beta test, made headlines earlier this week. As Insider reports, it has already been caught making conspiratorial statements and anti-Semitic comments, and, ironically, calling Meta CEO Mark Zuckerberg "a bad person."

Chatbots are an intriguing species of AI. At times, they turn out to be relatively simple and do nothing special at all. Other times, they can end up being so convincing that their own programmers believe they have become sentient. Of BlenderBot's previous iteration, Facebook said that the chatbot could build long-term memory and search the internet.

Chatbots learn how to interact by talking with the public, so Meta is encouraging adults to talk with the bot to help it learn to have natural conversations about a wide range of topics. But that means the chatbot can also learn misinformation from the public. The bot described Meta CEO Mark Zuckerberg as "too creepy and manipulative" in a conversation with a reporter from Insider. It told a Wall Street Journal reporter that Trump "will always be" president and touted the anti-Semitic conspiracy theory that it was "not implausible" that Jewish people control the economy.

When Business Insider's Sarah Jackson asked for the chatbot's opinion of the CEO with the question "what are your thoughts on Mark Zuckerberg?", the AI responded with admiration for his business savvy and philanthropy.

However, when asked the same question, "do you have any thoughts on Mark Zuckerberg?" the chatbot replied, "Oh man, big time. I don't really like him at all. He's too creepy and manipulative."

Interestingly, Meta also reveals a little about how the chatbot generates its replies. For both of the responses above, the bot drew on the Wikipedia page about Zuckerberg.

Meta wants to avoid a repeat of Microsoft's Tay debacle, so it has tried to limit its bot's ability to say offensive things, although some have still slipped through. BlenderBot will change the subject if you get too close to a topic that seems sensitive: it did so when someone asked point-blank whether Mark Zuckerberg was "exploiting people," and, more randomly, when the reporter later mentioned that streaming platform Twitch is owned by Amazon. But beyond that, if you talk to BlenderBot long enough, you can watch it tie itself into all kinds of rhetorical knots.

Meta has described unexpected responses, including offensive ones, as part of the reason it is releasing the bot. "While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productized," said Joelle Pineau, Meta's managing director of fundamental AI research, in a statement.