Meta Fixes AI Chatbot Bug That Exposed Private User Prompts and Responses

Meta Paid a Hacker $10K for Uncovering an AI Bug That Let Users Swap IDs to View Others’ Private Chats

Meta has quietly fixed a critical flaw in its AI chatbot platform that leaked private user prompts and AI-generated responses to strangers. Cybersecurity researcher Sandeep Hodkasia discovered the problem in December 2024, and the incident raised another round of questions about the tech giant's prioritization of privacy and user protections.

Simple Exploit Uncovered by Researcher

The vulnerability allowed unauthorized access to private content by simply altering the numeric ID assigned to each prompt. Meta's backend servers failed to verify whether the requester was the prompt's original creator, so anyone with basic technical knowledge could read conversations that were never meant for them.
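The bug described here is a classic insecure direct object reference (IDOR): the server trusts a client-supplied ID instead of checking ownership. The sketch below illustrates the pattern with a hypothetical in-memory prompt store and handler names of our own invention, not Meta's actual code:

```python
# Hypothetical prompt store: maps prompt IDs to owner and content.
PROMPTS = {
    101: {"owner": "alice", "text": "Draft my resume", "response": "..."},
    102: {"owner": "bob", "text": "Summarize my journal", "response": "..."},
}

def get_prompt_vulnerable(prompt_id, requester):
    # Flawed: returns the record for ANY valid ID, regardless of who asks.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requester):
    # Fixed: the server verifies the requester actually owns the prompt.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != requester:
        return None  # deny access rather than leak another user's data
    return prompt

# "alice" swaps her prompt ID (101) for bob's (102) in the request:
leaked = get_prompt_vulnerable(102, "alice")  # bob's private prompt leaks
denied = get_prompt_fixed(102, "alice")       # access correctly denied
```

The fix is a server-side ownership check on every object lookup; obscuring or randomizing IDs alone is not a substitute, since IDs can still be guessed or shared.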

How the Flaw Was Discovered

Hodkasia stumbled upon the flaw while studying how Meta AI handled prompt edits and regeneration. By changing prompt ID numbers in the network requests, he could retrieve AI responses belonging to other users. The bug stemmed from a lack of adequate access checks, an oversight no platform handling sensitive user data should allow.

Meta’s Response and Reward

Meta moved quickly, deploying a fix on January 24, 2025. The company said it found no evidence of misuse and rewarded Hodkasia with $10,000 for the disclosure.

Experts view the incident as a wake-up call for the industry, underscoring the pressing need for stronger access controls and ongoing auditing, especially as AI tools handle ever more personal and creative data.

Ongoing Privacy Concerns for Meta AI

The AI chatbot in question launched recently to compete with tools like ChatGPT, but early users already faced privacy confusion. Some unintentionally made their AI chats public. Now, this bug adds another layer to Meta’s ongoing struggle with user trust.

A Wake-Up Call for AI Security

Even though no traces of exploitation were found, this discovery highlights how often security lags behind the fast pace of AI innovation. Security must be built in as platforms scale; it cannot be an afterthought.