June 14, 2025 - 5:30pm

Major news outlets began reporting this week that Meta’s new AI chatbot has been automatically publishing users’ private conversations to a public feed, exposing information ranging from embarrassing to outright criminal. The feature, which launched earlier this year, defaults to making all interactions public unless users actively change their privacy settings — a decision that has resulted in elderly users and children unknowingly broadcasting their most intimate questions to the world.

The results are exactly what you’d expect: clueless baby boomers asking about genital injuries, very young people seeking help with gender transitions, and even one user asking for help cooperating with authorities to reduce a criminal sentence. Others have posted equally compromising queries, from the safety of masturbating while driving to the application of high heat to one’s genitals. These posts often include usernames and profile pictures that trace directly back to social media accounts, turning private medical anxieties and legal troubles into permanent public records.

Did Meta know this would happen? Decades of user experience research show that virtually no one changes default settings. When you make “public” the default option, you’re effectively choosing to broadcast the vast majority of user interactions. Meta even included a pop-up warning that “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.”

But warnings are useless when users don’t understand they’re publishing to a feed in the first place, and most people haven’t been conditioned to expect their chatbot conversations to show up in something resembling a social media timeline. The company’s press release cheerfully announced “a Discover feed, a place to share and explore how others are using AI,” as if turning private conversations into public entertainment were a feature rather than a catastrophic bug.

The Meta debacle is merely the most visible symptom of a broader crisis in AI privacy. According to the Electronic Frontier Foundation, AI chatbots can inadvertently reveal personal information through “model leakage.” A 2024 National Cybersecurity Alliance survey found that 38% of employees share sensitive work information with AI tools without employer permission. The Dutch Data Protection Authority has received multiple breach notifications from companies whose employees fed patient medical data and customer addresses into AI chatbots.

Even AI services that promise better privacy protections offer cold comfort. Anthropic’s Claude claims stronger default protections, while ChatGPT requires paid subscriptions to guarantee data isn’t used for training. But there’s nothing preventing these companies from changing their policies tomorrow and retroactively accessing years of stored conversations. We’re essentially trusting profit-driven corporations to resist mining the goldmines of intimate user data they’re sitting on.

Recent breaches underscore this vulnerability. OpenAI suffered a data breach that exposed internal discussions, while over one million DeepSeek chat records were left exposed in an unsecured database. The MIT Technology Review warns we’re heading toward a security and privacy “disaster” as these tools become essential for daily life. Every day, millions of users pour their medical anxieties, work secrets, and intimacy challenges into AI chatbots, creating permanent records that could be exposed, sold, or subpoenaed at any moment.

Meta’s public feed disaster simply makes visible what every AI company is doing behind closed doors: harvesting intimate conversations for profit while users bear all the risk. GDPR violations can result in fines of up to €20 million or 4% of global annual revenue, whichever is higher, but actual enforcement against AI companies remains virtually non-existent, whether in Europe or the United States. Even when companies try to comply, the documentation required by GDPR and CCPA doesn’t address how personal information is handled in AI training data or model outputs.

Put simply, nothing you tell an AI chatbot today is safe from future exposure, whether through corporate policy changes, security breaches, or legal demands. Meta’s ham-fisted episode helps us strip away the comforting illusion of privacy that other companies maintain. At least Meta’s users can see their embarrassing questions posted publicly and try to delete them. The rest of us have no idea what’s happening to our conversations.


Oliver Bateman is a historian and journalist based in Pittsburgh. He blogs, vlogs, and podcasts at his Substack, Oliver Bateman Does the Work.
