Meta’s AI ambitions are under scrutiny after contractors revealed that they can view personal details Facebook users share when interacting with the company’s AI chatbots. According to multiple contract workers who spoke on condition of anonymity, they are tasked with reviewing and labeling conversations between users and Meta’s AI assistants—some of which include sensitive or private information that individuals unknowingly disclose.
The contractors report that their work involves flagging inappropriate content, correcting AI misunderstandings, and improving the chatbots’ performance. In the process, however, they are exposed to user messages containing names, addresses, phone numbers, financial details, and even intimate personal confessions.
While Meta’s privacy policy notes that user interactions with its AI tools may be reviewed for “quality assurance and training purposes,” many users remain unaware of the extent of the human oversight involved. Privacy experts warn that the practice raises data-protection concerns, especially as AI platforms are increasingly used for personal or sensitive queries.
Meta insists it employs strict confidentiality agreements and data-handling protocols to prevent misuse of information. “We take user privacy seriously and have robust systems in place to ensure data is protected,” a company spokesperson stated.
Still, the revelations underscore growing anxiety about how AI chatbots handle user data and what that means for user trust. As Meta and other tech giants race to integrate AI into their platforms, transparency about human review and data use is likely to remain a pressing issue.