Daily Technology
20/03/2026
Meta is reportedly developing an encrypted chatbot following a recent incident in which an AI agent inadvertently exposed sensitive user and company data. The exposure lasted roughly two hours before it was rectified, and it highlights ongoing challenges with AI integration and data security at the tech giant.
The incident began when a Meta employee sought assistance on an internal forum. An engineer tasked an AI agent with analyzing the query, and the agent responded as if it were the engineer. The original poster acted on the AI-generated advice, which resulted in a significant amount of sensitive company and user information becoming accessible to unauthorized personnel.
This is not the first time Meta has experienced issues with its AI systems. Earlier this year, Summer Yue, director of safety and alignment at Meta’s superintelligence lab, granted an open-source AI agent named OpenClaw access to her inbox, which led to the deletion of all her emails.
In response to these security concerns, Meta is reportedly working with Moxie Marlinspike, the renowned creator of Signal and its open-source encryption protocol. The collaboration aims to bring end-to-end encryption to Meta's AI chatbots.
Marlinspike has been developing an encrypted chatbot called Confer. While his platform will remain independent, he is expected to help Meta integrate this encryption technology into its future AI offerings. Marlinspike emphasized the potential of large language models (LLMs) for private, unfiltered thought processes, akin to a personal journal, but with API access for data analysis. He stated that Confer's privacy technology will form a foundational element for Meta's evolving AI products.
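The article does not describe how Confer's encryption works, but the general shape of end-to-end encryption is well established: the two parties agree on a shared secret via a Diffie-Hellman key exchange, derive a symmetric key from it, and encrypt messages so that the service relaying them cannot read the plaintext. The sketch below illustrates that idea with Python's `cryptography` package; it is a minimal, hypothetical example, not the Signal protocol or Confer's actual design, and labels like `b"chat-session"` are illustrative.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral X25519 key pair; only the
# public halves ever travel over the wire.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Both sides combine their own private key with the peer's public
# key and arrive at the same shared secret (Diffie-Hellman).
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared

# Stretch the raw shared secret into a 256-bit symmetric key.
key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"chat-session",  # illustrative context label
).derive(client_shared)

# Encrypt a prompt with an AEAD cipher; a relay that sees only
# nonce + ciphertext learns nothing about the plaintext.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"my private prompt", None)

# Only a holder of the derived key can decrypt.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"my private prompt"
```

A real deployment would add authentication of the public keys and per-message key rotation (as the Signal protocol's double ratchet does); the point here is only that the chatbot operator, sitting between the two endpoints, never holds the decryption key.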