Signal Creator Moxie Marlinspike Is Working With Meta to Encrypt AI Conversations
Moxie Marlinspike, the creator of Signal, announced that his encrypted AI startup Confer is working with Meta to bring its privacy technology to Meta AI. The collaboration could bring end-to-end encryption to billions of AI interactions.
The person who built the most trusted encrypted messenger in the world is now trying to encrypt your AI conversations. Moxie Marlinspike, creator of Signal, announced that his startup Confer is working with Meta to integrate its privacy technology into Meta AI. If successful, this could mean end-to-end encrypted AI interactions for the billions of people who use WhatsApp, Messenger, and Instagram.
The collaboration brings together two entities with very different reputations on privacy. Signal is the gold standard for secure communications. Meta is the company that built its empire on data collection. That Marlinspike is willing to work with Meta tells you something about the scale of the problem he's trying to solve, and possibly about how seriously Meta is taking the privacy concerns around AI.
How Encrypted AI Actually Works
Encrypting AI conversations is harder than encrypting messages between two people. With standard end-to-end encryption, the math is relatively straightforward: encrypt on one device, decrypt on the other, and nobody in between can read the content. The server is just a relay.
AI changes the equation. The AI model needs to read your message to generate a response. That means the content has to be decrypted somewhere. The question is where, and who else can see it.
Confer's approach uses confidential computing, running AI inference inside secure hardware enclaves where even the server operator can't access the data being processed. The user's message is encrypted in transit, decrypted only inside the secure enclave where the AI model runs, processed, and the response is encrypted before leaving the enclave. At no point does Meta (or anyone else) get access to the plaintext conversation.
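The flow described above can be sketched in a few lines of Python. This is a toy simulation, not Confer's actual protocol: the `Enclave` class stands in for secure hardware, the XOR "cipher" stands in for a real authenticated encryption scheme, and key provisioning via attestation is assumed to have already happened. The point is the trust boundary: the relay only ever handles ciphertext.

```python
# Toy simulation of enclave-based encrypted inference. All names here
# (Enclave, keystream_xor) are illustrative; a real system would use
# hardware attestation, TLS, and an AEAD cipher, not this XOR sketch.
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Applying it twice with the same key and nonce decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Enclave:
    """Stand-in for a hardware enclave. It holds the session key;
    the host server it runs on never sees that key."""
    def __init__(self, session_key: bytes):
        # Provisioned after a successful attestation handshake (not shown).
        self._key = session_key

    def infer(self, nonce: bytes, ciphertext: bytes) -> tuple[bytes, bytes]:
        prompt = keystream_xor(self._key, nonce, ciphertext)  # decrypt inside the enclave
        reply = b"echo: " + prompt                            # stub for the AI model
        out_nonce = secrets.token_bytes(16)
        return out_nonce, keystream_xor(self._key, out_nonce, reply)  # encrypt before leaving

# Client side: encrypt the prompt, send it through an untrusted relay,
# decrypt the reply. The relay (the operator's server in this sketch)
# only ever forwards bytes it cannot read.
session_key = secrets.token_bytes(32)  # shared with the enclave only
enclave = Enclave(session_key)

nonce = secrets.token_bytes(16)
ciphertext = keystream_xor(session_key, nonce, b"my private question")

reply_nonce, reply_ct = enclave.infer(nonce, ciphertext)
plaintext_reply = keystream_xor(session_key, reply_nonce, reply_ct)
```

The design choice worth noticing is that decryption happens only inside `Enclave.infer`: the plaintext exists exclusively within the enclave boundary, which is the property confidential computing enforces in hardware.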
This is technically demanding but not theoretically new. Intel SGX, AMD SEV, and ARM TrustZone have provided hardware-level isolation for years. What's new is applying this architecture to AI inference at the scale Meta operates: billions of queries per day against models that require significant compute resources.
The engineering challenges are real. Secure enclaves have performance overhead. Running large language model inference inside one adds latency and reduces throughput compared to standard server-side processing. Confer has apparently solved enough of these performance problems to make the integration viable, though specific numbers haven't been published yet.
Why Meta Wants Encrypted AI
Meta's motivation here isn't altruism. The company has been under relentless pressure on data privacy for over a decade, and AI creates a whole new attack surface for regulators and critics.
When you chat with Meta AI, you're sharing your questions, your problems, your interests, and your thought processes with a system that Meta operates. That data is a goldmine for ad targeting. It's also a legal liability in jurisdictions with strong privacy laws, particularly the EU where the AI Act imposes transparency and data protection requirements on AI systems.
Encrypting AI conversations lets Meta make a credible claim: we can't see what you're talking about with our AI, even if we wanted to. That claim, if technically verified, would defang some of the strongest regulatory arguments against Meta's AI products and make Meta AI more attractive to privacy-conscious users who currently avoid it.
There's a competitive angle too. Apple has been pushing on-device AI processing as a privacy differentiator, running AI models locally on iPhones so data never leaves the device. Meta can't match that approach because its AI models are too large for mobile hardware. Encrypted cloud inference is Meta's answer: your data goes to the cloud, but it stays encrypted outside the enclave, so the privacy guarantee is, in principle, comparable to on-device processing.
Marlinspike's Credibility Makes This Viable
The most important thing about this collaboration isn't the technology. It's who's building it. Moxie Marlinspike is perhaps the only person in the world who could partner with Meta on encryption and not immediately lose credibility.
He built Signal. He designed the Signal Protocol that encrypts WhatsApp's messages (ironically, the same protocol Meta already uses for messaging). He has spent his career fighting for privacy, often against the same surveillance-advertising model that funds Meta's business. If Marlinspike says the encryption works, security researchers and privacy advocates will take that claim seriously in a way they never would if Meta said it alone.
That credibility is essentially what Confer is selling. The technology matters, but the trust matters more. Meta can't build encrypted AI and have anyone believe it actually works. Marlinspike can. This is a partnership where both parties bring something the other lacks: Meta has scale, Marlinspike has trust.
The Technical Verification Problem
Encrypted AI only works if it's verifiable. Users and researchers need to confirm that the secure enclaves are actually secure, that Meta's servers aren't logging decrypted data, and that the encryption implementation doesn't have backdoors.
Confer has committed to open-source audits and third-party verification of its encryption implementation. This is essential. The history of encryption is littered with systems that claimed to be secure but weren't, from compromised VPNs to messaging apps with implementation bugs that leaked plaintext data.
The verification challenge is harder for AI systems than for messaging. A messaging encryption audit involves examining a relatively contained codebase. An AI encryption audit involves examining the entire inference pipeline, from data ingestion through model processing to response generation, plus the hardware attestation layer that ensures the secure enclave hasn't been tampered with.
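The hardware attestation layer mentioned above is what lets a client decide whether to trust an enclave at all. The sketch below shows the shape of that check under stated assumptions: the measurement value, the vendor-key handling, and the HMAC "signature" are all illustrative stand-ins; real attestation (Intel SGX quotes, AMD SEV-SNP reports) uses vendor-signed certificate chains.

```python
# Hypothetical sketch of a client-side attestation check: only trust an
# enclave whose code measurement matches an independently audited build.
# The HMAC scheme and all names here are illustrative, not a real
# attestation protocol.
import hashlib
import hmac

# Hash of the audited enclave build, as published by third-party auditors.
AUDITED_MEASUREMENT = hashlib.sha256(b"confer-inference-enclave-v1").hexdigest()

def quote_is_valid(measurement: str, quote: bytes, vendor_key: bytes) -> bool:
    """Verify the quote's (toy) vendor signature, then check that the
    code actually running matches the audited measurement."""
    expected = hmac.new(vendor_key, measurement.encode(), hashlib.sha256).digest()
    signed_ok = hmac.compare_digest(expected, quote)
    return signed_ok and measurement == AUDITED_MEASUREMENT

# Simulated handshake: the hardware vendor signs whatever code is
# actually loaded into the enclave, tampered or not.
vendor_key = b"vendor-root-key"  # stand-in for the vendor's PKI

genuine_quote = hmac.new(
    vendor_key, AUDITED_MEASUREMENT.encode(), hashlib.sha256).digest()

tampered_measurement = hashlib.sha256(b"backdoored-build").hexdigest()
tampered_quote = hmac.new(
    vendor_key, tampered_measurement.encode(), hashlib.sha256).digest()

trust_genuine = quote_is_valid(AUDITED_MEASUREMENT, genuine_quote, vendor_key)
trust_tampered = quote_is_valid(tampered_measurement, tampered_quote, vendor_key)
```

Note that a validly signed quote is still rejected if the measurement doesn't match the audited build: the signature proves what code is running, and the audit decides whether that code deserves a session key.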
If Confer and Meta publish thorough technical documentation, allow independent security audits, and demonstrate the system working under adversarial conditions, this could set a new standard for AI privacy. If they don't, it'll be another corporate privacy promise that's impossible to verify.
What This Means for the AI Privacy Landscape
The Confer-Meta collaboration could reshape how the entire AI industry thinks about privacy. Right now, the default assumption is that AI providers can see your conversations. Every major AI chatbot (ChatGPT, Gemini, Claude) processes queries on servers where the provider has theoretical access to the content.
If Meta demonstrates that encrypted AI inference works at scale, every other AI provider will face pressure to match it. OpenAI can't credibly argue that encryption isn't feasible when Meta, a company with 3 billion users, is already doing it. Google will face questions about why Gemini conversations aren't encrypted. The competitive dynamics push the entire market toward better privacy.
For enterprise AI adoption, encrypted inference could unlock use cases that are currently blocked by data security concerns. Legal firms that won't put client communications through an AI. Healthcare providers bound by HIPAA. Financial institutions with strict data handling requirements. Encrypted AI removes the objection that your data is exposed to the AI provider.
The implications extend to AI regulation too. If encrypted AI is technically feasible, regulators can reasonably require it for high-risk applications. That shifts the regulatory conversation from "should AI providers collect data?" to "AI providers must prove they can't access user data." It's a higher bar, and it favors companies that invest in privacy infrastructure early.
The Paradox of Trusting Meta With Privacy
There's an inherent tension in Meta, of all companies, leading on AI privacy. This is the company behind the Cambridge Analytica scandal, countless privacy policy violations, and a business model fundamentally built on knowing everything about its users.
But that's precisely why Marlinspike's involvement matters. He's not trusting Meta. He's building a system where trust in Meta isn't required. If the encryption works correctly, it doesn't matter what Meta's intentions are. The math prevents them from accessing the data.
That's the beautiful thing about good encryption: it replaces trust with mathematics. You don't have to believe Meta cares about your privacy. You just have to verify that the encryption implementation is sound. And with Marlinspike's team building it and independent auditors checking it, that verification is possible in a way it wouldn't be if Meta tried to do this alone.
Whether this represents a genuine shift in Meta's approach to privacy or just a strategic response to competitive and regulatory pressure is almost beside the point. If the result is billions of AI conversations that nobody (not Meta, not hackers, not governments) can intercept, that's a win regardless of the motivation.
Frequently Asked Questions
What is Confer?
Confer is an encrypted AI chatbot built by Moxie Marlinspike, the creator of Signal. It uses confidential computing to ensure that AI conversations are end-to-end encrypted, meaning even the server operator can't read the content.
How does encrypted AI differ from regular AI chat?
With standard AI chatbots, the provider (OpenAI, Google, etc.) can theoretically access your conversation content on their servers. Encrypted AI uses secure hardware enclaves to process your messages, so the content is never visible to anyone except you and the AI model running inside the enclave.
Will this affect how Meta AI works for users?
Users should notice minimal differences in functionality. There may be slightly higher latency due to encryption overhead, but the core AI capabilities remain the same. The main change is that Meta won't be able to access conversation content for ad targeting or other purposes.
Can other AI companies implement similar encryption?
Yes. The underlying technology, confidential computing, is available from major hardware vendors. The challenge is implementing it at scale without unacceptable performance trade-offs. Meta and Confer's work could provide a template for the rest of the industry.
Key Terms Explained
Chatbot
An AI system designed to have conversations with humans through text or voice.
Claude
Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.
Compute
The processing power needed to train and run AI models.
Gemini
Google's flagship multimodal AI model family, developed by Google DeepMind.