AI I Can Finally Use (Because I Don’t Trust Those Tech Bro F*ckers With My Data)


I use AI every day. We’ve generated most of the code for Safedrop 2026 with AI. It’s bloody amazing, and frightening (and I don’t know what to tell my kids about career choices!). But there’s always been this moment, hand hovering over the keyboard, where I think: I can’t paste this in here. 💥

A log file. A client name. Anything that touches personal data. Anything that might be in scope for GDPR. Gone is the AI-assisted productivity, replaced by the quiet paranoia of someone who actually does security for a living.

The problem with mainstream AI

Here’s the thing most people don’t think about. When you chat with ChatGPT, Claude, or Gemini, it doesn’t feel like a privacy risk. It feels like a private conversation. It isn’t.

Moxie Marlinspike — the cryptographer 🔒who built Signal, and whose work ended up encrypting WhatsApp’s billions of messages — put it well when he launched his new project. Your AI chat is less like a private conversation and more like a group chat you didn’t know you’d joined. The AI company, their infrastructure partners, future advertisers, and eventually a lawyer with a subpoena are all in there with you. You’re just not seeing them in the thread.

And if you’re based in the UK or Europe, add the US government to that group chat too.

Every major AI assistant — ChatGPT, Claude, Gemini — is provided by an American corporation, running on American infrastructure. Under the CLOUD Act, US authorities can compel those companies to hand over your data regardless of where you are, where the data is stored, or what GDPR says about it. 🇪🇺 European courts have already invalidated two separate US-EU data transfer frameworks over exactly these concerns. This isn’t paranoia — it’s a live boardroom conversation across the continent, and it’s getting louder.

“Privacy is what lets you think freely.” – Moxie Marlinspike

For most people, that’s probably fine. For those of us handling sensitive data, it’s a non-starter.

Enter Confer

Confer (confer.to) is Moxie’s new project, and the core idea is straightforward: your conversations are encrypted on your device before they go anywhere near a server. The AI processing happens inside something called a Trusted Execution Environment — hardware-enforced isolation where even the company running the servers can’t see what’s happening. It’s open source. You can verify it. Confer can’t read your conversations, train on them, or hand them over. Structurally can’t, not just “we promise we won’t.”
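The “encrypted before it goes anywhere near a server” idea is worth making concrete. Here’s a deliberately toy sketch in Python, using a one-time pad from the standard library — this is absolutely not Confer’s actual protocol (real systems use authenticated ciphers like AES-GCM, key exchange, and attestation), just an illustration of the core property: the key stays on your device, so the server only ever sees ciphertext.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (toy one-time-pad cipher)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sensitive prompt that never leaves the device in plaintext.
prompt = b"log line: user=jane@example.com failed login x5"

# Key is generated and kept client-side; the server never sees it.
key = secrets.token_bytes(len(prompt))

# This is all that travels over the wire.
ciphertext = xor_bytes(prompt, key)
assert ciphertext != prompt

# Only the holder of the key can recover the plaintext.
assert xor_bytes(ciphertext, key) == prompt
```

The structural point is the same one Confer makes: whoever operates the server can store, subpoena, or leak the ciphertext all day long — without the key, it tells them nothing.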

I’ve been using it for log file analysis — something I’d never run through a standard frontier AI. Those logs contain real user data. Putting them through OpenAI would be a GDPR problem at best and a catastrophic breach of client trust at worst. With Confer, it’s a non-issue.

Fair warning: it’s not going to replace Claude

It’s not going to replace your daily Claude or Perplexity habit. No real-time web access, and the model isn’t frontier-level. But that’s fine — this isn’t for your general research. This is for the stuff you were never going to put through those tools anyway. You wouldn’t complain that a safe is slower than a filing cabinet.

A small honest caveat

If you work in security, you’ll find the debate on Hacker News: is Confer’s E2EE claim equivalent to Signal-level encryption? Strictly speaking, not quite. You’re still trusting the hardware vendor and the attestation chain. That’s a bigger trust surface than pure Signal-style maths.

But compared to handing your data directly to a tech giant’s data lake? It’s not close. For most real-world sensitive data use cases, it’s a significant and practical improvement on anything else currently available.

Why this matters for anyone handling sensitive data

I run Safedrop and Projectfusion, where we do end-to-end encrypted comms and secure data sharing. Safedrop 2026 is built on end-to-end encryption and a zero-knowledge architecture, so we know better than most what genuine privacy protection looks like, and what’s just marketing. Even for us, until now, a whole category of genuinely useful AI work was simply off the table.

Confer doesn’t solve everything. But for the workflows where privacy actually matters — legal documents, client data, HR, anything with a regulator lurking in the background — it’s the first tool that doesn’t force you to choose between using AI and keeping your data safe.

I have no commercial relationship with Confer. I’m just a happy user who’s finally got a tool I can use without my paranoid legal brain screaming.

Worth a look: confer.to

What data are you still not comfortable putting anywhere near AI? Curious whether others have found workable solutions.

#AI #DataPrivacy #GDPR #CyberSecurity #Encryption #LegalTech #DataProtection #InfoSec