Every design decision starts with one question: can this stay on your device? If the answer is yes, it stays there. No exceptions.
These aren't marketing claims. They're hard constraints baked into how Buddy is built.
All sensitive requests — your files, calendar, health, contacts — are handled entirely by local AI models. They never touch the internet.
Audio recording starts only after your wake word fires and stops the moment you finish speaking. Your raw audio is never uploaded anywhere.
Your voice profile, conversation history, and personal context are stored locally under strong encryption. Only your device holds the key.
Buddy collects no usage analytics, no crash reports, no behavioral data. There is no "home base" Buddy phones home to.
When a request genuinely needs cloud AI, Buddy automatically strips names, locations, and identifying details before it leaves your machine.
Buddy runs without sign-up, login, or any identity tied to you. There is no profile of you on any Buddy server — because there are no Buddy servers.
Every request Buddy receives is automatically classified by an on-device model. Sensitive data never leaves. General questions go to cloud only after PII removal.
Tier 1 (local): Handled entirely by on-device AI models. No network connection. No exceptions. These are the requests where your personal context matters most.
Tier 2 (cloud): Only used when local models genuinely can't handle the complexity. All identifying information is stripped before the request leaves your device.
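To make the two-tier split concrete, here is a minimal sketch of how such routing could work. The names (`classify`, `RoutingDecision`) and the keyword check are illustrative assumptions only; in Buddy, the classification is done by a local AI model, not a word list.

```python
from dataclasses import dataclass

# Topics that must never be routed to the cloud. A keyword set stands in
# for Buddy's on-device classifier model in this sketch.
SENSITIVE_TOPICS = {"file", "files", "calendar", "health", "contact", "contacts"}

@dataclass
class RoutingDecision:
    tier: str    # "local" or "cloud"
    reason: str

def classify(request: str) -> RoutingDecision:
    words = set(request.lower().split())
    if words & SENSITIVE_TOPICS:
        # Sensitive requests are pinned to on-device models, full stop.
        return RoutingDecision("local", "sensitive data detected")
    # General questions may go to the cloud, but only after PII removal.
    return RoutingDecision("cloud", "general knowledge query")

print(classify("what's on my calendar tomorrow").tier)
print(classify("explain quantum entanglement").tier)
```

The key property is that the decision happens before any network call: a request is either pinned local or cleared for the cloud, never ambiguous.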
No guessing. Here's every piece of data Buddy touches and precisely where it lives.
| Data type | What it is | Where stored | Encrypted |
|---|---|---|---|
| Voice profile | A mathematical embedding of your voice (not a recording) | 🏠 Your device | Yes, device-held key |
| Conversation history | Past questions and answers, used to build context over time | 🏠 Your device | Yes, device-held key |
| Personal memory | Facts you've told Buddy (preferences, routines, names) | 🏠 Your device | Yes, device-held key |
| App context | What app is open, screen region for context-aware answers | 🏠 RAM only | Not persisted |
| Audio recordings | Raw voice audio after wake word trigger | ✕ Never saved | — |
| Usage analytics | How often you use Buddy, what features you use | ✕ Not collected | — |
| Cloud query content | Text sent to cloud AI (Tier 2 only, PII removed) | ↗ Cloud, PII-free | TLS in transit |
These are architectural constraints, not policy promises. The system is built so that breaking them is technically impossible, not merely against the rules.
Privacy by architecture, not by policy. Here's how it's enforced technically.
All locally stored data — voice profiles, memory, conversation history — is encrypted using the strongest standard available, with a key that only your device holds.
Personal memory and conversation history live in a private encrypted database stored entirely on your device. The file is unreadable without your device key.
A tiny program running entirely on your device. Listens for one phrase only. Uses almost no CPU. No audio is streamed or stored until you say "Hey Buddy".
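The gating logic is the important part: frames are dropped until the wake word fires, and capture stops at end-of-speech. A minimal sketch, assuming simplified string "frames" and stand-in detector callbacks (the real detector and audio pipeline are far more involved):

```python
def gated_capture(frames, is_wake_word, is_silence):
    """Return only the frames between the wake word and end-of-speech.
    Everything before the wake word is dropped, never stored."""
    captured = []
    listening = False
    for frame in frames:
        if not listening:
            if is_wake_word(frame):
                listening = True   # the wake-word frame itself is not kept
            continue               # pre-wake frames are discarded here
        if is_silence(frame):      # end of utterance: stop capturing
            break
        captured.append(frame)
    return captured

frames = ["tv chatter", "hey buddy", "whats", "the", "weather", "", "more chatter"]
out = gated_capture(frames, lambda f: f == "hey buddy", lambda f: f == "")
print(out)  # ['whats', 'the', 'weather']
```

Note that "tv chatter" never reaches `captured` at all; there is no buffer of pre-wake audio to leak.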
Your voice is converted to text entirely on your device. Your audio never leaves your machine — only the resulting text can move forward, and only after classification.
Every request is classified in a fraction of a second by Buddy's local AI brain before any routing decision. Sensitive requests are blocked from the cloud regardless of content.
Before any cloud request, Buddy's privacy filter strips your name, location, organisation, and any other personal identifiers from the text before it leaves your device.
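The shape of that filter can be sketched in a few lines: identifiers are replaced with placeholders before any text leaves the device. The patterns below are illustrative stand-ins; Buddy's real filter uses a local model to detect names and other identifiers, not a fixed list.

```python
import re

# Illustrative redaction rules. "Alice", "Bob", and "Acme Corp" are
# hypothetical examples; in practice, a local model finds the names.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:Alice|Bob|Acme Corp)\b"), "[NAME]"),
]

def scrub(text: str) -> str:
    """Replace identifying details with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Email alice@example.com about the Acme Corp offsite"))
# Email [EMAIL] about the [NAME] offsite
```

The cloud model still gets enough context to answer the question, but nothing that ties the query back to you.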
Join the early access list and get notified the moment Buddy is ready. First wave users get lifetime founding-member pricing.