The black-box memory problem
If you have used ChatGPT Memory, Claude Projects, Gemini's saved info, Mem.ai, or Letta in the last six months, you have used a system that quietly accumulated information about you and surfaced it back across sessions. Some of those products do this well. None of them do it transparently.
I cannot tell you, as a user of any of those products:
- Which memory the model is using right now. A reference appears in the response, or it doesn't. The retrieval set is invisible.
- Why a particular memory exists. There is no audit trail mapping the memory back to the conversation that produced it, the confidence the system had in it, or whether it has ever been revised.
- What the encryption boundary is. The vendor holds the keys. "Your memory" lives in their database with their access controls.
- Whether a piece of memory can take action on my behalf. Increasingly the answer is yes — but the rules are inside the platform, not in my hands.
Three months ago we shipped SaveState as “Time Machine for AI” — encrypted backup and restore for your AI state. The market told us something more interesting. Backup is a feature. The real gap is that nobody owns the memory layer: the encrypted, portable, governed substrate that should sit beneath every AI product you use. We pivoted into that gap. The Trust Kernel is the governance half of that pivot.
What governance actually means
Governance isn't a feature flag. It is three things together: a state machine, a set of enforcement points, and an audit trail.
The state machine. Every memory in SaveState lives in one of five states. Writes land as candidate. A worker promotes them to stable only after they have passed a configurable promotion rule — minimum confidence, minimum age, required tags. If a write matches a denylist pattern or fails confidence, it terminates as rejected and never reaches the database. Things go wrong — bad memories slip through, prompt-injected content lands — and the operator can move them to quarantined for review or revoked, which adds the pattern back to the denylist so it cannot be re-written.
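To make the lifecycle concrete, here is a minimal Python sketch of the write gate and promotion worker described above. The five state names come from this post; the thresholds, the key-shaped denylist regex, and every class and function name are illustrative assumptions, not SaveState's actual API.

```python
from dataclasses import dataclass, field
import re, time

# The five states named above. "quarantined" and "revoked" are reached
# via operator action; this sketch covers the write/promotion path.
STATES = {"candidate", "stable", "quarantined", "revoked", "rejected"}

@dataclass
class Memory:
    text: str
    confidence: float
    created_at: float               # epoch seconds
    tags: set = field(default_factory=set)
    state: str = "candidate"        # every write lands here

class PromotionRule:
    """Configurable promotion rule: min confidence, min age, required tags."""
    def __init__(self, min_confidence=0.8, min_age_s=600, required_tags=()):
        self.min_confidence = min_confidence
        self.min_age_s = min_age_s
        self.required_tags = set(required_tags)

    def passes(self, m: Memory, now: float) -> bool:
        return (m.confidence >= self.min_confidence
                and now - m.created_at >= self.min_age_s
                and self.required_tags <= m.tags)

# Illustrative denylist: reject anything that looks like an API key.
DENYLIST = [re.compile(r"sk-[A-Za-z0-9]{16,}")]

def write_gate(m: Memory) -> str:
    """Denylisted or low-confidence writes terminate as rejected."""
    if any(p.search(m.text) for p in DENYLIST) or m.confidence < 0.2:
        m.state = "rejected"
    return m.state

def promotion_worker(m: Memory, rule: PromotionRule, now=None) -> str:
    """Promote candidate memories to stable once the rule passes."""
    if m.state == "candidate" and rule.passes(m, now or time.time()):
        m.state = "stable"
    return m.state
```

The point of the sketch is the shape, not the numbers: writes can only enter as `candidate`, and only the worker, applying an explicit rule, moves them to `stable`.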
The three gates. The state machine is enforced at three points. The WriteGate runs on every memory write. The TrustGate filters memory at retrieval time so candidate entries don't pollute context windows in production. The ActionGate sits in front of any registered side effect — tool call, API call, mutation — and refuses to run anything that isn't explicitly registered with a known trust level. Deny by default, on the action path. This is the part that compliance teams care about.
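The deny-by-default behavior of the ActionGate can be sketched in a few lines. This is an illustrative model only: the registry, the trust ranking, and the error type are assumptions, not the real interface.

```python
class ActionGateError(Exception):
    pass

class ActionGate:
    """Deny by default: only explicitly registered actions can run,
    and only when the triggering memory has sufficient trust."""
    def __init__(self):
        self._registry = {}                      # name -> (fn, required_trust)
        self._rank = {"candidate": 0, "stable": 1}

    def register(self, name, fn, required_trust="stable"):
        self._registry[name] = (fn, required_trust)

    def invoke(self, name, memory_state, *args):
        # Unknown actions never run, regardless of how they were requested.
        if name not in self._registry:
            raise ActionGateError(f"action {name!r} is not registered")
        fn, required = self._registry[name]
        # Memory that hasn't passed the gate cannot escalate to action.
        if self._rank.get(memory_state, -1) < self._rank[required]:
            raise ActionGateError(
                f"memory state {memory_state!r} below required {required!r}")
        return fn(*args)
```

A prompt-injected `candidate` note asking to run `send_email` fails at the second check; a request for an unregistered tool fails at the first.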
The audit trail. Every transition between states is recorded with id, from-state, to-state, reason, actor, and timestamp. You can query it from the CLI:
$ savestate trust audit --limit 5
🧾 Trust Audit (last 5)
2026-04-28 14:02:11  candidate → stable  (promotion-worker)
  id: 8e3a4b9c-1a2f-4f6e-9d1e-2c7b1d4a55b8
  reason: confidence 0.92 ≥ 0.8 and age 21m ≥ 10m
2026-04-28 13:58:02  candidate → rejected  (write-gate)
  id: a91b3322-77af-4f0d-b3e5-19c8d6b0ee71
  reason: Denylisted: matches secret-pattern rule "api-key-prefix"
2026-04-28 13:51:44  stable → revoked  (operator:david)
  id: 6f0e1a55-2d13-4b8f-ab44-8a9e1c2d3f10
  reason: User requested removal; pattern added to denylist
That's not a log file. That's an answer to the question every compliance review eventually asks: why did the model know that, and who decided it should?
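A minimal model of that trail is an append-only list carrying the fields named above (id, from-state, to-state, reason, actor, timestamp). The storage and query shape here are illustrative assumptions, not SaveState's implementation.

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only record of every state transition."""
    def __init__(self):
        self._entries = []

    def record(self, memory_id, from_state, to_state, reason, actor):
        entry = {
            "id": memory_id,
            "from": from_state,
            "to": to_state,
            "reason": reason,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        self._entries.append(entry)
        return entry

    def query(self, limit=5, to_state=None):
        hits = [e for e in self._entries if to_state in (None, e["to"])]
        return hits[-limit:]

    def to_json(self, **kw):
        # Machine-readable dump, e.g. for cron-driven alerting.
        return json.dumps(self.query(**kw), indent=2)
```

Filtering by `to_state="revoked"` answers "who decided the model should forget this, and when" directly from the trail.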
Why this is the Team-tier wedge
Most of the AI memory market is racing on capacity — longer context, more recall, smarter retrieval. That race is fine. It is also commoditizing fast. The thing nobody is racing on is the thing every regulated buyer asks about first: can you tell me, with receipts, what this system remembers, why, and what it is allowed to do with that?
The Trust Kernel is the answer. Three concrete things compliance teams will care about, all shipping today:
- Audit-grade memory. Every promotion, rejection, and revocation is recorded. The CLI emits `--json` for cron-able alerting; the same data will surface in the upcoming Team-tier dashboard.
- Deny-by-default actions. Tools that aren't registered cannot run. Memory that hasn't passed the gate cannot escalate to action. A prompt-injected note that says “email everyone in finance” doesn't get to send the email.
- Role-scoped decryption. The encryption boundary is yours, not ours. SaveState never sees plaintext. The keys live with you, and as we move into the Team tier, role-scoped decryption lets a security lead audit the trail without having to access the underlying memory content.
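The role-scoping idea above can be sketched as a store that keeps ciphertext and plaintext audit metadata side by side, so an auditor role can read the trail without holding decryption keys. Everything here is a toy for illustration: the role names, the store layout, and especially the XOR "cipher," which stands in for real authenticated encryption and should never be used as such.

```python
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Placeholder for real encryption; XOR is NOT secure.
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

class MemoryStore:
    """Content is encrypted client-side; the audit trail is metadata only,
    so it can be read by roles that never see plaintext."""
    def __init__(self, key: bytes):
        self._key = key              # in practice, never leaves your machine
        self._blobs = {}             # id -> ciphertext
        self.audit = []              # plaintext metadata, no content

    def put(self, mem_id: str, text: str):
        self._blobs[mem_id] = xor_bytes(text.encode(), self._key)
        self.audit.append({"id": mem_id, "event": "write"})

    def read(self, role: str, mem_id: str) -> str:
        # A security lead can inspect self.audit, but only the key-holding
        # owner role can decrypt the underlying memory content.
        if role != "owner":
            raise PermissionError(f"role {role!r} cannot decrypt content")
        return xor_bytes(self._blobs[mem_id], self._key).decode()
```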
The Trust Kernel is open source today, in the same repo as the rest of SaveState. The Team tier — SSO, audit-log retention, data-residency selection, the dashboard UI — is the next thing on the roadmap. We are not promising it ships next week. We are promising the substrate is already in place.
If you have a compliance team breathing down your neck about “what does the AI actually remember about our customers,” this is the layer you have been waiting for. The state machine is in your hands. The audit trail is queryable. The action gate is deny-by-default. The encryption keys never leave your machine.
Memory is the new moat. Governed memory is the new compliance story.
Read the Trust Kernel docs
States, scopes, gates, CLI usage, and how to wire it into your own memory store.
Trust Kernel Docs →