Security

Trust by architecture,
not by attestation.

Zero-day resistance is structural. Zero-trust applied to every AI agent. Cryptographic compartmentalization at the kernel layer. Audit by default — tamper-evident, queryable, evidence-on-demand. Compliance becomes a query, not a project.

Zero-Trust

Zero-Trust Agent Model

The seven pillars of zero-trust applied not just to network connections but to every operation an AI agent performs on your data: verify always, least privilege, just-in-time access, assume breach, explicit deny, identity-as-perimeter, continuous verification.
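In code terms, "explicit deny" and "least privilege" mean every operation is checked against a narrow grant set at the moment of use, with no allow-by-default path. The sketch below is purely illustrative; none of these names come from Meridian's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    agent_id: str
    action: str      # e.g. "read", "write", "delete"
    resource: str

# Least-privilege grants: each agent holds only the narrow scopes
# its current task needs (hypothetical example data).
GRANTS = {
    "agent-billing": {("read", "invoices")},
}

def authorize(op: Operation) -> bool:
    """Explicit deny by default: allowed only if a matching grant
    exists for this identity. There is no 'allow unless denied' path."""
    return (op.action, op.resource) in GRANTS.get(op.agent_id, set())

assert authorize(Operation("agent-billing", "read", "invoices"))
assert not authorize(Operation("agent-billing", "delete", "invoices"))
assert not authorize(Operation("agent-unknown", "read", "invoices"))
```

The point of the shape, not the code: denial is the resting state, and every grant is an explicit, enumerable exception.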

Compartmentalization

Cryptographic Tenant Isolation

Tenant boundaries enforced underneath the application layer, by the kernel, on every read and every write. Long-lived agent state encrypted with a key bound to the agent's identity. Cross-agent reads return ciphertext, not access errors.
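The "ciphertext, not access errors" property can be sketched with a key derived from the agent's identity: a read under the wrong identity simply fails to decrypt. This toy uses an HMAC-derived key and a SHA-256 XOR keystream for illustration only; a real system would use an authenticated cipher such as AES-GCM with HSM-held keys, and nothing here reflects Meridian's implementation.

```python
import hashlib
import hmac
import itertools

MASTER_KEY = b"demo-master-key"  # stand-in for an HSM-held root key

def agent_key(agent_id: str) -> bytes:
    # Key bound to the agent's identity: derived, never shared.
    return hmac.new(MASTER_KEY, agent_id.encode(), hashlib.sha256).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream -- NOT a production cipher.
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def seal(agent_id: str, plaintext: bytes) -> bytes:
    ks = keystream(agent_key(agent_id), len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

unseal = seal  # an XOR stream cipher is its own inverse

# The owning agent recovers the plaintext...
ct = seal("agent-a", b"customer record")
assert unseal("agent-a", ct) == b"customer record"
# ...while a cross-agent read yields only ciphertext, not an error.
assert unseal("agent-b", ct) != b"customer record"
```

Note the failure mode: the wrong identity gets bytes, not a permission dialog, so there is no error channel to probe.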

Audit

Audit by Default

Every routing decision, every model invocation, every validation outcome, every state transition is recorded into a tamper-evident audit substrate. The recording predates the audit request. Compliance evidence is the system's natural output.
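One common construction for a tamper-evident log is a hash chain: each record commits to its predecessor, so any after-the-fact edit breaks every subsequent link. A minimal sketch, with hypothetical field names (this is a standard technique, not a description of Meridian's substrate):

```python
import hashlib
import json

GENESIS = "0" * 64

def append(log: list, event: dict) -> None:
    # Each record's hash covers the previous hash plus its own body.
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "hash": digest})

def verify(log: list) -> bool:
    # Re-walk the chain; any edited record breaks the links after it.
    prev = GENESIS
    for rec in log:
        body = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"type": "model_invocation", "model": "m-1"})
append(log, {"type": "state_transition", "to": "approved"})
assert verify(log)

log[0]["event"]["model"] = "m-2"  # tampering with history...
assert not verify(log)            # ...is detected on verification
```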

A prompt-injection attempt buried inside a customer document tries to pivot an AI agent into deleting data.

Most AI platforms try to prevent this with input filtering and output validation — guardrails on top of a system that cannot structurally distinguish instruction from data. This class of attack remains open: OWASP lists prompt injection as the leading risk to LLM applications, and the available mitigations remain incomplete.

Meridian refuses the attack at the architectural level. The model's output is data, never authority. There is no path from a model response to a state-changing operation that bypasses the substrate's validation gates and effect declarations. A delete command in an agent's output is a string the substrate will not act on without passing the same human-approval and audit gates that apply to any other deletion. The injection cannot escalate because there is nowhere for it to escalate to.
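The pattern of "model output is data, never authority" can be shown with a dispatcher: the model's text reaches state changes only through declared effects and approval gates. Everything below is a hedged sketch under assumed names, not Meridian's API.

```python
# Effects this (hypothetical) agent declared it may perform freely.
DECLARED_EFFECTS = {"summarize"}
# Irreversible operations that exist only behind a human-approval gate.
REQUIRES_APPROVAL = {"delete_records"}

def dispatch(requested_effect: str, human_approved: bool = False) -> str:
    """The only path from model output to a state change. Anything the
    model 'says' is a string until it passes these gates."""
    if requested_effect not in DECLARED_EFFECTS | REQUIRES_APPROVAL:
        return "rejected: undeclared effect"
    if requested_effect in REQUIRES_APPROVAL and not human_approved:
        return "held: awaiting human approval"
    return f"executed: {requested_effect}"

# An injected "delete" in model output is just a string to the substrate:
assert dispatch("delete_records") == "held: awaiting human approval"
assert dispatch("rm -rf /") == "rejected: undeclared effect"
assert dispatch("summarize") == "executed: summarize"
```

The injection fails not because a filter caught it, but because the function that would act on it does not exist without the gate.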

A regulator asks for evidence of approval chains for the last ninety days of administrative actions.

In most enterprise AI deployments this is a project. Logs come from one system, approval workflow from another, identity from a third. Reconstructing the chain takes weeks of manual stitching, and the result has gaps the auditor will find.

In Meridian it is a query. Every action is recorded into one substrate. Every approval is bound to the action it authorized. Every approver's identity is cryptographically signed. The evidence packet is generated by running a circuit; the regulator gets a reproducible, replayable record. Compliance evidence is not produced after the fact — the recording predates the request.
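"Compliance becomes a query" has a concrete shape: when every action and its bound approval live in one record store, the evidence packet is a filter, not a reconstruction. The schema below is an assumption for illustration, not Meridian's actual record format.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical records: every action carries its approval inline,
# because approval was bound to the action when it happened.
records = [
    {"ts": now - timedelta(days=10), "kind": "admin",
     "action": "rotate_keys", "approval": {"approver": "cso@example.com"}},
    {"ts": now - timedelta(days=200), "kind": "admin",
     "action": "grant_role", "approval": {"approver": "ciso@example.com"}},
    {"ts": now - timedelta(days=3), "kind": "routine",
     "action": "read_report", "approval": None},
]

def evidence_packet(records: list, days: int = 90) -> list:
    """Administrative actions in the window, with approvals attached."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [r for r in records if r["kind"] == "admin" and r["ts"] >= cutoff]

packet = evidence_packet(records)
assert [r["action"] for r in packet] == ["rotate_keys"]
assert all(r["approval"] is not None for r in packet)
```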

A high-value irreversible action requires two-person approval. Today that's an email thread and a screenshot.

Dual-control approval is a common requirement for finance transactions, security policy changes, and infrastructure modifications. Most platforms implement it as a UI affordance — a "this requires two approvers" button — without structural enforcement. Determined insiders work around it.

Meridian implements dual-control approval as a first-class substrate primitive. The action cannot proceed without two cryptographically signed approvals from authorized identities. The approver set, the action, the timestamps, and the signatures are recorded together. The same gate works for a wire transfer, an IAM policy change, and a production database mutation. One mechanism, every domain.
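Structural enforcement means the execute path itself counts valid signatures from distinct authorized identities. In this sketch HMAC stands in for real asymmetric signatures, and all names are hypothetical:

```python
import hashlib
import hmac

# The authorized approver set (stand-in keys; real systems would use
# per-identity asymmetric keypairs, not shared-secret HMAC).
APPROVER_KEYS = {"alice": b"k1", "bob": b"k2"}

def sign(approver: str, action: bytes) -> bytes:
    return hmac.new(APPROVER_KEYS[approver], action, hashlib.sha256).digest()

def execute(action: bytes, approvals: dict) -> bool:
    """The gate is in the execution path, not in a UI: the action runs
    only with two valid signatures from two distinct identities."""
    valid = {
        who for who, sig in approvals.items()
        if who in APPROVER_KEYS
        and hmac.compare_digest(sig, sign(who, action))
    }
    return len(valid) >= 2

wire = b"wire $1M to acct 42"
assert not execute(wire, {"alice": sign("alice", wire)})      # one signer
assert not execute(wire, {"alice": sign("alice", wire),
                          "mallory": b"forged"})              # bad second
assert execute(wire, {"alice": sign("alice", wire),
                      "bob": sign("bob", wire)})              # two valid
```

A determined insider can work around a button; working around this gate requires a second authorized private key.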

Meridian's AI executives recommend. They surface signals, draft remediations, propose decisions. They never act unilaterally on anything that matters. When the data says one thing and your judgment says another, you take the gavel — and the system recalibrates around your decision.

This is the philosophy: the founder, the CISO, the operator as governor — not passenger. Every AI executive serves at your discretion. Every consequential action passes through a human-approval gate that you can configure, that you can override, and that the substrate records permanently. The AI is your instrument. Not your overlord.

"The AI recommends. The human decides. Score 101 — the human override — recalibrates the entire system around the chosen path. The substrate adapts to your call, not the other way around."

Most AI platforms treat security as a feature you turn on. Tenant isolation is a column filter. Audit is an export. Approvals are a UI screen. Each of these is a check that can be bypassed when something else goes wrong.

Meridian treats security as the constitution. Tenant isolation is enforced by the kernel — no tenant means no access, fail-closed. Audit is the substrate's natural output, not an export. Approvals are first-class structural primitives, not screens. Cryptographic compartmentalization is enforced by infrastructure-level key management, not application-level permissions. The AI cannot directly mutate state, ever, because the path doesn't exist — not because it's forbidden.

And our cryptographic posture is engineered for the next decade. Symmetric cryptography is already quantum-safe. Transport runs hybrid post-quantum key exchange. Asymmetric algorithms are inventoried and mapped to NIST FIPS 203, 204, and 205 successors, ready to migrate by configuration when cloud key infrastructure ships them. We don't claim "fully post-quantum" — no honest vendor can — but we are ready, and we will ship the migration the same week our key infrastructure can.
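An algorithm inventory mapped to standardized successors might look like the table below. The FIPS designations are real (FIPS 203 is ML-KEM, FIPS 204 is ML-DSA, FIPS 205 is SLH-DSA); the specific classical-to-post-quantum pairings shown are illustrative examples, not Meridian's actual configuration.

```python
# Hypothetical migration inventory: classical primitive -> PQ successor.
PQ_MIGRATION_MAP = {
    "X25519 (key exchange)":    "ML-KEM-768 (FIPS 203), hybrid with X25519",
    "ECDSA P-256 (signatures)": "ML-DSA-65 (FIPS 204)",
    "RSA-2048 (signatures)":    "SLH-DSA-SHA2-128s (FIPS 205)",
    "AES-256 / SHA-384":        "unchanged: symmetric primitives are quantum-safe",
}

for classical, successor in PQ_MIGRATION_MAP.items():
    print(f"{classical:28} -> {successor}")
```

Migration "by configuration" then amounts to flipping which side of this map the key infrastructure serves, once the successors ship.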

Frameworks we are prepared to support

Meridian is engineered to support the controls required by major compliance frameworks. We pursue formal audits and certifications as customer engagements require them. We do not claim attestations we do not hold.

SOC 2 Type II · ISO 27001 · HIPAA · FedRAMP · NIST CSF · NIST 800-53 · CIS Benchmarks · GDPR · CCPA


Early Access

For security and compliance leaders evaluating Meridian for enterprise deployment.

No spam. We’ll reach out personally.