
How We Handle Your Data as an AI Company

Christophe Vauclair · February 18, 2026

Why uploading your work to an AI tool feels risky

We're asking you to upload your strategy docs, your messy meeting notes, and your unreleased product roadmaps into a cloud-based AI system. Skepticism here is reasonable.

Most AI tools have earned that skepticism. The default assumption is that your proprietary data gets quietly siphoned off to train a model that eventually sells your own insights back to you. That's how most of the industry operates.

But if we want to build a permanent memory for your work, one that knows your context and improves over time, we need to earn a different level of trust. That starts with showing you the architecture.

Here is exactly how Ryzome handles your data, what we can see, and the specific trade-offs we're making to keep your work yours.

TL;DR

  • Your data runs on Google Cloud's enterprise infrastructure, where it's contractually excluded from model training.
  • We're on an active path to SOC 2 Type II certification.
  • When we improve our product, we analyze query structure, never your content.

Your data runs on Google Cloud's enterprise infrastructure

We're an engineering team, not a data center provider. We can't physically secure a server better than Google can.

This is why we built Ryzome on Google Cloud Vertex AI (Gemini Enterprise). We're not renting their intelligence. We're inheriting their security infrastructure.

When you use Ryzome, you're protected by the Vertex AI Enterprise terms. This isn't marketing language. These are contractual commitments:

Does my data train your models?
No. Google contractually guarantees that data processed in Vertex AI is not used to train their foundation models. Your data stays in your instance.

Can other users see my prompts?
No. Your data is isolated. The model weights are frozen. No knowledge transfers from your session to another user's session.

Do humans review my conversations?
No. Unlike the free consumer version of Gemini, the Enterprise layer has no human review loop.

This infrastructure handles the foundational layer: your data is encrypted in transit and at rest, guarded by one of the most battle-tested security teams in the industry.

How we secure our own layer, and where we are on SOC 2

While Google secures the building, we secure the apartment.

We're a startup. We won't pretend we have a ten-year track record of audits. But we're building for compliance from day one. We are currently on the active path to SOC 2 Type II certification.

This means we're implementing the rigorous, auditable controls that prove our security posture. We use industry-standard automated compliance platforms to monitor our own internal access, ensuring that no Ryzome engineer can access customer production data without an explicit, logged, and time-bound reason (usually a support ticket you initiated).

Access controls today and what's coming

Right now, we support standard secure authentication. But we know the enterprise bar is higher.

Today:

You get a secure workspace, protected by standard authentication and the encryption described above.

What we're building:

Role-Based Access Control (RBAC) that mirrors your company's hierarchy. The junior copywriter shouldn't see the CEO's exit strategy drafts, even if they're both in the same Ryzome workspace.
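A minimal sketch of what that RBAC check could look like. The role names and tier numbers here are invented for illustration, not Ryzome's planned schema:

```python
# Hypothetical clearance tiers: a higher number means broader access.
# Each document carries the minimum tier required to view it.
ROLE_TIERS = {"copywriter": 1, "manager": 2, "executive": 3}

def can_view(role: str, required_tier: int) -> bool:
    """Unknown roles default to tier 0, i.e. no access (fail closed)."""
    return ROLE_TIERS.get(role, 0) >= required_tier

can_view("copywriter", 3)  # False: exit-strategy drafts need tier 3
can_view("executive", 3)   # True
```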


The one thing we do analyze, and why it never touches your content

Here's where we need to be precise about a real tension.

We don't want your content. We don't care about your Q3 projections or your HR disputes.

However, we do care about how the agent reasons. Ryzome uses a custom Domain-Specific Language (DSL), the internal code the agent uses to organize your library. It's the syntax that decides where to file a document or how to retrieve a memory.

To make Ryzome better, we need to improve this DSL. We need to see where the agent gets confused, where it writes bad queries, and where the syntax breaks.

This creates a real question: how do we debug the agent without reading your work?

Our approach: structural anonymization

We're building a system that separates the logic layer from the content layer, so we can improve the product without accessing your data.

Think of it like a librarian arranging books:

  • The content (your data): The text inside the books. We never see this.
  • The structure (the DSL): The call numbers on the spine and the logic of which shelf they go on.

We log the structure.

If you ask: "Find the Q3 report and summarize the risks."

We see: [ACTION: QUERY] > [TARGET: FILE_TYPE_PDF] > [OPERATION: SUMMARIZE]

We don't see: [RISK: REVENUE_DOWN_20%]


We need to see the logic, not the lyrics.
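The logging split above can be sketched in a few lines. The DSL syntax below is invented for illustration; the real point is stripping every content literal before anything reaches a log:

```python
import re

# Hypothetical anonymizer: keep the structural tokens (actions, targets,
# operations) and redact the quoted strings, which carry user content.
CONTENT_LITERAL = re.compile(r'"[^"]*"')

def anonymize(dsl_query: str) -> str:
    """Replace content literals with a placeholder before logging."""
    return CONTENT_LITERAL.sub('"<REDACTED>"', dsl_query)

query = 'QUERY(file_type=PDF, match="Q3 report") | SUMMARIZE(topic="risks")'
anonymize(query)
# -> 'QUERY(file_type=PDF, match="<REDACTED>") | SUMMARIZE(topic="<REDACTED>")'
```

Because the redaction happens at log time, the content never leaves the query path: what gets stored is only the shape of the request, which is exactly what debugging the DSL requires.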

What we commit to, and how to verify it

Trust in AI tools is low right now, and for good reason. Companies have burned that bridge by burying data-sharing clauses in long Terms of Service documents.

We're taking the opposite approach. We treat your context as the most sensitive thing in the system, because it is. The only way to build a tool that functions as a permanent memory for your work is to prove we're worthy of holding that memory.

We're not perfect yet, but we're transparent. If you have questions about our encryption, our vendor policies, or our roadmap to SOC 2, email us. We'll show you the work.

Frequently asked questions