Bringing the Model to the Data: How Confidential Computing Unlocks Enterprise AI

Yuval Azoulay

FOUNDING ENGINEER, FUNDAMENTAL

6 MIN READ



Every enterprise AI deployment eventually hits the same wall. Not a technical wall - the models work, the infrastructure scales, the benchmarks check out. It's a trust wall.

The customer says: "We can't send our data outside our environment." And the AI vendor says: "We can't hand you our model - it's our core IP."

Both sides are right. And for most of the industry, this is where one side blinks: the customer loosens their data governance, the vendor accepts the IP risk and hopes the contract holds, or both sides settle for a managed AI service from a cloud provider, trusting a third party to keep them apart. In every case, someone is compromising on something they shouldn't have to.

The Two-Sided Trust Problem

The default AI delivery model is straightforward: the customer sends data to the vendor’s API, the vendor runs inference, and the result comes back. Simple, scalable, well understood.

It is also a poor fit for the organizations with the strictest security and regulatory requirements.

A bank with proprietary research data can't treat inference as a routine outbound API call. A healthcare organization with residency and privacy constraints can't move patient context to wherever a vendor's control plane happens to run. An energy company processing operational data from national infrastructure can't base its security posture on "trust us, it stays isolated."

So the natural answer is to flip the model: instead of bringing the data to the model, bring the model to the data. Deploy the AI system directly into the customer's own infrastructure, inside their security perimeter, under their governance.

But that creates its own set of problems.

If you're an AI company that has spent years and significant capital developing proprietary models, shipping them into an environment you don't control is its own risk. Once a model is loaded on someone else's infrastructure, how do you ensure your IP remains protected?

And that's only half the problem. Deploying inside your own environment doesn't automatically mean the data is secure. What exactly is the runtime that touches your data? Can an operator or a privileged process inspect what's being processed? Is the data persisted to disk where it could be recovered after the fact? Keeping data in your own environment is necessary, but it's not sufficient.

This is the deadlock: the customer won't release their data, and the vendor can't release their model.

Why Traditional Security Can't Break the Deadlock

The first instinct is straightforward: encrypt the model, ship it to the customer’s environment, and decrypt it at runtime under controlled conditions. The problem is what happens after decryption. Once the model is loaded into memory to run inference, it exists as plaintext. In a conventional environment, a sufficiently privileged operator can inspect processes, attach debuggers, dump memory, or alter the software stack beneath the workload. That's not an edge case. That's how general-purpose computing works.

And the same exposure applies to the customer's data. The security industry talks about protecting data in two states: at rest (encrypted on disk) and in transit (encrypted on the wire). But there's a third state that encryption alone can't touch - data in use. For inference to happen, customer data has to be loaded into memory too. Both the model and the data are unprotected in the same moment, for the same reason.
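To see the gap concretely, here's a minimal sketch in Python using the `cryptography` package. The key delivery and the weights themselves are hypothetical placeholders; the point is what the process looks like after the decrypt call.

```python
# Minimal sketch of the "data in use" gap. The key delivery and the
# weights below are hypothetical placeholders, not a real pipeline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: delivered by the model vendor

# At rest: the model ships to the customer as ciphertext on disk.
encrypted_weights = Fernet(key).encrypt(b"...proprietary model weights...")

# In use: to run inference, the weights must be decrypted into RAM.
weights = Fernet(key).decrypt(encrypted_weights)

# From this line on, `weights` is plaintext in ordinary process memory.
# On a conventional Linux host, anyone with root or CAP_SYS_PTRACE can
# read it - via /proc/<pid>/mem, a debugger, or a memory dump. No amount
# of at-rest or in-transit encryption changes that.
```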

Software controls can reduce risk, but they don't fundamentally change the trust model. They still depend on the party that controls the machine, the operating system, or the surrounding cloud environment behaving as expected. They're software protecting against the person who controls the software.

For high-sensitivity deployments, "harder to extract" is not the same as "architecturally constrained."

Starting From a Different Question

When we designed how Fundamental's AI platform gets deployed into customer environments, we didn't start with "how do we add more security controls?"

We started with a harder question: how do we reduce the amount of trust either side has to place in the other, and anchor those guarantees in the deployment architecture itself?

The goal isn't to pretend trust disappears. Trust is always present in any system - in the hardware, in the cloud provider's infrastructure, in the foundations the system is built on. The goal is to narrow it, make it explicit, and anchor it in mechanisms that can be measured and verified rather than in policy and promises alone.

Where Confidential Computing Changes the Equation

This is where confidential computing comes in - a set of hardware-backed technologies that protect data and code while they're being processed. The core idea is to create a Trusted Execution Environment (TEE) - a verified perimeter where code and data are shielded from everything outside it, backed by hardware-rooted cryptographic verification that can prove exactly what's running inside.

Think of it as a sealed room that neither side can open. What runs inside is measured, verified, and locked - designed so that no party can tamper with it or inspect it from the outside.

The mechanism works in two stages:

Vendor side: Before deployment, the entire software stack is packaged into a known-good image. Its expected state is captured as a cryptographic fingerprint, and that fingerprint is baked into the key release policy. The model itself is encrypted - it can only be unlocked by a system that matches this exact fingerprint.

Customer side: When the system starts inside the client's environment, the trusted hardware on the server independently measures the state of the system at boot and compares it against the approved fingerprint. If everything matches, the decryption keys are released and the model loads. If anything has been modified, the keys are never released and the model stays encrypted. No override exists. No human holds a master key.
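To make the two stages concrete, here's a toy simulation in Python. It's a sketch under loud assumptions: a SHA-256 hash stands in for the hardware's launch measurement, and a local function stands in for the attestation-verifying key service. None of the names below reflect Fundamental's actual implementation.

```python
# Toy simulation of measured boot + policy-gated key release.
# A SHA-256 hash stands in for the hardware measurement; request_key()
# stands in for a real attestation-verifying key-management service.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def measure(image: bytes) -> str:
    """Stand-in for the hardware's launch measurement of the full stack."""
    return hashlib.sha256(image).hexdigest()

# --- Vendor side, before deployment ------------------------------------
image = b"kernel + runtime + inference server"   # the known-good stack
release_policy = {"expected": measure(image)}    # baked into key release

model_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_model = AESGCM(model_key).encrypt(
    nonce, b"...proprietary weights...", None
)
# Only encrypted_model and release_policy ever leave the vendor.

# --- Customer side, at boot ---------------------------------------------
def request_key(booted_image: bytes) -> bytes:
    """Release the key only if the measured state matches the policy."""
    if measure(booted_image) != release_policy["expected"]:
        raise PermissionError("measurement mismatch: keys stay sealed")
    return model_key

# Unmodified stack: measurement matches, the model decrypts and loads.
weights = AESGCM(request_key(image)).decrypt(nonce, encrypted_model, None)

# Tampered stack: the key is never released, the model stays ciphertext.
try:
    request_key(image + b" + debugger")
except PermissionError as exc:
    print(exc)
```

The property worth noticing: possession of the encrypted model is worthless on its own, and in a real TEE the measurement that unlocks it is taken and signed by the hardware itself, so neither side's operators can forge it.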

That's what makes this fundamentally different from software security. You're not relying on the operating system to protect itself. You're relying on TEE isolation no operator can bypass, and cryptographic verification that proves the environment is exactly what it should be.

The Next Phase of Enterprise AI

The industry has spent years proving that the models work. The next phase is proving they can be deployed in the environments that matter most - organizations with strict data boundaries, demanding security teams, and little tolerance for trust-based exceptions.

What separates the AI vendors that get deployed from those that don't isn't just model quality. It's having a better answer to the question every enterprise customer eventually asks: what, exactly, prevents this system from exposing my data or your model once it's running?

Confidential computing for AI inference is still an evolving field. There's a lot of work ahead, and we find that exciting - because it's what turns "bring the model to the data" from something easy to say into something you can actually deploy.

Fundamental Technologies Inc.
Copyright © 2026. All rights reserved.