Attestation Is Not Enough

Exploring the nuances of remote attestation and what's required to make it useful within trust systems.

There's growing recognition of the value of trusted execution environments (TEEs) in crypto. They unlock critical primitives like confidential smart contracts with private state, verifiable offchain agents, and a new class of trust-minimized applications. But what actually makes a TEE trustworthy?

Remote attestation is a foundational concept in TEEs. It proves that a specific binary ran on a specific piece of hardware at a specific moment under specific conditions. This is powerful, but it's also easy to assume it means more than it does.

Attestation is often treated as a one-off: a quote, a signature, a check mark on some dashboard. The reality is that attestation alone cannot create trusted applications. This post explores why attestation (alone) falls short and what a real trust system needs.

The reality of TEEs

Let's start with what a TEE actually gives you.

First, you get isolated execution: code runs privately, with all off-chip state fully encrypted.

Second, you get a per-CPU root of trust via cryptographic keys built into the CPU itself, used to encrypt data and sign attestation messages.

Third, you get remote attestation, which proves to third parties that a specific binary is running in a specific enclave.

At its core, that is a TEE.

The reality of remote attestation

The third point, remote attestation, is delivered in the form of a signed quote. This is where theory meets reality.

Anyone who has verified an SGX/TDX quote knows the drill: parse a multi-KB binary blob, extract fields, fetch collateral, check the FMSPC, interpret the TCB status, validate certificate chains, and more. You are essentially expected to become a hardware security expert. If you misinterpret one flag in the TCB status, the entire security model collapses.
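
To make that burden concrete, here is a rough sketch of the checklist a client is expected to work through. It is illustrative only: the types and helper functions are hypothetical placeholders, not a real DCAP verification library.

```go
// Sketch of the manual verification burden a client takes on when checking an
// SGX/TDX quote directly. All types and helpers are hypothetical placeholders.
package attestation

import (
	"bytes"
	"errors"
	"fmt"
	"time"
)

// Quote is a simplified stand-in for the multi-KB binary quote structure.
type Quote struct {
	Measurement []byte // MRENCLAVE / MRTD
	ReportData  []byte // application-defined binding (e.g. a public key hash)
	FMSPC       string // platform identifier used to fetch TCB collateral
	CertChain   [][]byte
	Signature   []byte
}

// Collateral stands in for TCB info, QE identity and CRLs fetched from the vendor.
type Collateral struct {
	TCBStatus string // e.g. "UpToDate", "SWHardeningNeeded", "OutOfDate"
	Issued    time.Time
}

func verifyQuoteManually(raw, expectedMeasurement, expectedReportData []byte) error {
	// 1. Parse the binary blob into structured fields.
	quote, err := parseQuote(raw)
	if err != nil {
		return fmt.Errorf("parse quote: %w", err)
	}
	// 2. Fetch collateral for this platform's FMSPC.
	col, err := fetchCollateral(quote.FMSPC)
	if err != nil {
		return fmt.Errorf("fetch collateral: %w", err)
	}
	// 3. Validate the certificate chain up to the vendor root, including revocation.
	if err := verifyCertChain(quote.CertChain); err != nil {
		return fmt.Errorf("cert chain: %w", err)
	}
	// 4. Check the quote signature against the attestation key in the chain.
	if err := verifySignature(quote); err != nil {
		return fmt.Errorf("signature: %w", err)
	}
	// 5. Interpret the TCB status; misreading one flag here breaks the model.
	if col.TCBStatus != "UpToDate" {
		return fmt.Errorf("unacceptable TCB status: %s", col.TCBStatus)
	}
	// 6. Compare the measurement against the binary you expected to be running.
	if !bytes.Equal(quote.Measurement, expectedMeasurement) {
		return errors.New("measurement mismatch")
	}
	// 7. Check that the report data binds the quote to your application context.
	if !bytes.Equal(quote.ReportData, expectedReportData) {
		return errors.New("report data mismatch")
	}
	return nil
}

// Placeholder implementations; a real verifier needs far more than this.
func parseQuote(raw []byte) (*Quote, error)             { return nil, errors.New("not implemented") }
func fetchCollateral(fmspc string) (*Collateral, error) { return nil, errors.New("not implemented") }
func verifyCertChain(chain [][]byte) error              { return errors.New("not implemented") }
func verifySignature(q *Quote) error                    { return errors.New("not implemented") }
```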

Even if you do everything perfectly, you only prove:

The measurement was correct then

The hardware TCB looked acceptable then

The operator presented that quote then

This validates only one moment, with no binding to the larger system. At a minimum, this verification needs to happen continuously, not once.
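
A minimal sketch of what "continuously" might look like: the verifier issues a fresh nonce for each quote and refuses to rely on any attestation older than a policy-defined window. The names and the ten-minute window are assumptions for illustration.

```go
// Sketch of enforcing attestation freshness: trust an enclave only if it has
// presented a quote over a verifier-chosen nonce within a bounded window.
// All names here are hypothetical placeholders.
package attestation

import (
	"crypto/rand"
	"errors"
	"time"
)

const maxQuoteAge = 10 * time.Minute // illustrative policy: how stale a quote may be

// ErrStale signals that the enclave must be re-attested before it is trusted again.
var ErrStale = errors.New("attestation expired; request a new quote")

// AttestationRecord remembers when a quote was last verified and over which nonce.
type AttestationRecord struct {
	Nonce      []byte
	VerifiedAt time.Time
}

// Challenge issues a fresh nonce the enclave must embed in its next quote's
// report data, so an old quote cannot simply be replayed.
func Challenge() ([]byte, error) {
	nonce := make([]byte, 32)
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return nonce, nil
}

// CheckFreshness rejects attestations that fall outside the policy window.
func CheckFreshness(rec AttestationRecord, now time.Time) error {
	if now.Sub(rec.VerifiedAt) > maxQuoteAge {
		return ErrStale
	}
	return nil
}
```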

What attestation actually proves

Attestation proves what binary ran, on what hardware, at one moment in time. It is commonly assumed to also prove freshness, state continuity, operator accountability, and ongoing confidentiality. None of that is true by default.

This pattern is rampant in Web3: projects present raw attestation data and a row of green checkmarks, effectively dumping the burden of verification on the user. Unless your users are also security researchers, this is nothing more than verification theater.

The missing pieces

When you scratch beneath the surface of a standard "verified" TEE, critical gaps emerge that a simple quote cannot solve.

Freshness & Liveness – A stale but valid quote is indistinguishable from a recent one unless freshness is explicitly enforced.

State Continuity & Anti-Rollback – An attestation proves the code is correct, but it cannot prove the data is current. Without a mechanism to anchor the enclave to a live ledger as the timekeeper, a malicious operator could restart an enclave and feed it an older version of its encrypted state (e.g., an old account balance), and the attestation would still look perfectly valid to an outsider.

You need a system that anchors the "root of state" onchain to ensure the enclave is always processing the latest state and hasn't been "rewound" by a malicious host.
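
As a sketch of that idea, the check below refuses any restored state snapshot that is behind the root anchored onchain, regardless of how valid the attestation looks. The Ledger interface and field names are hypothetical.

```go
// Sketch of rollback protection by anchoring the enclave's state root onchain.
// Before accepting a restored (encrypted) state snapshot, compare its root and
// version with the latest values recorded on a ledger acting as the timekeeper.
package attestation

import (
	"bytes"
	"errors"
)

// Ledger is any source of consensus-finalized state roots (e.g. an onchain registry).
type Ledger interface {
	LatestStateRoot(enclaveID string) (root []byte, version uint64, err error)
}

// Snapshot describes the encrypted state the enclave was asked to resume from.
type Snapshot struct {
	Root    []byte // Merkle root of the enclave's encrypted state
	Version uint64 // monotonically increasing state version
}

var ErrRollback = errors.New("state snapshot is older than the anchored root: possible rollback")

// CheckContinuity rejects any snapshot that is behind the onchain anchor,
// even if the code measurement in the attestation is perfectly valid.
func CheckContinuity(ledger Ledger, enclaveID string, snap Snapshot) error {
	anchoredRoot, anchoredVersion, err := ledger.LatestStateRoot(enclaveID)
	if err != nil {
		return err
	}
	if snap.Version < anchoredVersion {
		return ErrRollback
	}
	if snap.Version == anchoredVersion && !bytes.Equal(snap.Root, anchoredRoot) {
		return errors.New("state root does not match the anchored root")
	}
	return nil
}
```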

TCB governance – Intel (or another provider) may update TCB levels, or fail to. Verifiers may also have stricter threat models than manufacturers: for example, Intel considers physical attacks (wiretapping, Battering RAM) out of scope. Without continuous policy checks or additional onchain enforcement, outdated or insecure CPUs remain "trusted."

Operator Binding with "Skin in the Game" – A quote tells you what is running, but not who is running it. A random anonymous VPS provides zero accountability. Real trust requires binding the hardware’s cryptographic identity to a slashable, onchain operator identity. If the node misbehaves or disappears, there must be an economic cost.
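
A sketch of what that binding could look like: the enclave's attestation key is only registered against an operator that has posted stake, so there is something to slash. The registry interface and minimum stake are hypothetical.

```go
// Sketch of operator binding: the hardware attestation key is registered
// against a staked, slashable operator identity, so misbehavior has an
// economic cost. The registry abstraction below is illustrative.
package attestation

import "errors"

// OperatorRegistry is a stand-in for an onchain registry with staking.
type OperatorRegistry interface {
	StakeOf(operator string) (uint64, error)               // bonded stake of an operator
	BindAttestationKey(operator string, key []byte) error  // record key -> operator onchain
	Slash(operator string, amount uint64) error            // penalize provable misbehavior
}

const minStake = 1_000_000 // illustrative minimum bond

// RegisterEnclave refuses to bind an enclave's attestation key to an operator
// that has nothing at stake.
func RegisterEnclave(reg OperatorRegistry, operator string, attestationKey []byte) error {
	stake, err := reg.StakeOf(operator)
	if err != nil {
		return err
	}
	if stake < minStake {
		return errors.New("operator has insufficient stake to be accountable")
	}
	return reg.BindAttestationKey(operator, attestationKey)
}
```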

Upgrade history – Without a transparent history, you cannot guarantee data confidentiality. A previous, validly attested version could have been buggy or malicious, exfiltrating keys or data before an update fixed it. If you don't know what code ran previously, you cannot trust that the current state is still private, no matter how secure the current version is.

Code Provenance – Where did the enclave binary come from? Was it reproducibly built? Attestations are useless if someone cannot independently compile the code and verify that its hash matches the deployed version.
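
A minimal sketch of that check, assuming a reproducible-build pipeline exists: rebuild the enclave from the published source and compare the resulting measurement with the attested one. The build function here is a placeholder.

```go
// Sketch of code provenance checking: independently rebuild the enclave and
// compare the resulting measurement with the one carried in the attestation.
package attestation

import (
	"bytes"
	"errors"
)

// ReproducibleBuild stands in for a deterministic build that outputs the
// expected enclave measurement (e.g. MRENCLAVE) for a given source revision.
type ReproducibleBuild func(sourceRev string) ([]byte, error)

// VerifyProvenance fails unless the attested measurement matches what anyone
// can reproduce from the published source at the claimed revision.
func VerifyProvenance(build ReproducibleBuild, sourceRev string, attestedMeasurement []byte) error {
	expected, err := build(sourceRev)
	if err != nil {
		return err
	}
	if !bytes.Equal(expected, attestedMeasurement) {
		return errors.New("attested measurement does not match reproducible build output")
	}
	return nil
}
```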

Policy enforcement – Policies define what "correct" means: which binary should run, which hardware is acceptable, how often re-attestation must happen, approved locations, and more. Yet most systems simply expect clients to parse quotes and "figure it out."
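
One way to make policy explicit is to write it down as data and evaluate evidence against it in one place, rather than leaving each client to improvise. The sketch below is illustrative; the field names are not a real policy format.

```go
// Sketch of an explicit attestation policy: acceptable measurements, TCB
// statuses and the re-attestation interval are declared once and enforced
// uniformly, instead of every client deciding ad hoc what "correct" means.
package attestation

import (
	"encoding/hex"
	"time"
)

// Policy declares what an acceptable enclave looks like.
type Policy struct {
	AllowedMeasurements []string      // hex-encoded enclave measurements allowed to run
	AllowedTCBStatuses  []string      // e.g. only "UpToDate"
	MaxQuoteAge         time.Duration // how often re-attestation is required
}

// Evidence is the already-verified content of an attestation.
type Evidence struct {
	Measurement []byte
	TCBStatus   string
	VerifiedAt  time.Time
}

// Evaluate applies the policy to verified evidence.
func (p Policy) Evaluate(ev Evidence, now time.Time) bool {
	return contains(p.AllowedMeasurements, hex.EncodeToString(ev.Measurement)) &&
		contains(p.AllowedTCBStatuses, ev.TCBStatus) &&
		now.Sub(ev.VerifiedAt) <= p.MaxQuoteAge
}

func contains(set []string, v string) bool {
	for _, s := range set {
		if s == v {
			return true
		}
	}
	return false
}
```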

Consensus as the verifier

These gaps are real, and closing them is the baseline architectural requirement for making TEEs useful. You need infrastructure that continuously verifies attestations, enforces freshness and rollback protection, tracks upgrades, validates TCB policies, binds operators, ensures reproducible builds, and does it all automatically.

That's where a BFT attestation-verifier network comes in. Instead of expecting every client to parse quotes, we push verification to a network of stake-bearing, slashable nodes that reach consensus on attestation validity:

Nodes submit enclave attestations and verification evidence

A fault-tolerant set of validators collectively verifies hardware TCB, measurements, policies, freshness and more

Consensus agreement on verified identities, operators, and attestation policies becomes onchain state

Anyone can verifiably query this onchain state (e.g. using a network's Light Client)

Attestation transforms from a static, complex artifact into a usable onchain signal.

In this model, the raw attestation is the input, and the consensus decision is the output. Users don't need to handle raw hardware quotes or keep up with Intel's latest security advisories. They simply verify a consensus-signed proof that says: "The (economically aligned) validator set has checked the evidence and agrees this specific TEE is secure according to public onchain policies."*

* Crucially, these policies can go beyond simple attestation checks, but more on this in part two of this post.
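
In code, the client-side check collapses to something like the sketch below: verify a consensus signature over a small statement instead of a raw quote. This is illustrative and does not reflect the actual Oasis light-client API.

```go
// Sketch of the client-side check once a BFT network verifies attestations:
// the client checks a consensus signature over a small statement about the
// enclave rather than parsing hardware quotes itself. All names are hypothetical.
package attestation

import (
	"errors"
	"time"
)

// Statement is what the validator set has agreed on and signed.
type Statement struct {
	EnclaveID  string
	PolicyID   string    // the public onchain policy the enclave was checked against
	VerifiedAt time.Time // when the network last verified the attestation
	Valid      bool
}

// ConsensusProof carries the statement plus an aggregate signature from the
// economically bonded validator set.
type ConsensusProof struct {
	Statement Statement
	Signature []byte
}

// ValidatorSetVerifier abstracts "was this signed by 2/3+ of stake?";
// a light client would implement it against onchain validator set state.
type ValidatorSetVerifier interface {
	VerifyAggregate(msg, sig []byte) error
}

// TrustEnclave is all a client needs: check the signature, the verdict, and
// that the network's view of this enclave is recent enough.
func TrustEnclave(v ValidatorSetVerifier, proof ConsensusProof, maxAge time.Duration, now time.Time) error {
	if err := v.VerifyAggregate(encodeStatement(proof.Statement), proof.Signature); err != nil {
		return err
	}
	if !proof.Statement.Valid {
		return errors.New("network judged the attestation invalid")
	}
	if now.Sub(proof.Statement.VerifiedAt) > maxAge {
		return errors.New("consensus view of this enclave is stale")
	}
	return nil
}

// encodeStatement is a placeholder for a canonical encoding of the statement.
func encodeStatement(s Statement) []byte { return []byte(s.EnclaveID + "|" + s.PolicyID) }
```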

The Oasis Network is one example of this, but the principle applies more generally: trust the fault-tolerant consensus of many, not individual quote judgments.

Final thoughts

The process described here is how we turn TEEs from isolated boxes with quotes into integrated, verifiable components within larger trust systems.

In part two, we’ll explore in more detail how Oasis Runtime Offchain Logic (ROFL) extends this to trusted off-chain applications.
