Field Note #10 ∷ On Trusting Trust, Revisited
"Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions."
— Ken Thompson, 1984
I. A lecture worth rereading
In 1984, Ken Thompson — co-creator of Unix, recipient of the Turing Award — gave an acceptance speech, "Reflections on Trusting Trust," that is now one of the most cited and least carefully read lectures in computing.
It is short. You can read it in under ten minutes. Most people who cite it clearly have not.
The lecture is usually summarised as a clever proof: you can hide a backdoor inside a compiler, and that backdoor can survive even if the compiler's source code is inspected and rebuilt from scratch.
The cleverness is real. It is also a distraction.
Thompson's actual point is not the trick. It is the structural observation underneath it.
Consider the epigraph above: "Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions." Those two sentences do the work. The demonstration that follows — the self-reproducing compiler, the backdoor that survives recompilation — simply shows that the observation is true.
The observation is this: the obvious version of any attack would be caught. The successful version operates at a layer casual perusal does not reach.
That layer always exists.
And that is the point.
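If you have not gone back to the original, the shape of the demonstration is worth seeing once. What follows is not Thompson's code (his paper gives the real construction); it is a toy in the same shape, with invented names, showing why a clean login.c and a clean compiler source prove nothing once one binary in the chain has learned the trick.

```c
#include <stdio.h>
#include <string.h>

/* Toy sketch of the mechanism, not Thompson's code: a "compiler"
 * that recognises two targets by pattern and miscompiles both.
 * Stage 1 plants a backdoor in login; stage 2 plants this very
 * recognition logic into any compiler it compiles, so the trick
 * survives recompilation from clean source forever after. */
static void compile(const char *source_name) {
    if (strstr(source_name, "login")) {
        puts("emit: faithful login code + backdoor(magic password)");
        return;
    }
    if (strstr(source_name, "compiler")) {
        puts("emit: faithful compiler code + this recognition logic");
        return;
    }
    printf("emit: faithful translation of %s\n", source_name);
}

int main(void) {
    compile("login.c");    /* backdoored, though login.c is clean     */
    compile("compiler.c"); /* the trick copies itself into the output */
    compile("hello.c");    /* everything else is compiled honestly    */
    return 0;
}
```

The load-bearing detail is the second branch: once it exists in one binary, the source tree can be spotless forever, and casual perusal finds nothing.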
II. What Thompson actually said
Thompson's thesis, stated plainly: "You can't trust code that you did not totally create yourself."
The parenthetical that follows, which most readers forget: "(Especially code from companies that employ people like me.)"
Read that twice.
Thompson is not warning about a hypothetical adversary outside the system. He is pointing out that the people building the trusted tools are themselves part of the world in which adversaries exist.
The threat is not at the perimeter. It is in the supply chain of trust.
The compiler that compiles your compiler was written by someone. That someone works for an organisation. That organisation exists within a set of incentives, constraints, and interests that change over time. That organisation operates under the jurisdiction of a nation-state. Every layer of the trust chain was authored by someone, configured by someone, and ultimately depends on hardware that has not been fully verified.✦
Thompson's lecture is not telling you to be paranoid about your tools.
It is telling you that the trust commitment was always there — that you have been making it your whole career — and that the question is whether you have made it visible enough to defend.
III. The example that arrived forty-one years late
In April 2026, researchers at SentinelLabs published an analysis of a malware framework they call fast16.❈
The framework was compiled in 2005. It targets a narrow class of high-precision engineering and scientific simulation programs — the tools used to model car crashes, structural loads, fluid dynamics, and, on a plausible reading of the evidence, nuclear weapons physics.
It does not destroy the programs. It does not steal their data.
It modifies the floating-point calculations they perform, in memory, as they run.
The outputs look like real outputs. Engineers using the affected systems would have no reason to suspect that anything was wrong.
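The SentinelLabs write-up is the place for the real mechanics; nothing below is fast16's code. It is a deliberately crude sketch of why this class of tampering works: a 0.1% perturbation to a single coefficient of an iterated calculation, with every number invented for illustration.

```c
#include <stdio.h>

/* Crude illustration of the failure mode, not of fast16 itself:
 * perturb one coefficient of an iterated calculation by 0.1%.
 * Each individual step is far below any plausible review threshold;
 * the cumulative drift after 200 steps is not. */
int main(void) {
    double honest = 1.0, tampered = 1.0;
    const double k     = 1.05;       /* honest coefficient (invented) */
    const double k_bad = k * 1.001;  /* 0.1% perturbation, per step   */

    for (int step = 0; step < 200; step++) {
        honest   *= k;
        tampered *= k_bad;
    }

    printf("honest:   %.6e\n", honest);
    printf("tampered: %.6e\n", tampered);
    printf("drift:    %.1f%% after 200 steps\n",
           (tampered - honest) / honest * 100.0);
    return 0;
}
```

Roughly 22% drift, from a perturbation no spot check of any single step would catch. A system whose outputs look like real outputs is exactly what this produces.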
What matters is not the sophistication of the code.
What matters is the layer it operates on.
Defenders were looking at files on disk, network traffic, and known-bad signatures. fast16 was none of those things. It intercepted the moment a program was loaded into memory, recognised a narrow set of targets, and substituted slightly different mathematics into the running system.
A sample sat on a public malware-analysis site for nearly a decade before anyone understood what it was.
A large part of the problem is how frameworks like MITRE ATT&CK are used in practice: they train analysts to map observations onto known categories, and what falls outside those categories is easily treated as noise. You cannot ask questions you do not yet have a frame to imagine.
The sample was not hidden.
It was below the threshold of what defenders thought to look at.
This is Thompson's lecture, made concrete.
IV. The map and the territory
There is a softer version of this problem, and it is worth naming, because most working analysts encounter it long before they encounter anything as exotic as fast16.
Modern security operations are organised around shared frameworks for describing adversary behaviour. The most widely used of these, MITRE ATT&CK, is a careful, openly maintained catalogue of tactics and techniques observed in real-world attacks. It has done enormous good. It gives analysts a common vocabulary, makes detection engineering tractable, and lets defenders compare notes across organisations.
The framework was built as a shared description of what has been observed.
It was not built as an exhaustive ontology of what can happen.
This distinction is small on paper and load-bearing in practice.
A generation of analysts has now been trained to think inside the framework — to map every observation onto a known technique, to treat un-mappable observations as noise, analyst error, or low-priority anomaly.
The framework trains a parser that recognises only what it has been trained to see.
The framework becomes the imagination. The imagination becomes the limit.
This is not a flaw in the framework. It is the limit any sufficiently useful map carries with it.
The categories that are not on the map are precisely the categories where the next fast16 will operate.
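The same point, reduced to code. The technique IDs below are real ATT&CK identifiers; the observation strings, the matching, and the triage decision are invented, a sketch of the failure mode rather than of any real detection pipeline.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of a detection pipeline that can only recognise what its
 * catalogue already names. The ATT&CK IDs are real; the observation
 * strings and the mapping are invented for illustration. */
static const char *catalogue[][2] = {
    {"script interpreter spawned by document", "T1059"}, /* Command and Scripting Interpreter */
    {"credential store memory read",           "T1003"}, /* OS Credential Dumping             */
    {"thread injected into another process",   "T1055"}, /* Process Injection                 */
};

static const char *classify(const char *observation) {
    for (size_t i = 0; i < sizeof catalogue / sizeof catalogue[0]; i++)
        if (strstr(observation, catalogue[i][0]))
            return catalogue[i][1];
    return NULL; /* un-mappable: in practice, triaged as noise */
}

int main(void) {
    const char *obs = "loader rewrites floating-point constants at load time";
    const char *technique = classify(obs);
    printf("%s -> %s\n", obs, technique ? technique : "noise");
    return 0;
}
```

The bug is not in classify(); it does exactly what it was built to do. The bug is in treating its NULL as a verdict about the world rather than a statement about the catalogue.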
☞ An honest reflection that came up while drafting this Field Note. I have been giving security advice to organisations small and large for more than twenty years. The shape of that advice has changed over that period — the tools, the threats, the regulatory environment — but one assumption ran through all of it: that nation-state-grade capability generally stayed where nation-states aimed it, and that the security needs of ordinary organisations could be reasoned about separately. Reading the SentinelLabs analysis, I am no longer sure that assumption was ever true. fast16 was not specifically aimed at any of my clients, to my knowledge. But the category it represents was already loose in the world, shaping what security meant, before any of us had a name for it. I do not think this category has a name yet. It needs one. I have been calling it "epistemic-layer attacks" (or "Sophon attacks") in my own notes, for what that's worth.
V. The practical problem
Most readings of Thompson stop at the philosophical conclusion: therefore you can't fully trust anything.
This is true.
It is also useless as a working stance.
The useful question is different.
If trust commitments cannot be eliminated, can they at least be made visible?
Can you know which layers of your computational stack you have decided to trust, who you are trusting at each layer, and what conditions would make those commitments worth revisiting?
This is the discipline.
It is the same discipline Gödel forced on mathematics in 1931, when he showed that no sufficiently powerful, consistent formal system can prove its own consistency.
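In modern terms, and compressed past all the careful hypotheses (consistency, effective axiomatization, enough arithmetic, each of which matters), the second incompleteness theorem says:

```latex
% If T is a consistent, effectively axiomatized theory
% interpreting enough arithmetic, then
T \nvdash \mathrm{Con}(T)
% T cannot prove the sentence asserting its own consistency.
```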
Gödel's work did not make mathematics impossible. It made it honest about its limits.
Thompson's lecture does the same thing for computational trust.
It does not make trust impossible.
It asks that trust be honest about its dependencies.
The fast16 case shows what happens when that discipline is absent. The trust commitments were invisible. The defenders did not know which layers they had implicitly chosen to trust. The adversary did.
For twenty years, the adversary's view of the trust structure was clearer than the defender's.
That is the asymmetry.
☞ In principle, subscription-based software ecosystems with automated update mechanisms make untraceable, per-user, per-file manipulation achievable in ways that are essentially unobservable. Ken Thompson would have to analyse the entire source of Excel each time he ran it. The static-binary world had bounded trust commitments: you compiled once, you trusted the toolchain, and the artefact on disk stayed where you put it. fast16 sat on a public analysis site for nearly a decade because it was static — there was a sample to collect, eventually share, and eventually decode. The subscription model removes the artefact. The binary that ran for one user, on one machine, for one calculation, may never exist anywhere else. There is no sample to upload. There is no public site. There is the moment of execution, and then there is nothing. This is the infrastructure our epistemic layer is already running on.
VI. A working heuristic
If there is one thing to take from Thompson's lecture, applied to ordinary working life, it is this:
For every important conclusion you reach through a computational tool, ask: which layers of the stack did I decide to trust, and why?
You will not be able to verify those layers.
That is not the point.
The point is that knowing where your trust is placed is the precondition for noticing when something has changed — when a vendor is acquired, a maintainer leaves, a dependency shifts, or a layer that was previously below your threshold of attention becomes worth examining.
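One way to make that concrete is to keep the answer as a record rather than a feeling. Every entry below is an invented example; what matters is the shape: the layer, who is trusted there, and the condition that would reopen the question.

```c
#include <stdio.h>

/* Sketch of a trust ledger as a data shape, not a tool.
 * Every entry below is an invented example. */
struct trust_entry {
    const char *layer;         /* what you depend on               */
    const char *trusted_party; /* who you are actually trusting    */
    const char *revisit_when;  /* what would reopen the commitment */
};

int main(void) {
    const struct trust_entry ledger[] = {
        {"compiler toolchain", "distro build infrastructure",
         "signing keys or maintainership change"},
        {"numerical library",  "upstream maintainers",
         "project is acquired, forked, or abandoned"},
        {"CPU microcode",      "hardware vendor",
         "update channel or vendor ownership changes"},
    };

    for (size_t i = 0; i < sizeof ledger / sizeof ledger[0]; i++)
        printf("%-18s | trusting: %-28s | revisit when: %s\n",
               ledger[i].layer, ledger[i].trusted_party,
               ledger[i].revisit_when);
    return 0;
}
```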
There is a harder version of the same heuristic:
Which layers did I not think to enumerate at all? What would tell me they were there?
This question cannot be answered from inside the working frame alone. It requires reading beyond it — old papers, adjacent disciplines, and the tradition of demonstrations that exist precisely to expand what the field considers possible.
This is not paranoia.
Paranoia assumes hidden hostility everywhere.
This discipline assumes hidden dependencies somewhere — and asks you to find them.
VII. The harder version
There is a harder version of this argument still.
The trust commitments we make in our computational tools are not separate from the trust commitments we make in our institutions.
Every layer of the computational stack is embedded in a social and political context.
The compiler was written by someone, employed by an organisation, operating under a legal regime, within a set of incentives and constraints that evolve over time.
The operating system, the libraries, the simulation tools, the frameworks we use to describe attacks — each layer reflects not just technical decisions, but the conditions under which those decisions were made.
Thompson's lecture, read carefully, does not stop at code.
It is a claim about the structure of trust in any complex system.
This does not mean institutions are untrustworthy.
It means the trust we place in them is of the same kind Thompson described: chained, ungroundable, and worth making visible.
The wisdom is not in eliminating that trust.
The wisdom is in knowing it is there.
VIII. Where this leaves us
Limits are structural.
You cannot verify your way out of them.
You can only locate them, name them, and work honestly within them.
This is true for mathematical systems, as Gödel showed.
It is true for meaning and interpretation, as earlier Field Notes have argued.
It is true for the computational tools we use to think, as Thompson showed in 1984 — and as fast16 has now made concrete.
The discipline is the same in each case.
Make the trust commitments visible.
Read the trusted layers carefully when you can.
Remember that the list of layers you can name is not the list of layers that exist.
Notice when conditions change.
Treat conclusions reached under invisible trust as provisional.
Build systems — and institutions — that can hold these limits without pretending to eliminate them.
None of this is comfortable.
None of it is supposed to be.
But it is the work.
And it has been the work for a long time.
The only question is whether we choose to do it consciously, or continue doing it accidentally.
— Trey
✦ The hardware layer is the one most readers will instinctively skip past as solid ground. It is not. The silicon was designed by someone, fabricated in a facility located somewhere, packaged through a supply chain that crosses jurisdictions, and shipped through logistics networks whose integrity assumptions are also load-bearing and also unverified. Mitigating the Year 2038 problem will eventually require replacing a great deal of embedded silicon at scale, and the question of where that silicon comes from — and what trust commitments come with it — is one of the load-bearing assumptions of long-horizon timing-resilience work. A future Field Note will examine this properly. ↩
❈ The original SentinelLabs analysis is published at sentinelone.com/labs under the title "fast16 | Mystery ShadowBrokers Reference Reveals High-Precision Software Sabotage 5 Years Before Stuxnet" by Vitaly Kamluk and Juan Andrés Guerrero-Saade, dated 23 April 2026. The piece is technically dense but worth reading in full. The framework was first surfaced through a deconfliction note in the 2017 ShadowBrokers leak of NSA tooling, where the operator instruction read: "fast16 *** Nothing to see here – carry on ***". The note was correct on its own terms. Nothing to see — until someone bothered to look at a layer the analytical tradition had not been trained to examine. ↩
If you received this via email, the canonical archive version lives here: propertools.be/fieldwork/field-note-10-on-trusting-trust-revisited/