The Standard We Set: Trust, Power, and the Quiet Politics of Cryptographic Standards

Reading Time: 14 minutes

Category: Post-Quantum Cryptography

Author: Derk Bell, Jeroen Scheerder

Summary

In The Standard We Set: Trust, Power, and the Quiet Politics of Cryptographic Standards, Derk Bell and Jeroen Scheerder dissect the hidden power structures behind the world’s most trusted cryptographic standards. The piece opens with a familiar image – airport passport control – to illustrate how invisible systems of trust, mediated by algorithms and standards, govern global interactions.

The authors argue that cryptographic trust isn’t purely technical – it’s political. Institutions like NIST, ETSI, and ISO don’t merely reflect consensus; they create it, defining what counts as secure. Through historical cases such as the Dual_EC_DRBG backdoor scandal and the DigiNotar breach, the essay shows how centralised trust produces systemic fragility: one institution’s compromise can cascade worldwide.

The post critiques how “open standards” often mask closed processes dominated by governments and corporations. True transparency, they argue, requires open participation, verifiable parameters, and systems built for failure tolerance. Examples like Signal, Libsodium, and Certificate Transparency demonstrate what “verifiable trust” looks like – security grounded in openness, agility, and pluralism rather than blind faith.

Trust, Power, and the Quiet Politics of Cryptographic Standards


Standards are agreements disguised as truth


Picture yourself at an international airport, waiting in line at passport control. In your hand is a document bound by agreements you never signed, validated by organisations you’ve never heard of, and secured by algorithms you didn’t choose. And yet, a border guard scans it, nods, and waves you through.

That moment of machine-readable, globally accepted recognition isn’t magic. It’s a delicate dance of trust, built on standards. The cryptographic handshake that happens behind the scenes relies on assumptions about who to trust, what’s secure, and whose signature counts. But who decides those assumptions? Who writes the rules behind the trust?

In cryptography, trust isn’t just a matter of math; it’s curated, codified, and sanctioned by institutions. And sometimes, that process has consequences far more political than technical.

Cryptographic trust isn’t just about secure algorithms; it’s about who gets to define “secure.” Standards bodies like NIST, ETSI, and ISO have an outsized influence in determining which algorithms make it into our browsers, hardware, and compliance checklists. Their endorsements don’t just reflect consensus, they create it. When any one institution becomes the de facto authority, whether by market dominance or regulatory entrenchment, it shifts the landscape of security itself.

This power extends far beyond documentation. Standards shape the economic and technical constraints that developers and vendors operate under. An algorithm blessed by a standards body doesn’t just carry prestige; it gets baked into firmware, deployed in government systems, and required by auditors. The cost of deviating from these choices can be prohibitive, even if better alternatives exist.

These dynamics aren’t inherently malicious, but they do introduce a structural fragility. When trust is centralised, so is failure. And if an institution’s internal processes are opaque, its recommendations, however well-intentioned, become difficult to question.

This is especially relevant when trust in national institutions intersects with geopolitics. In the Netherlands, for instance, public debate around the Intelligence and Security Services Act (Wiv) revealed how far governments might go in demanding access to encrypted data. Critics pointed out that surveillance powers could be expanded without sufficient oversight, relying on technical standards that the public didn’t understand, and never agreed to. Even in open societies, the rules of cryptographic trust are often decided behind closed doors.

No institution has shaped the modern cryptographic landscape more than the U.S. National Institute of Standards and Technology (NIST). From hash functions to elliptic curves, and from random number generators to post-quantum candidates, NIST doesn’t just recommend algorithms, it defines what counts as secure for much of the world.

This isn’t just because of technical excellence. It’s a product of regulatory gravity. U.S. federal procurement mandates, FIPS compliance, vendor defaults, and international dependencies all orbit around NIST’s standards. When NIST speaks, browsers listen, chip vendors adapt, and auditors take notes.

But this centrality creates a problem: if NIST gets it wrong, the failure isn’t local, it’s global.

Nowhere is this clearer than in the case of Dual_EC_DRBG, a cryptographic random number generator that NIST standardised in 2006 as part of SP 800-90. Security researchers flagged it as suspect almost immediately. It was slow, oddly designed, and structured so that a backdoor was possible if its constants had been chosen with malicious intent. In 2007, Dan Shumow and Niels Ferguson publicly demonstrated that anyone who knew a specific secret relationship between those constants could recover the generator’s internal state and predict every “random” number it produced [1].
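To make the mechanism concrete, here is a deliberately toy sketch of how that kind of trapdoor works. It swaps elliptic-curve point multiplication for modular exponentiation in a small group, so every number below is an illustrative stand-in, not a real P-256 constant, and it ignores the output truncation that forces a real attacker to brute-force a few missing bits.

```python
# Toy analogue of the Dual_EC_DRBG backdoor. The real generator works over the
# NIST P-256 curve; here modular exponentiation in a small prime group stands
# in for point multiplication, purely to show the structure of the trapdoor.

p = 4294967291          # small prime modulus (toy group, not a real curve)
Q = 5                   # one public constant

# The standard-setter secretly picks d and publishes P = Q^d, never explaining
# where P came from. Knowing d is the backdoor.
d = 0x1337
P = pow(Q, d, p)

def dual_ec_step(state):
    """One round of the toy generator: next state comes from P, output from Q."""
    next_state = pow(P, state, p)   # analogue of s_{i+1} = x(s_i * P)
    output     = pow(Q, state, p)   # analogue of r_i     = x(s_i * Q)
    return next_state, output

# A victim seeds the generator and emits two "random" outputs.
seed = 0xDEADBEEF
state1, out1 = dual_ec_step(seed)
state2, out2 = dual_ec_step(state1)

# An attacker who knows d recovers the victim's next state from out1 alone,
# because (Q^s)^d = (Q^d)^s = P^s, which is exactly the next state.
recovered = pow(out1, d, p)
assert recovered == state1

# From there, every future output is predictable.
_, predicted = dual_ec_step(recovered)
assert predicted == out2
```

The structural point is the one that matters: whoever chose the relationship between the two public constants can turn a single observed output into the generator’s next state.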

Then came the Snowden leaks. In 2013, documents revealed that the NSA had indeed influenced the inclusion of Dual_EC_DRBG into the standard and had the capability to exploit it [2]. It wasn’t just a flawed algorithm; it was a government-sanctioned backdoor.

Even worse, RSA Security, a major vendor of cryptographic libraries, was allegedly paid $10 million to make Dual_EC_DRBG the default in its products [3]. This wasn’t a mistake. It was a quiet conspiracy, executed under the guise of technical authority.

It wasn’t the only one.

NIST’s elliptic curve recommendations (P-256, P-384) have long faced scrutiny for their unexplained parameter choices. Why were certain seeds used to generate the curve constants? NIST has never provided a verifiable explanation. As Bernstein and Lange point out in the SafeCurves project [4], unexplained seeds open the door to the possibility of preselected weaknesses, however unlikely. Without transparency, trust becomes a leap of faith [5].

This is not to say NIST is malicious. But the incentives it operates under (national interest, geopolitical influence, institutional inertia) are not always aligned with global cryptographic safety. And yet, many nations adopt NIST standards wholesale, whether due to trade dependencies, regulatory harmonisation, or a lack of local alternatives.

This is monoculture at protocol scale. It doesn’t matter how rigorous your implementation is if the underlying standard is flawed. And when billions of devices and systems implement the same standard, a single point of failure becomes a global vulnerability.

That’s why cryptographic trust needs pluralism. Not just multiple algorithms, but multiple standardisation paths, threat models, and institutions. Because when trust is centralised, so is risk, and mistakes scale with terrifying efficiency.

Even after they’re broken, cryptographic standards tend to linger. In theory, updating a vulnerable primitive should be a priority. In practice, it often takes years, and sometimes longer, before outdated algorithms are fully phased out. SHA-1 was declared broken well before browsers finally deprecated it [6]. RC4 persisted in TLS stacks despite known weaknesses [7]. These aren’t delays in patching; they’re symptoms of cryptographic inertia.

Part of the problem is how deeply standards embed themselves into our infrastructure. Consider DOCSIS cable modems, used by ISPs worldwide. Many shipped with root certificates hardcoded in their firmware images, unchangeable without a full firmware reflash, which ISPs rarely push [8]. If one of those CAs is compromised, every device depending on it becomes a soft target. Updating isn’t just difficult; it’s logistically and economically prohibitive.

Then there are the Certificate Authorities themselves. These are supposed to be trust anchors, but history shows they can be the weakest link. In 2011, the Dutch CA DigiNotar was hacked. The attacker issued fraudulent certificates for major domains, including Google, which were used in active surveillance. When the breach was discovered, trust in DigiNotar’s root certificate was revoked overnight. Every browser and OS vendor had to pull it. Government systems relying on DigiNotar scrambled. The CA collapsed [9]. The lesson is that trust hierarchies are brittle. A single point of failure, whether a compromised CA or a deprecated hash function, can cascade across systems and borders.

And the longer broken standards stick around, the wider the blast radius when they finally fail.

We like to think of standards bodies as neutral arbiters, consensus-driven, transparent, and open to scrutiny. But “openness” is often more about optics than access. Participating in a standards process requires time, funding, and connections. It’s no coincidence that many cryptographic standards originate from government agencies, military-funded labs, or industry giants. These are the players who can afford to show up.

Even when processes are technically open, influence isn’t evenly distributed. Academic researchers may submit proposals, but industrial and government stakeholders often dominate the actual selection and review. Algorithms with strong theoretical grounding but no institutional sponsor can be quietly sidelined. Meanwhile, candidates backed by established vendors tend to get more traction, not necessarily because they’re better, but because they’re better positioned.

Then there’s the question of transparency. In theory, everything should be auditable. In practice, decisions often happen in closed rooms or through opaque consensus. The public comment periods don’t always result in meaningful changes. And when parameter choices or design rationales aren’t published, the openness becomes symbolic rather than substantive.

Contrast that with community-driven models like the IETF’s Crypto Forum Research Group (CFRG), where discussions happen in the open, proposals are versioned publicly, and pushback is not just tolerated but expected. It’s not a perfect model, but it shows that transparency isn’t just about documents; it’s about process, participation, and power.

If we want to trust our standards, we need more than openly published PDFs. We need a system where no decision is above scrutiny, no parameter is unexplained, and no institution is beyond question.

So how do we build cryptographic systems we can actually trust? Not by replacing one authority with another, but by designing systems that are open by default, legible under pressure, and resistant to single points of failure.

Principle #1: Transparency

The first principle is transparency: publish your seeds, explain your parameters, and expose your assumptions. Security through obscurity doesn’t just fail; it invites backdoors. Every component should be independently reproducible and explainable.
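To make “independently reproducible” concrete, here is a minimal sketch of a nothing-up-my-sleeve derivation: the constant is computed from a fully spelled-out public string, so anyone can re-run the derivation and confirm nothing else was smuggled in. The string, the hash choice, and the constant below are hypothetical illustrations, not how any existing standard derives its parameters.

```python
import hashlib

# A "nothing-up-my-sleeve" derivation: the constant is the hash of a public,
# human-readable rationale, so there is no unexplained seed to take on faith.
RATIONALE = (
    b"example-protocol v1 domain-separation constant, "
    b"derived as SHA-256 of this exact sentence"
)

def derive_constant(rationale: bytes) -> bytes:
    """Derive a 32-byte protocol constant from a published rationale string."""
    return hashlib.sha256(rationale).digest()

# Anyone can reproduce and audit the published value.
constant = derive_constant(RATIONALE)
print(constant.hex())
```

The specific recipe matters less than the property it buys: every byte of the published value traces back to an explanation a reviewer can check.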

Principle #2: Cryptographic agility

The second is cryptographic agility. Systems should be able to upgrade or switch algorithms without a complete rewrite. That means negotiable protocols, modular cryptographic primitives, and fallback mechanisms that degrade safely. Agility doesn’t just prepare us for quantum threats; it makes us resilient to every future algorithmic failure.
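In code, agility tends to look mundane: implementations sit behind a stable interface keyed by a negotiable identifier, so retiring a broken algorithm is a table edit rather than a rewrite. The sketch below is a minimal illustration using HMAC constructions from Python’s standard library; the registry names, preference list, and negotiation rule are invented for the example.

```python
import hashlib
import hmac
from typing import Callable, Dict, List

# Registry of MAC algorithms behind one stable interface. Adding or retiring
# an algorithm means editing this table, not rewriting callers.
MAC_REGISTRY: Dict[str, Callable[[bytes, bytes], bytes]] = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha512": lambda key, msg: hmac.new(key, msg, hashlib.sha512).digest(),
    # "hmac-sha3-256": ...  # tomorrow's addition slots in here
}

# Preference order is policy, kept separate from the implementations.
LOCAL_PREFERENCE = ["hmac-sha512", "hmac-sha256"]

def negotiate(peer_supported: List[str]) -> str:
    """Pick the most-preferred algorithm both sides support."""
    for name in LOCAL_PREFERENCE:
        if name in peer_supported and name in MAC_REGISTRY:
            return name
    raise ValueError("no common MAC algorithm")

def compute_mac(algorithm: str, key: bytes, message: bytes) -> bytes:
    return MAC_REGISTRY[algorithm](key, message)

# Example: a peer that has already deprecated SHA-512-based MACs.
chosen = negotiate(["hmac-sha256"])
tag = compute_mac(chosen, b"shared-key", b"hello")
print(chosen, tag.hex())
```

Real protocols such as TLS add downgrade protection to this negotiation step, which the toy above deliberately omits.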

Principle #3: Defense in depth

The third is defense in depth. Don’t rely on one algorithm, one implementation, or one source of truth. Use hybrid encryption (like classical + post-quantum). Build redundancy into your validation paths. If one component fails, others should still hold.
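One common way to get the hybrid property is to run a classical key exchange and a post-quantum KEM side by side and feed both shared secrets through a single key-derivation function, so the session key survives as long as either component does. The sketch below shows only that combining step, assuming classical_secret and pq_secret stand in for outputs of, say, X25519 and ML-KEM; the HKDF implementation and the labels are illustrative, not taken from any particular standard.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) extract-and-expand using HMAC-SHA256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets. In a real system these would come from
# a classical exchange (e.g. X25519) and a post-quantum KEM (e.g. ML-KEM).
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)

# Concatenate both secrets and derive the session key from the combination.
# Recovering the session key now requires breaking both components.
session_key = hkdf_sha256(
    ikm=classical_secret + pq_secret,
    salt=b"example-hybrid-v1",
    info=b"session key",
)
print(session_key.hex())
```

Standardisation efforts for hybrid key exchange bind more context (public keys, transcripts) into the derivation; the core idea of combining both secrets before use is the same.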

Some projects already embody this philosophy. Signal’s protocol is minimal but battle-tested, with years of open critique [10]. Libsodium prioritises safe defaults and makes misuse-resistant cryptography the norm [11]. Certificate Transparency logs expose malicious certificates in real time [12]. These aren’t perfect systems, but they show us what verifiable trust looks like: loud, transparent, and constantly interrogated.
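As a concrete taste of what “safe defaults” means, the snippet below uses libsodium through its PyNaCl Python bindings: the secretbox construction chooses the cipher, generates and attaches the nonce, and authenticates the ciphertext without asking the caller to decide any of it. Treat it as an illustrative sketch rather than a canonical recipe, and check the current PyNaCl documentation for the exact API.

```python
# Authenticated encryption with libsodium's safe defaults, via PyNaCl.
# The caller never chooses a cipher, mode, nonce scheme, or MAC; misuse-prone
# knobs simply are not exposed.
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")   # nonce generated and attached
plaintext = box.decrypt(ciphertext)           # authentication checked first
assert plaintext == b"attack at dawn"
```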

The goal isn’t to eliminate trust, but to make it verifiable. And that starts by assuming nothing is sacred: not the algorithm, not the standard, and not the body that approved it.

Cryptography is hard, even for experts. Most of us, even seasoned developers, end up leaning on standards, libraries, and best-practice lists curated by people we’ve never met. We search for “what’s secure in 2025”, we quote Bruce Schneier [13], and we read blog posts and RFCs. That’s not laziness; it’s survival. The complexity of modern cryptography demands delegation.

But that delegation creates a subtle risk: the concentration of trust in a handful of institutions, vendors, and individuals. Who defines best practices? Who gets quoted? Are these experts genuinely independent, or are they entangled in financial and political incentives? Can cryptographers be “bought”? History suggests they can. Influence isn’t always wielded maliciously, but it doesn’t have to be. A skewed incentive, an unchallenged default, or a well-funded submission is enough to shape the standards that billions will inherit.

That’s why trust must be continuously questioned, never blindly granted. The most resilient cryptographic systems are those that plan for betrayal, systems that assume compromise, tolerate churn, and welcome scrutiny.

We don’t need more heroic authorities. We need standards that are legible, replaceable, and plural. A passport that can be verified by multiple issuers, a browser that doesn’t collapse when one CA falls, a crypto system that evolves without burning down.

The future of cryptography isn’t about picking the right authority to follow. It’s about designing systems where authority can fail, and we’re still safe.

In other words, the best standard is one you can walk away from.

FAQ

What is the main argument of the article?

Cryptographic standards are not neutral or purely technical; they are political constructs shaped by institutional power, national interests, and opaque decision-making processes.

Why is NIST so influential in global cryptography?

Because its standards are embedded in U.S. federal procurement rules and widely adopted worldwide, giving it regulatory and technical gravity that defines “secure” for much of the global ecosystem.

What was the Dual_EC_DRBG controversy?

NIST standardised a flawed random number generator that later turned out to contain a potential NSA backdoor, illustrating how centralised authority can lead to global vulnerabilities.

What is meant by “cryptographic monoculture”?

It refers to an overreliance on a single set of standards or authorities. When one fails, the entire global system becomes vulnerable due to lack of diversity and redundancy.

What is “verifiable trust”?

A philosophy where systems are transparent, independently auditable, and designed to withstand failure – trust that can be proven, not assumed.

What reforms do the authors advocate?

They propose transparency in design parameters, cryptographic agility to swap algorithms easily, and pluralism in standard-setting institutions to prevent power concentration.

How does this issue relate to geopolitics?

Cryptographic standards often reflect national interests; for example, governments may shape standards to enable surveillance or maintain strategic control over global communications security.

What can developers and policymakers learn from this?

To prioritize open processes, challenge authority, and design systems that remain secure even if trusted institutions are compromised.


[1] Shumow & Ferguson, Dual_EC_DRBG Backdoor Presentation (2007) – https://rump2007.cr.yp.to/15-shumow.pdf.

[2] NSA Influence on NIST Standards (ProPublica, 2013) – https://www.propublica.org/article/the-nsas-secret-campaign-to-crack-undermine-internet-encryption.

[3] RSA’s $10M Deal (Reuters, 2013) – https://www.reuters.com/article/us-usa-security-rsa-idUSBRE9BJ1C220131220.

[4] P-256 Curve and SafeCurves Project – https://safecurves.cr.yp.to.

[5] Cloudflare Blog on P-256 Parameter Transparency – https://blog.cloudflare.com/how-to-generate-secure-ecdsa-keys/.

[6] SHA-1 Collision (SHAttered, 2017) – https://shattered.io.

[7] RC4 Biases (Vanhoef et al., USENIX, 2015) – https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-vanhoef.pdf.

[8] EFF on Insecure Certificate Roots – https://www.eff.org/deeplinks/2015/02/verizon-hidden-peril-digital-certificates.

[9] Diginotar Breach Audit Report (Fox-IT, 2012) – https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2012/08/13/diginotar-public-report-version-1/diginotar-public-report-version-1.pdf.

[10] Signal Protocol Specification – https://signal.org/docs/specifications/doubleratchet/.

[11] Libsodium Documentation – https://libsodium.gitbook.io/doc/.

[12] Certificate Transparency Overview – https://certificate.transparency.dev.

[13] Bruce Schneier on Trust in Cryptography – https://www.schneier.com/blog/archives/2013/09/nsa_influence_o.html.