The Trust Shift: Secure Enclaves for Private Nostr Relays

Max

npub1klkk3vrzme455yh9rl2jshq7rc8dpegj3ndf82c3ks2sk40dxt7qulx3vt

hex

9a105691bed77ff2734a3f62c12c16d408967fcdccf353ffd8473212337bac4a

nevent

nevent1qqsf5yzkjxldwlljwd9r7ckp9stdgzyk0lxueu6nllvywvsjxda6cjsprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgst0mtgkp3du662ztj3l4fgts0purksu5fgek5n4vgmg9gt2hkn9lq97xa9q

naddr

naddr1qqgr2cfcvcmrjwfcvdjnse35xuckvqgcwaehxw309aex2mrp0yhxwatvw4nh2mr49ekk7egzyzm7669svt0xkjsju50a22zurc0qa589z2xd4yatzx6p2z64a5e0cqcyqqq823c8f5m83

Kind-30023 (Article)

2025-12-26T10:42:26Z

The previous post in this series established that PIR fails for Nostr. The failure is structural, not a matter of implementation difficulty or performance cost, though both are severe. Nostr queries combine multiple predicates, require range filters, demand real-time subscriptions, and scatter across dozens of relays per user. PIR was designed for single-index lookups from a cooperative server. Nostr is something else entirely.

So we turn to different technology. What if the relay itself ran inside a secure enclave, where even the machine's owner cannot see what happens inside? This is the promise of Trusted Execution Environments (TEEs): hardware-enforced privacy anchored in silicon, independent of whoever runs the server.

Signal deploys this model for contact discovery. When you install Signal, it checks which of your phone contacts also use Signal. Signal has no desire to know your contacts. Their solution: an Intel SGX enclave running on Signal's servers. Your phone establishes an encrypted channel directly into the enclave, sends hashed phone numbers, and receives back only the intersection. Signal's operators see encrypted traffic flowing in and out, with the query contents opaque throughout.
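Signal's production protocol is considerably more elaborate (phone numbers are low-entropy, so hashing alone is weak; the enclave itself is what keeps the registered set private), but the core intersection step the enclave performs can be sketched in a few lines. All names here are hypothetical:

```python
import hashlib

def hash_contact(phone: str) -> str:
    # Truncated SHA-256 of the normalized number. Illustrative only:
    # phone numbers are low-entropy, so real systems rely on the
    # enclave's confidentiality rather than the hash for protection.
    return hashlib.sha256(phone.encode()).hexdigest()[:16]

def enclave_intersect(query_hashes: list, registered_hashes: set) -> list:
    # Conceptually runs inside the enclave: only the intersection
    # ever leaves the protected memory region.
    return sorted(set(query_hashes) & registered_hashes)

# The server's registered users (as hashes) and one client's contacts.
registered = {hash_contact(p) for p in ["+15551230001", "+15551230002"]}
query = [hash_contact(p) for p in ["+15551230002", "+15551239999"]]
print(enclave_intersect(query, registered))
```

The operator hosting this computation sees encrypted traffic in and out; the `query` list and the non-matching entries never appear outside the enclave.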

Could the same model work for Nostr relays?

The mechanism

Intel SGX creates protected memory regions called enclaves. The CPU's memory encryption engine encrypts all data leaving the processor. Code running inside the enclave can access its data in cleartext; code running outside, including the operating system, the hypervisor, and anyone with root access, sees only encrypted bytes. AMD's SEV-SNP provides similar guarantees at the virtual machine level: encrypt an entire VM with keys inaccessible to the host.

AMD's approach offers a practical advantage. Because SEV-SNP operates at the VM boundary, existing applications can run inside confidential VMs natively. Technologies like Confidential Containers and Kata Containers let you deploy standard containerized workloads into SEV-SNP protected environments. A Nostr relay could potentially run unmodified inside a confidential container, gaining hardware-enforced isolation and sparing the relay software from SGX's constrained enclave model.

The critical feature for both approaches is remote attestation. Before sending sensitive data to an enclave or confidential VM, a client can demand cryptographic proof that specific code is running inside attested hardware. The TEE generates a measurement of its code, the hardware signs that measurement with a key traceable to the chip manufacturer, and the client verifies the signature chain back to Intel's or AMD's root certificate. If the code hash matches what the client expects and the signature chain validates, the client knows the environment is running the right software on real hardware.
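The verification flow above can be caricatured in code. Real quotes carry X.509 certificate chains rooted at Intel's or AMD's CA; the shared HMAC key below is a deliberately simplified stand-in for that chain, and every name is hypothetical:

```python
import hashlib
import hmac

# Stand-in for the manufacturer-rooted signing key. In reality the
# client verifies a certificate chain, not a shared secret.
MANUFACTURER_KEY = b"stand-in for the vendor attestation key"

def measure(code: bytes) -> str:
    # The TEE hashes the loaded code into a measurement.
    return hashlib.sha256(code).hexdigest()

def sign_quote(measurement: str) -> str:
    # The hardware signs the measurement with a key traceable
    # to the chip manufacturer.
    return hmac.new(MANUFACTURER_KEY, measurement.encode(),
                    hashlib.sha256).hexdigest()

def client_verify(measurement: str, signature: str,
                  expected_measurement: str) -> bool:
    # The client checks two things: the signature chains to genuine
    # hardware, and the measured code is the code it expects.
    genuine = hmac.compare_digest(signature, sign_quote(measurement))
    return genuine and measurement == expected_measurement

code = b"relay v1.0 (reproducible build)"
m = measure(code)
quote = sign_quote(m)
print(client_verify(m, quote, measure(code)))
```

Only after this check succeeds does the client send anything sensitive; a modified relay binary produces a different measurement and fails the comparison.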

A Nostr relay in a confidential environment would work like this: the relay operator deploys open-source relay code inside SGX or an AMD SEV-SNP confidential VM. Clients connect, verify attestation, and establish TLS connections terminating inside the protected environment. REQ filters arrive encrypted, get processed privately, and matching events return over the encrypted channel. The operator sees that communication happened, with the query contents remaining opaque.
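Concretely, the filters in question are standard NIP-01 REQ messages. A sketch of what the operator would otherwise see in cleartext (the author key, kind, and timestamp are illustrative values):

```python
import json

# A typical NIP-01 REQ: a subscription id plus a filter combining
# author, kind, and time-range predicates. On a TEE relay this JSON
# travels over TLS that terminates inside the protected environment,
# so the operator never observes these fields.
req = ["REQ", "sub1", {
    "authors": ["b7ed68b062de6b4a12e51fd5285c1e1e0ed0e5128cda93ab11b4150b55ed32fc"],
    "kinds": [1],
    "since": 1700000000,
}]
wire = json.dumps(req)
print(wire)
```

It is exactly these fields, who you follow, what kinds you read, and when, that leak your social graph to an ordinary relay.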

That description sounds like exactly what Nostr needs. The picture is more complicated.

The trust shift

All TEE security traces back to the chip manufacturer, but the trust is anchored at manufacturing time, not runtime. Intel generates and fuses a Root Provisioning Key into each SGX processor at the factory, and retains a database of these keys. When a platform first initializes SGX, it proves possession of this fused key to Intel's provisioning service and receives attestation certificates in return. With Intel's Data Center Attestation Primitives, third parties can then verify attestation quotes locally, without contacting Intel. AMD's SEV-SNP follows a similar model with its own key hierarchy and certificate chain.

This distinction matters. The chip manufacturer cannot selectively target a specific user at runtime unless it compromised the hardware at manufacturing time or issued fraudulent certificates at provisioning. But the manufacturer, or anyone who compromised the key generation facility, could have compromised all chips of a given generation. And the manufacturer, or anyone who compromised its certificate authority, could issue attestation certificates for fake enclaves.

The trust assumption is: the manufacturer produced hardware with correctly functioning isolation, the root signing keys remain secure, and no fraudulent certificates have been issued. These are manufacturing-time and infrastructure-time trusts, not runtime cooperation. A malicious relay operator cannot call Intel to decrypt your queries. But a nation-state that compromised a key generation facility years ago, or that holds fraudulent attestation certificates, could potentially forge attestation.

Intel processors also contain the Management Engine: a separate computer inside your CPU that cannot be disabled, has network access independent of the main OS, and runs proprietary firmware. Security researchers have raised concerns about it for years. In 2017, Intel confirmed remotely exploitable vulnerabilities in the Management Engine affecting every Intel platform from 2008 to 2017. AMD's Platform Security Processor has a similar architecture and attracts similar concerns.

Physical access: the line that cannot be crossed

One thing should be stated plainly: if an attacker has physical access to the hardware, absolute privacy does not exist. TEEs were never designed to provide it. Intel and AMD explicitly exclude physical attacks from their threat models. Researchers have demonstrated this repeatedly, from voltage glitching attacks to memory bus interposition. A sufficiently motivated attacker with hands on the machine can extract secrets.

This is not a failure of TEE technology; it is a fundamental limit of computing. Any system that processes data must, at some point, hold that data in a form the processor can operate on. Physical access to the processor means access to that moment.

The practical question is not whether TEEs provide absolute security against physical attackers. The question is whether they provide meaningful security against realistic threat models.

Why big cloud beats your basement

Here the analysis gets counterintuitive. The cypherpunk instinct says: run your own hardware, trust no one, keep it close. For Nostr relays without TEE protection, this logic holds. Your Raspberry Pi in your closet is harder to compromise than a VPS where the hosting provider has root.

TEEs invert this calculus. When hardware-enforced isolation removes the hosting provider's ability to read memory, the physical security of the datacenter becomes the dominant factor. On physical security, hyperscale cloud providers are in a different category.

Amazon, Google, and Microsoft protect their datacenters with mantrap entrances, biometric authentication, 24/7 armed security, and surveillance systems beyond anything a small operator can afford. They have billions in market capitalization at stake. A single verified breach would cost them enterprise customers worth more than the entire Nostr relay community combined. The reputational and financial incentives for maintaining physical security are overwhelming.

Compare this to a $5/month VPS provider or a colocation facility with a bored security guard. The early Bitcoin days are instructive: countless coins were stolen from cheap hosting providers through physical access, insider threats, and lax security practices. The operators were not willfully negligent; they simply could not afford the security the threat model demanded.

For a TEE-protected Nostr relay, the threat model shifts. The relay operator's ability to read queries is neutralized by hardware. What remains is physical security and the integrity of the hardware supply chain. On both counts, AWS running AMD SEV-SNP instances beats a home server. The likelihood of any individual Nostr user being worth the legal and financial cost of a cloud provider physically compromising its own TEE infrastructure approaches zero.

Attestation: knowing what runs

Remote attestation proves something specific: this exact code is running inside attested TEE hardware with this security configuration. For many threat models, this guarantee alone provides substantial value.

Consider the relay operator who wants to build user trust. Today, users must take the operator's word that the relay software matches the published source code, that no logging has been added, that queries are not being sold to data brokers. With attestation, the operator can prove it. The client verifies that the running code matches the expected measurement. The attestation is cryptographic, not social.

This verification has value independent of perfect confidentiality. Knowing that the relay runs unmodified open-source code, signed and attested, changes the trust model from "trust me" to "trust the attestation chain." Even if sophisticated side-channel attacks remain theoretically possible, the operator has no practical path to modify the software for surveillance, as any such change breaks attestation.

For the vast majority of users facing the vast majority of threats, verified code attestation combined with encrypted memory provides a qualitative improvement over the current situation of hoping the relay operator is honest.

The economics of confidentiality

Confidential computing has costs beyond the hardware. When infrastructure operators are blind to what runs on their systems, they lose the telemetry that makes large-scale operations efficient. CPU utilization patterns, memory access profiles, network flow analysis: these observability tools help providers optimize placement, predict failures, and debug issues. Confidential workloads are opaque to all of it.

This opacity breaks economies of scale. A cloud provider running standard workloads can pack VMs efficiently based on observed behavior, migrate workloads preemptively before hardware fails, and amortize operational costs across customers with similar profiles. Confidential VMs deny them this intelligence, and the cost difference gets passed to users. Confidential computing instances carry premium pricing. For a privacy-focused Nostr relay, someone pays that premium: either the operator absorbs it, the users subsidize it, or the relay service costs more than non-confidential alternatives.

This economic reality shapes adoption. TEE-protected relays will exist as a tier for users who value query privacy enough to pay for it, run by operators willing to accept the operational constraints. That is an honest accounting of the tradeoffs.

The honest path

PIR offered mathematical privacy guarantees that Nostr's query model could not satisfy. TEEs offer a different kind of assurance: hardware-enforced isolation that shifts trust from relay operators to chip manufacturers. The tradeoff is real but defensible.

For most Nostr users, the realistic threats are curious relay operators, commercial data harvesting, and mass surveillance through cooperative service providers. Against these threats, a relay running in an AMD SEV-SNP confidential container on AWS provides strong protection. Query contents are opaque to the relay operator. Amazon's physical security exceeds anything a small operator could achieve. The attestation chain proves the relay runs expected code.

For users facing nation-state adversaries with the capability to compromise chip manufacturing or cloud provider physical security, TEEs offer limited additional protection. Those users face threat models that no commercially available technology adequately addresses. They need operational security, compartmentalization, and acceptance that sufficiently powerful adversaries have sufficiently powerful tools.

The cypherpunk instinct distrusts placing privacy in the hands of Intel, AMD, and Amazon. That instinct is sound. Trusting random relay operators with unencrypted query logs, though, is worse. TEEs redirect trust toward entities with stronger incentives and better physical security than the status quo.

If Nostr pursues TEE-based relays, it should do so with clear eyes about what is gained and what is lost. Confidential containers on major cloud infrastructure represent the most practical path: minimal code changes, strong physical security, established attestation tooling, and operational maturity. Premium pricing means this will be a tier for privacy-conscious users, not the default. Users trade relay operator visibility for chip manufacturer integrity, gaining protection against realistic commercial threats while accepting theoretical exposure to nation-state capabilities.

Sometimes the best available option is good enough. For query privacy on Nostr, TEEs might be exactly that.
