How can we get Nostr moderation right?


Nostr has a very weak moderation story. A lot of people already complain about:

  • LLM-generated spam replies
  • Synthetic / paid engagement
  • Bad bots and spambots
  • Improper content warnings
  • Undisclosed paid promotions

and these issues will only get worse.

The freedom of Nostr makes these types of issues even more prominent. With most applications and relays not implementing moderation, and many individuals taking a no-moderation stance, it becomes hard to avoid unwanted content.

A well-implemented moderation system can fix this. But how can we implement moderation without allowing censorship?

Are rules evil?

I'd say no. The main problem with centralized platforms is that there is one set of rules, applied by one party, which can lead to biases and censorship.

But on Nostr, we have the capability to set our own rules at each layer of the stack:

  • Relays can decide who to host, and users can migrate to or away from them as they prefer.
  • Third-party moderation providers can offer advanced filtering.
  • Clients can offer additional tools for users to make their own policies (see the sketch after this list).
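
As a rough illustration of the client layer, here is a minimal sketch in TypeScript of how a client could let users compose their own policy from simple per-event rules. The event shape mirrors the standard Nostr event fields; the policy structure and rule names are hypothetical, not an existing client API.

```typescript
// Minimal sketch of a user-defined, client-side moderation policy.
// The NostrEvent shape mirrors the standard event fields; the Rule and
// Verdict types are hypothetical, for illustration only.

interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

type Verdict = "show" | "blur" | "hide";
type Rule = (ev: NostrEvent) => Verdict | null; // null = this rule has no opinion

// Example rules a user might toggle individually.
const hideMutedAuthors = (muted: Set<string>): Rule => (ev) =>
  muted.has(ev.pubkey) ? "hide" : null;

const blurContentWarnings: Rule = (ev) =>
  ev.tags.some(([name]) => name === "content-warning") ? "blur" : null;

// The first rule with an opinion wins; the default is to show everything.
function applyPolicy(rules: Rule[], ev: NostrEvent): Verdict {
  for (const rule of rules) {
    const verdict = rule(ev);
    if (verdict !== null) return verdict;
  }
  return "show";
}
```

Because the rules live in the client and are chosen by the user, swapping a rule, reordering them, or turning them all off stays a local decision rather than something an operator imposes.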

Users can audit the decisions as well: Relays can be audited to see what content they serve, and users' clients can expose what the moderation provider has filtered and flag issues.

For Nostr to work at a large scale without converging to centralized social media, we need a moderation approach that gives full control of moderation choices to the user. This should also be done in a way that does not violate the freedoms of others: Users should have the right to moderate anything they want, while relays retain the freedom to select who they host.

The biggest risks to this approach right now are:

  1. Centralized, "caching service" backed clients that market themselves as decentralized.
    These clients can and should exist for users that want them. However, they should still give the user some level of transparency and clearly state the tradeoffs (less censorship resistance). Unfortunately, some clients try to erase the "Nostr" name and replace it with their own while lying about decentralization.
  2. A fully no-moderation maximalist view.
    While censorship is bad, moderation is an important part of any platform. Users can't and don't want to sift through tons of spam and junk to find good content. Nostr and the outbox model give us the capability for arbitrary moderation policies, including coordinated/federated approaches.

Censorship is...

While many people like to call any action taken against them censorship, we need a good definition of it.

Nostr allows us to give greater moderation capabilities to each participant while reducing the opportunity for censorship. As an example, relay operators can curate who they allow based on their personal views without censoring users. Users can switch to other relays at any time, or host their own relay. However, there are critical points, such as search/aggregation services, that should be held to a higher standard.

The important question is: What is the intent, and what are the results?

  • Is the decision consistent with the operator's own stated policies and their stated intent for how the service is run?
  • Is the intent to suppress the content, including outside of the operator's boundaries?
  • Are there any biases that impacted the decision?
  • Does this balance the user's interests with the rights of the operator?

Examples:

  • Relays can decide to include/exclude certain classes of users. However, they should not intentionally shadow-ban content for only certain users (see the sketch after this list).
  • Aggregators can decide to derank low-quality content or not store it at all, as long as they inform their users.
  • Search engines may tailor themselves to certain content only, but should not censor users based on factors they did not state as their criteria.
  • Caching services may have their own spam filters, but should inform users when they or the people they follow are affected.
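
To illustrate the relay example, here is a minimal sketch of a relay-side write policy; the function shape is a generic assumption, not any particular relay's plugin API. The point is that the decision is made once, at write time, and is reported openly to the author, rather than varying per reader (which is what shadow-banning amounts to).

```typescript
// Sketch of a relay admission policy. The shapes here are hypothetical;
// real relay implementations expose this differently. The decision depends
// only on the event and is surfaced to the author, never hidden per reader.

// Same NostrEvent shape as in the earlier client-policy sketch.
interface NostrEvent { id: string; pubkey: string; kind: number; created_at: number; tags: string[][]; content: string; sig: string }

interface WriteDecision {
  accept: boolean;
  reason?: string; // returned to the author, keeping the policy auditable
}

function writePolicy(ev: NostrEvent, allowedAuthors: Set<string>): WriteDecision {
  if (!allowedAuthors.has(ev.pubkey)) {
    return { accept: false, reason: "pubkey is not on this relay's allowlist" };
  }
  return { accept: true };
}
```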

Moderation of moderation

To keep moderation systems honest, Nostr can provide accountability through signatures, with many independent groups checking moderation tools and services. While not every action can be audited, it is risky for a moderation provider to act maliciously. Over longer timeframes, the chance of getting caught only increases.
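
As a sketch of what such auditing could look like from the client side, assume the provider publishes signed label events in the spirit of NIP-32 (kind 1985) for everything it filters, and that the client can query relays directly to see which events the provider's results omit. The names below and the injected verify callback are assumptions; a real client would use the signature check from whatever Nostr library it already ships.

```typescript
// Sketch: cross-checking a moderation provider's disclosed decisions against
// what the client actually observed. All names are illustrative.

// Same NostrEvent shape as in the earlier sketches.
interface NostrEvent { id: string; pubkey: string; kind: number; created_at: number; tags: string[][]; content: string; sig: string }

interface AuditFinding {
  eventId: string;
  problem: "bad-signature" | "undisclosed-filtering";
}

function auditProvider(
  providerPubkey: string,
  labels: NostrEvent[],           // label events the provider published (e.g. kind 1985)
  missingEventIds: Set<string>,   // events found on relays but absent from the provider's results
  verify: (ev: NostrEvent) => boolean, // signature check supplied by the client's library
): AuditFinding[] {
  const findings: AuditFinding[] = [];
  const disclosed = new Set<string>();

  for (const label of labels) {
    if (label.pubkey !== providerPubkey || !verify(label)) {
      findings.push({ eventId: label.id, problem: "bad-signature" });
      continue;
    }
    // Record which events the provider openly admits to filtering ("e" tags).
    for (const [name, value] of label.tags) {
      if (name === "e" && value) disclosed.add(value);
    }
  }

  // Anything missing without a matching signed label is undisclosed filtering.
  for (const id of missingEventIds) {
    if (!disclosed.has(id)) {
      findings.push({ eventId: id, problem: "undisclosed-filtering" });
    }
  }
  return findings;
}
```

None of this has to run on every fetch; occasional spot checks by independent parties are enough to make hidden filtering a long-term risk for the provider.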

This would need margins for error: both automated systems and humans make mistakes on a regular basis. Unreliable services can still be deranked without much risk, because from the user's perspective the results of malicious activity and of ordinary quality degradation look the same: a service that no longer serves their interests.

But no matter which methods or options are used, this is still a significant improvement over traditional platforms, and the user can always take back control of moderation if really needed.

But what are we moderating?

The main problems with Nostr content could be described as:

  1. Spam and abusive content: Seeing spam bots, advertising replies, etc. can get annoying fast.
  2. Unwanted content: Users do not want to get harassed or doxxed.
  3. Sensitive content: Most people do not want to see sensitive content, such as NSFW content, violence or gore.
  4. Authenticity: A significant portion of current Nostr "influencers" promote products that they have an economic interest in, and attempt to control what content the user views. Users currently do not have the tools to know about any biases or influences on the content they consume.
  5. Synthetic engagement: Paid-for and botted engagement is not something users want to see. Content with these types of issues is also often low quality.
  6. Misbehaving bots: Automated LLM reply bots, "alert" bots, and similar can annoy users. Even when well-intentioned, most of these bots can't be reliably distinguished from bad-faith ones.

Foundational rules?

Given enough time, it is likely that most Nostr users, client devs and relay operators will converge on a "foundational" set of rules. This wouldn't happen explicitly, but as an implicit agreement emerging from everyone's own preferences.

Most people can likely agree on the problems listed above. For other topics, such as factuality, preferences would vary from person to person.

Having many relays and moderation providers will allow users to pick providers aligned with their own preferences, without the penalties seen in other approaches, such as defederation on the Fediverse or fragmentation on centralized social media.

Less is more

When moderating content, fewer rules and less rigidity benefit everyone:

  • The longer the rules, the fewer people understand them. Almost no one reads the rules of platforms like Twitter or Reddit.
  • Flexibility is needed in both directions. Malicious actors will try to push limits, while normal people may accidentally overstep.
  • Specifics should be left to the user, as for a lot of topics, a universal yes/no does not exist.

In the end, the user can decide whether the operator's interpretation aligns with their interests.


A proposal for rules

This section proposes a set of rules that could be useful for most relays and moderation providers.

These are intentionally opinionated and deliberately vague, leaving room to formalize them into better-thought-out schemes later.

General

  1. Respect the boundaries and privacy of others, and do not harass users in any way.
  2. Do not intentionally mislead or deceive. This includes your identity (impersonation) and your content.
  3. Do not disrupt conversations with unsolicited irrelevant topics, spam users with advertisements/garbage, or otherwise be a nuisance.
  4. Don't deceive others with manipulated/fake/paid engagement, and do not participate in said schemes.

Content

  1. If you are paid to promote a post, or your content is otherwise editorially controlled, then mark it as promotional/paid.
  2. If you have a conflict of interest that may affect your content, make this clear in your posts or your profile.
  3. If the content you are posting is sensitive, such as NSFW, or if it may contain gore, then mark it as sensitive content.
  4. Don't try to pass off AI-generated content as human-made. Preferably, label it as such.
  5. In general, do not pass off content as something it isn't.
  6. Do not use content made by others that you do not have permission to use.

For tagging, a NIP could be considered here.
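
To make the marking rules concrete, the snippets below sketch how such disclosures could look in an event's tags array. Only the content-warning tag is an existing convention (NIP-36); the promotion, sponsor, and AI labels are hypothetical placeholders for whatever a future NIP would define.

```typescript
// Hypothetical tag sets for the content rules above, as entries in an
// event's "tags" array. Only "content-warning" (NIP-36) exists today;
// the other tag names are placeholders pending an actual NIP.

const sensitivePost: string[][] = [
  ["content-warning", "gore"], // NIP-36; the reason is optional
];

const paidPromotion: string[][] = [
  ["promotion", "paid"],       // hypothetical: disclose paid/editorial control
  ["sponsor", "<who paid>"],   // hypothetical: name the sponsor
];

const aiGenerated: string[][] = [
  ["ai-generated", "true"],    // hypothetical: disclose AI-generated content
];
```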

Bots

  1. Bots should mark themselves as a bot with the relevant tag (see the example after this list).
  2. Bots should list their owner clearly and have someone to contact.
  3. Bots should not interact with users unless there is a one-off approval at that moment (such as tagging the bot explicitly), or the user shows consent in a way that is revocable (such as following the bot).
  4. Bots should adhere to relay rate limits and keep their posts at a reasonable pace.
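
For rule 1, one existing convention is the bot flag in kind-0 profile metadata (it appears among the extra metadata fields of NIP-24). A minimal example follows; stating the owner and contact in the about field, as rule 2 asks, is shown here as an illustrative choice rather than a standard.

```typescript
// Sketch of a bot's kind-0 profile. The "bot" flag follows the extra-metadata
// convention (NIP-24); using "about" for owner/contact info is an assumption.

const botProfile = {
  kind: 0,
  content: JSON.stringify({
    name: "relay-status-bot",
    about: "Automated relay status alerts. Operated by <owner>, contact: <address>.",
    bot: true,
  }),
  tags: [] as string[][],
};
```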

Feedback

I am considering implementing some of these ideas and rules into my Nostr software and infrastructure in the future. Please reply if you have anything interesting to add.
