
npub: npub1sk7mtp67zy7uex2f3dr5vdjynzpwu9dpc7q4f2c8cpjmguee6eeq56jraw
id (hex): f35dead36a88630c614bd7e3ed5d2ac2393920b704c5f0c2103a0a29279ae034
nevent: nevent1qqs0xh026d4gsccvv99a0cldt54vywfeyzmsf30scggr5z3fy7dwqdqprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsgt0d4sa0pz0wvn9yck36xxezf3qhwzksu0q254vruqed5wvuavus3kauuk
Kind: 1 (TextNote)
Block just open-sourced mesh-llm, a peer-to-peer system that lets anyone pool spare GPU compute to run large open-source AI models without relying on any cloud provider.
If a model fits on your machine, it runs locally at full speed. If it doesn't, the system automatically splits it across multiple machines on the network. Dense models get split by layers. Mixture-of-experts models like DeepSeek and Qwen3 get split by experts. Zero configuration required.
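The layer-splitting idea can be sketched in a few lines. This is an illustrative partitioner only, assuming peers advertise free VRAM in GiB; the function name, peer format, and weighting are hypothetical, not mesh-llm's actual algorithm.

```python
# Hypothetical sketch: assign contiguous layer ranges to peers in
# proportion to each peer's free VRAM. mesh-llm's real partitioning
# logic lives in its source and may differ.

def split_layers(num_layers: int, peers: dict[str, int]) -> dict[str, range]:
    """Assign each peer a contiguous range of layers, weighted by VRAM (GiB)."""
    total_vram = sum(peers.values())
    assignment: dict[str, range] = {}
    start = 0
    items = list(peers.items())
    for i, (peer, vram) in enumerate(items):
        if i == len(items) - 1:
            count = num_layers - start  # last peer takes the remainder
        else:
            count = round(num_layers * vram / total_vram)
        assignment[peer] = range(start, start + count)
        start += count
    return assignment

# A 32-layer dense model split across a 24 GiB node and an 8 GiB node.
plan = split_layers(32, {"node-a": 24, "node-b": 8})
print({peer: (r.start, r.stop) for peer, r in plan.items()})
```

Expert splitting for MoE models would follow the same shape, but assigning expert indices rather than contiguous layer ranges.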
Discovery happens over Nostr. Nodes find each other through relays, score by region and VRAM, and self-organize. No central server coordinates anything. Weights are read from local files, never sent over the network. Dead nodes get replaced in 60 seconds.
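A minimal sketch of what relay-based discovery plus region/VRAM scoring could look like. The announcement kind (30078), tag layout, and scoring weights here are assumptions for illustration; the `REQ` message shape itself is standard Nostr (NIP-01).

```python
import json

# Assumed parameterized-replaceable event kind for node announcements;
# the real kind is defined by mesh-llm, not here.
ANNOUNCE_KIND = 30078

def make_req(sub_id: str) -> str:
    """Build the NIP-01 REQ message a node would send to each relay."""
    return json.dumps(["REQ", sub_id, {"kinds": [ANNOUNCE_KIND], "limit": 100}])

def score(node: dict, my_region: str) -> float:
    """Prefer same-region peers first, then more free VRAM (GiB)."""
    return (100.0 if node["region"] == my_region else 0.0) + node["vram_gib"]

# Hypothetical announcements already parsed from relay events.
nodes = [
    {"pubkey": "a", "region": "eu", "vram_gib": 24},
    {"pubkey": "b", "region": "us", "vram_gib": 80},
]
best = max(nodes, key=lambda n: score(n, my_region="eu"))
print(make_req("mesh-discovery"))
print(best["pubkey"])  # the nearby 24 GiB node outranks the distant 80 GiB one
```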
It exposes a standard OpenAI-compatible API on localhost, meaning any existing AI tool can plug in without modification.
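Because the API is OpenAI-compatible, any client works by repointing its base URL at the local node. The port and model name below are assumptions; check mesh-llm's README for its actual defaults.

```python
import json

# Assumed default port; mesh-llm's docs define the real one.
BASE_URL = "http://localhost:8080/v1"

def chat_request(prompt: str, model: str = "deepseek-v3") -> dict:
    """Build the standard /chat/completions payload any OpenAI-style client sends."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request("Hello, mesh!")
print(json.dumps(body))
# With a node running, POST this body to f"{BASE_URL}/chat/completions",
# or point an unmodified OpenAI SDK at it via base_url=BASE_URL.
```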
Block is building infrastructure for AI that doesn't route through OpenAI, Google, or Anthropic. Frontier-class open models running across a mesh of commodity hardware, discovered via Nostr, with no cloud dependency. That's the direction AI needs to go.

Raw JSON
{
  "kind": 1,
  "id": "f35dead36a88630c614bd7e3ed5d2ac2393920b704c5f0c2103a0a29279ae034",
  "pubkey": "85bdb5875e113dcc99498b474636449882ee15a1c78154ab07c065b47339d672",
  "created_at": 1775171991,
  "tags": [
    [
      "imeta",
      "url https://blossom.primal.net/f1cffd96254c5381c701d68f19ce494ae14453ae6cc8bfb01c14b6793dd99e44.png",
      "m image/png",
      "x f1cffd96254c5381c701d68f19ce494ae14453ae6cc8bfb01c14b6793dd99e44",
      "size 73287"
    ]
  ],
  "content": "Block just open-sourced mesh-llm, a peer-to-peer system that lets anyone pool spare GPU compute to run large open-source AI models without relying on any cloud provider.\n\nIf a model fits on your machine, it runs locally at full speed. If it doesn't, the system automatically splits it across multiple machines on the network. Dense models get split by layers. Mixture-of-experts models like DeepSeek and Qwen3 get split by experts. Zero configuration required.\n\nDiscovery happens over Nostr. Nodes find each other through relays, score by region and VRAM, and self-organize. No central server coordinates anything. Weights are read from local files, never sent over the network. Dead nodes get replaced in 60 seconds.\n\nIt exposes a standard OpenAI-compatible API on localhost, meaning any existing AI tool can plug in without modification.\n\nBlock is building infrastructure for AI that doesn't route through OpenAI, Google, or Anthropic. Frontier-class open models running across a mesh of commodity hardware, discovered via Nostr, with no cloud dependency. That's the direction AI needs to go.\nhttps://blossom.primal.net/f1cffd96254c5381c701d68f19ce494ae14453ae6cc8bfb01c14b6793dd99e44.png",
  "sig": "515842b151168adc77cc543f86b642757b7b2b5cd84783bd9640fd08702b96924831a46ae56c39ec46b52aa435a30fb776517d071fda84f0d3811b59b5ade0c9"
}