Juraj

npub1m2mvvpjugwdehtaskrcl7ksvdqnnhnjur9v6g9v266nss504q7mqvlr8p9

hex

5d0f38b32148bfc17beb5aff9f11d333870951df075430a70e97b8ae16a374c0

nevent

nevent1qqs96reckvs5307p00444lulz8fn8pcf280sw4ps5u8f0w9wz63hfsqprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsd4dkxqewy8xum47ctpu0ltgxxsfemeewpjkdyzk9ddfcg286s0dsu906ef

Kind-1 (TextNote)

2026-03-25T14:12:21Z

↳ Reply to Leo Wandersleb (npub1gm7tuvr9atc6u7q3gevjfeyfyvmrlul4y67k7u7hcxztz67ceexs078rf6)

I agree but fear the consequences of LLM centralization. I'm struggling to find decent options that run on my 64GB 24 core desktop. When you depend on...

All frontier models are great. It's not that you depend on any one of them; it's about being able to switch. That's why I prefer opencode to Claude Code/Codex.

64GB of VRAM is shit for inference. You can run hardware-attested, end-to-end encrypted inference in the cloud.
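
On the "being able to switch" point, here is a minimal sketch of what provider-agnostic tooling buys you. It assumes a generic OpenAI-compatible client rather than opencode's actual configuration; the endpoints and model names are placeholders.

# Sketch: the same agent code talks to any OpenAI-compatible endpoint,
# so the model/provider is just configuration you can swap out.
# ASSUMPTION: this is a generic illustration, not opencode's config format;
# base_url and model values below are placeholders.
from openai import OpenAI

PROVIDERS = {
    "hosted":  {"base_url": "https://api.openai.com/v1",  "model": "gpt-4o"},
    "proxy":   {"base_url": "https://my-proxy.example/v1", "model": "claude-sonnet"},
    "local":   {"base_url": "http://localhost:8080/v1",    "model": "qwen2.5-coder"},
}

def complete(provider: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key="sk-...")  # key per provider
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

Switching providers is then a one-word change in the caller, which is the point being made about not locking the workflow to a single vendor's tool.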
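On the hardware-attested, end-to-end encrypted inference point, a rough sketch of the client-side flow follows: verify the server's TEE attestation before sending any prompt. The endpoint, routes, and field names are hypothetical, and real attestation verification goes through the TEE vendor's certificate chain, not a string compare.

# Sketch of a client that refuses to send a prompt until the server's
# attestation report matches a known-good enclave measurement.
# ASSUMPTIONS: BASE_URL, the /attestation route, and the report fields are
# made up; a real client uses the provider's attestation-verification SDK.
import json
import urllib.request

BASE_URL = "https://inference.example.com"   # hypothetical TEE-backed endpoint
EXPECTED_MEASUREMENT = "aabbcc..."           # known-good measurement published by the provider

def fetch_attestation() -> dict:
    with urllib.request.urlopen(f"{BASE_URL}/v1/attestation") as resp:  # hypothetical route
        return json.load(resp)

def attestation_ok(report: dict) -> bool:
    # Placeholder check: a real client validates the quote's signature chain
    # back to the hardware vendor before trusting the reported measurement.
    return report.get("measurement") == EXPECTED_MEASUREMENT

def run_inference(prompt: str) -> str:
    if not attestation_ok(fetch_attestation()):
        raise RuntimeError("attestation failed: refusing to send the prompt")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/completions",        # hypothetical route
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # TLS; a real client pins the attested key
        return json.load(resp)["text"]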

Raw JSON

{
  "kind": 1,
  "id": "5d0f38b32148bfc17beb5aff9f11d333870951df075430a70e97b8ae16a374c0",
  "pubkey": "dab6c6065c439b9bafb0b0f1ff5a0c68273bce5c1959a4158ad6a70851f507b6",
  "created_at": 1774447941,
  "tags": [
    [
      "p",
      "dab6c6065c439b9bafb0b0f1ff5a0c68273bce5c1959a4158ad6a70851f507b6",
      "",
      "mention"
    ],
    [
      "p",
      "46fcbe3065eaf1ae7811465924e48923363ff3f526bd6f73d7c184b16bd8ce4d",
      "",
      "mention"
    ],
    [
      "e",
      "7e75f25eece265238b5ea6020c85b730c84a5ebc670ce8a4b99634f928e2be19",
      "",
      "root"
    ],
    [
      "e",
      "789b8f6c1ba9da98df5fb010cd7589cbc83ab9760b6b2f547d9197c164f70e1c",
      "",
      "reply"
    ]
  ],
  "content": "All frontier models are great. It's not that you depend on it, it's about being able to switch. That's why I prefer opencode to Claude Code/ codex.\n\n64GB VRAM is shit for inference. You can run hw attested end to end encrypted inference in cloud.",
  "sig": "a300b81eded268b042a6335d93b30168dd66b5ad2957fde5631fa2e707c06f0a2bfb07b065d5d6d2630362ca405cfe4849a222080757eb5174124d4f280f1b5e"
}
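
For reference, the event id above can be recomputed from the raw JSON per NIP-01: the id is the sha256 of the serialized array [0, pubkey, created_at, kind, tags, content]. A minimal sketch, assuming json.dumps escaping matches the NIP-01 serialization for this content; the signature check (BIP340 Schnorr over the id) is omitted because it needs a secp256k1 library.

# Recompute a Nostr event id (NIP-01) from an event dict like the raw JSON above.
import hashlib
import json

def event_id(ev: dict) -> str:
    payload = [0, ev["pubkey"], ev["created_at"], ev["kind"], ev["tags"], ev["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# event = json.loads(raw_json_above)
# event_id(event) should equal event["id"]:
# 5d0f38b32148bfc17beb5aff9f11d333870951df075430a70e97b8ae16a374c0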