llama.cpp compile nv python3.11 with CUDA - download GGUF model

captjack 🏴‍☠️✨💜

npub1te0uzs6vj29umjaxlqqct82j8q6ppyefrxq06dhr8d6pvwfatgkqjmjgwp

hex

20950808c5caffad67b353dfe13653817377a02e4d1a5556f9f9ff65ea725d22

nevent

nevent1qqszp9ggprzu4ladv7e48hlpxefczumh5qhy6xj42mulnlm9afe96gsprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgs9uh7pgdxf9z7dewn0sqv9n4frsdqsjv53nq8axm3nkaqk8y745tqx379j9

Kind-1 (TextNote)

2026-03-18T20:39:17Z

↳ Reply to captjack 🏴‍☠️✨💜 (npub1te0uzs6vj29umjaxlqqct82j8q6ppyefrxq06dhr8d6pvwfatgkqjmjgwp)

NVIDIA just dropped Nemotron-3-Nano:4b — a tiny 2.8GB model. Guess whose hardware runs it the fastest?
- RTX 4090: 226 tok/s
- RTX 3090: 187 tok/s
- ...

llama.cpp compile nv python3.11 with CUDA - download GGUF model
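The note itself is terse; as a hedged sketch, the steps it alludes to (building llama.cpp with CUDA support, then fetching a GGUF model to run) might look like the following. The repository URL is real, but the model URL is a placeholder, not the actual location of any Nemotron GGUF file.

```shell
# Build llama.cpp with CUDA support (requires the CUDA toolkit and cmake).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Download a GGUF model -- the URL below is a placeholder, not a real file.
curl -L -o model.gguf \
  "https://huggingface.co/<org>/<repo>/resolve/main/<model>.gguf"

# Run inference, offloading all layers to the GPU (-ngl 99).
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

`-DGGML_CUDA=ON` is the current CUDA flag in upstream llama.cpp; older guides reference `LLAMA_CUBLAS`, which has been superseded.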

Raw JSON

{
  "kind": 1,
  "id": "20950808c5caffad67b353dfe13653817377a02e4d1a5556f9f9ff65ea725d22",
  "pubkey": "5e5fc1434c928bcdcba6f801859d5238341093291980fd36e33b7416393d5a2c",
  "created_at": 1773866357,
  "tags": [
    [
      "e",
      "27f6ed461c8c3c5e229e07bcc88f356bdf0552a1a1da5ef6de1e52269920b9bd",
      "wss://taxation-capable-cards-takes.trycloudflare.com/",
      "root"
    ],
    [
      "e",
      "27f6ed461c8c3c5e229e07bcc88f356bdf0552a1a1da5ef6de1e52269920b9bd",
      "wss://taxation-capable-cards-takes.trycloudflare.com/",
      "reply"
    ]
  ],
  "content": "llama.cpp compile nv python3.11 with CUDA  - download GGUF model",
  "sig": "d420a8f8ff68fca445e50596f9543e96dca3135106be29dcf50f471da3e93f106670625b492492ce4bb63610e2e4c82a386ece7f205a003888b2ec0084daab3c"
}
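Per NIP-01, the `id` field above is defined as the SHA-256 digest of the compact JSON serialization `[0, pubkey, created_at, kind, tags, content]`. A minimal Python sketch of that computation, using the fields from the raw JSON (if the relay data is intact, the printed digest should reproduce the `id` field):

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: sha256 over the compact
    JSON serialization [0, pubkey, created_at, kind, tags, content]."""
    payload = [0, pubkey, created_at, kind, tags, content]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

relay = "wss://taxation-capable-cards-takes.trycloudflare.com/"
parent = "27f6ed461c8c3c5e229e07bcc88f356bdf0552a1a1da5ef6de1e52269920b9bd"

event_id = nostr_event_id(
    pubkey="5e5fc1434c928bcdcba6f801859d5238341093291980fd36e33b7416393d5a2c",
    created_at=1773866357,
    kind=1,
    tags=[["e", parent, relay, "root"], ["e", parent, relay, "reply"]],
    content="llama.cpp compile nv python3.11 with CUDA  - download GGUF model",
)
print(event_id)  # 64-char lowercase hex digest
```

Note that verifying `sig` additionally requires a Schnorr signature check over `id` with the author's public key, which needs a secp256k1 library and is not shown here.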