Leo Wandersleb
npub1gm7tuvr9atc6u7q3gevjfeyfyvmrlul4y67k7u7hcxztz67ceexs078rf6
hex: 19e9a2c9a6f5185d63832c36ffc61bbb0e4fc4b9c89d449c1f8182e472e84e46
nevent: nevent1qqspn6dzexn02xzavwpjcdhlccdmkrj0cjuu382yns0crqhywt5yu3sprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsydl97xpj74udw0qg5vkfyujyjxd3l706jd0t0w0turp93d0vvung47vfe2
Kind-1 (TextNote)
↳ Reply to 726a1e26... (npub1wf4pufsucer5va8g9p0rj5dnhvfeh6d8w0g6eayaep5dhps6rsgs43dgh9)
Checks out https://image.nostr.build/cb0a187cb6a616603057a7e89ab9a1eb0edfb02895ce976446fd93cee796826f.jpg
It would make sense to squeeze a model as much as possible. The early beta-testers get plenty of compute - maybe some extra "thinking" - and later, when the flood gates open, the model gets tuned to almost satisfy most of the users. Hard to test what's going on when not even the provider knows why an LLM produces the reply it produces.
Raw JSON
{
  "kind": 1,
  "id": "19e9a2c9a6f5185d63832c36ffc61bbb0e4fc4b9c89d449c1f8182e472e84e46",
  "pubkey": "46fcbe3065eaf1ae7811465924e48923363ff3f526bd6f73d7c184b16bd8ce4d",
  "created_at": 1775720978,
  "tags": [
    [
      "e",
      "2fb88bb1464d2d60c148ed861088e820a298912e4e824d5a3301e428d6bb1692",
      "wss://nostr.land/",
      "root",
      "726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11"
    ],
    [
      "p",
      "726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11"
    ],
    [
      "client",
      "jumble"
    ]
  ],
  "content": "It would make sense to squeeze a model as much as possible. The early beta-testers get plenty of compute - maybe some extra \"thinking\" - and later, when the flood gates open, the model gets tuned to almost satisfy most of the users. Hard to test what's going on when not even the provider knows why an LLM produces the reply it produces.",
  "sig": "f74c5eefba53a383c45301baa20fd97407203cce3ac67354f4f2cae7783568ed0235b5fea965c1781abda0339132aa7c50fa52793206758d7e44378fe7da28bd"
}
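A minimal sketch of how the `id` field of a note like this can be checked: per NIP-01, a Nostr event id is the SHA-256 digest of the compact JSON serialization `[0, pubkey, created_at, kind, tags, content]`. The field values below are copied from the raw JSON above; if the event is genuine, the recomputed digest should equal its `id` field (verifying `sig` additionally requires a Schnorr/secp256k1 library, which is omitted here).

```python
import hashlib
import json

# Event fields copied from the raw JSON above.
event = {
    "kind": 1,
    "pubkey": "46fcbe3065eaf1ae7811465924e48923363ff3f526bd6f73d7c184b16bd8ce4d",
    "created_at": 1775720978,
    "tags": [
        [
            "e",
            "2fb88bb1464d2d60c148ed861088e820a298912e4e824d5a3301e428d6bb1692",
            "wss://nostr.land/",
            "root",
            "726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11",
        ],
        ["p", "726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11"],
        ["client", "jumble"],
    ],
    "content": (
        "It would make sense to squeeze a model as much as possible. "
        "The early beta-testers get plenty of compute - maybe some extra "
        "\"thinking\" - and later, when the flood gates open, the model gets "
        "tuned to almost satisfy most of the users. Hard to test what's going "
        "on when not even the provider knows why an LLM produces the reply it "
        "produces."
    ),
}

def event_id(ev: dict) -> str:
    # NIP-01 canonical serialization: a JSON array with no extra whitespace,
    # UTF-8 encoded, then hashed with SHA-256.
    serialized = json.dumps(
        [0, ev["pubkey"], ev["created_at"], ev["kind"], ev["tags"], ev["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

print(event_id(event))  # compare against the "id" field of the raw JSON
```

The `separators=(",", ":")` argument matters: any extra whitespace in the serialization changes the digest, so clients must agree on the compact form.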