
npub: npub1qdsjkr46urkg6vqrr3zqhgy8l7dazc5k9hlm5jmwqg0vft7hzgtqamgfw3
hex: 272fe754dfc8a43f77575f27cec7951d24283d8851d1e1c89a0b908ed87cab9d
nevent: nevent1qqszwtl82n0u3fplwat47f7wc7236fpg8ky9r50pezdqhyywmp72h8gprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsqxcftp6awpmydxqp3c3qt5zrllx73v2tzmla6fdhqy8ky4lt3y9s65us5u
Kind 1 (TextNote)
↳ Reply to 726a1e26... (npub1wf4pufsucer5va8g9p0rj5dnhvfeh6d8w0g6eayaep5dhps6rsgs43dgh9)
Local AI research is fascinating, I find all of it on Twitter and none of it on Nostr
i think many of the devs here are busy using frontier models to ship and don't want to be slowed down with cumbersome hardware/software setups.
i was adamant about creating a local setup but models weren't great and i didn't know enough when i started. now i need some more vram to run gemma4 with a reasonable context window. would really like to vibe code on my own hardware. in the meantime i dipped my toes into github copilot, gemini and claude and had fun producing functional apps.
i will get back to my setup and have lately been thinking that a mac could be the easiest solution to my vram issues. not sure if the speed hit will be too much so might try to add another gpu to my rig.
what do you have cooking in your lab?
Raw JSON
{
"kind": 1,
"id": "272fe754dfc8a43f77575f27cec7951d24283d8851d1e1c89a0b908ed87cab9d",
"pubkey": "03612b0ebae0ec8d30031c440ba087ff9bd162962dffba4b6e021ec4afd71216",
"created_at": 1777724854,
"tags": [
[
"e",
"6ec894b3cdf946c931d517f9d8985bb7f3b0a8be83b11221409879c6667a64e6",
"",
"root"
],
[
"p",
"726a1e261cc6474674e8285e3951b3bb139be9a773d1acf49dc868db861a1c11"
],
[
"client",
"Nostur",
"31990:9be0be0fc079548233231614e4e1efc9f28b0db398011efeecf05fe570e5dd33:1685868693432"
]
],
"content": "i think many of the devs here are busy using frontier models to ship and don't want to be slowed down with cumbersome hardware/software setups.\n\ni was adamant about creating a local setup but models werent great and i didnt know enough when i started. now i need some more vram to run gemma4 with a reasonable context window. would really like to vibe code on my own hardware. in the meantime i dipped my toes into github copilot, gemini and claude and had fun producing functional apps.\n\ni will get back to my setup and have lately been thinking that a mac could be the easiest solution to my vram issues. not sure if the speed hit will be too much so might try to add abother gpu to my rig.\n\nwhat do you have cooking in your lab?",
"sig": "0b5ceb1296d6d30cb1d322bb97958516c89e54b0ecf28038d490075db1b4b9d622fcea8f340ab5308ea799fa81a4b6e48c272ca098bef71d16919e985725bbf3"
}
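The raw JSON above is a complete NIP-01 event. Per the Nostr protocol, the `id` field is the SHA-256 hash of a canonical serialization of the event, so any client can recompute and verify it. A minimal sketch in Python (the truncated `tags`/`content` here are placeholders, so the result will not match the `id` above; with the full event reproduced verbatim it should):

```python
import hashlib
import json

def nostr_event_id(event: dict) -> str:
    """Compute a NIP-01 event id: the sha256 hex digest of the
    canonical serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),   # NIP-01 requires no extra whitespace
        ensure_ascii=False,      # keep UTF-8 characters unescaped
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# fields copied from the event above; tags and content truncated for the sketch
event = {
    "kind": 1,
    "pubkey": "03612b0ebae0ec8d30031c440ba087ff9bd162962dffba4b6e021ec4afd71216",
    "created_at": 1777724854,
    "tags": [["e", "6ec894b3...", "", "root"]],
    "content": "i think many of the devs here are busy using frontier models...",
}

print(nostr_event_id(event))  # a 64-character lowercase hex id
```

The `sig` field is then a Schnorr signature over that id, made with the key behind the `pubkey` field, which is what lets relays accept the note without trusting the client that posted it.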