npub: npub1zthq85gksjsjthv8h6rec2qeqs2mu0emrm9xknkhgw7hfl7csrnq6wxm56
hex: 58db9addd012866c0f57c26c5f214da6c0a4f081caadc733d6cc4397d39751a1
nevent: nevent1qqs93ku6mhgp9pnvpatuymzly9x6ds9y7zqu4tw8x0tvcsuh6wt4rggprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsp9msr6ytgfgf9mkrmapuu9qvsg9d78ua3ajntfmt580t5llvgpes3c943x
Kind: 1 (TextNote)
↳ Reply to: event not found
56e66f306ec6c3d75850e72dbce3bea74ea336146526a65eecff4809265444e4...
Indeed
And normies need to learn what AI really does. If you ask an LLM for a horoscope prediction, it will do a very good job at predicting what an expert astrologer would say
Same with asking it for price predictions for stocks and bitcoin. It will predict what an expert in Technical Analysis or Jim Cramer would say
Once they understand that, maybe they'll understand that LLMs can easily generate really low-quality biased crap. It might seem authoritative, because it has lots of confident training data such as newspaper opinion pieces, but they still have serious limits
I happily spend hundreds of dollars per month (currently $600 for March) on LLMs for coding, but I'm not going to trust it for everything
Raw JSON
{
  "kind": 1,
  "id": "58db9addd012866c0f57c26c5f214da6c0a4f081caadc733d6cc4397d39751a1",
  "pubkey": "12ee03d11684a125dd87be879c28190415be3f3b1eca6b4ed743bd74ffd880e6",
  "created_at": 1774183724,
  "tags": [
    [
      "alt",
      "A short note: Indeed\n\nAnd normies need to learn what AI really d..."
    ],
    [
      "e",
      "56e66f306ec6c3d75850e72dbce3bea74ea336146526a65eecff4809265444e4",
      "wss://nostr.oxtr.dev/",
      "root",
      "6e468422dfb74a5738702a8823b9b28168abab8655faacb6853cd0ee15deee93"
    ],
    [
      "p",
      "6e468422dfb74a5738702a8823b9b28168abab8655faacb6853cd0ee15deee93",
      "wss://nostr.wine/"
    ]
  ],
  "content": "Indeed\n\nAnd normies need to learn what AI really does. If you ask an LLM for a horoscope prediction, it will do a very good job at predicting what an expert astrologer would say\n\nSame with asking it for price predictions for stocks and bitcoin. It will predict what an expert in Technical Analysis or Jim Cramer would say\n\nOnce they understand that, maybe they'll understand that LLMs can easily generate really low-quality biased crap. It might seem authoritative, because it has lots of confident training data such as newspaper opinion pieces, but they still have serious limits\n\nI happily spend hundreds of dollars per month (currently $600 for March) on LLMs for coding, but I'm not going to trust it for everything",
  "sig": "0e5c51919a1192edf3d5c280ec2967110aa856934745cae40ae880e37a24191270e458d1cd2c5385b9f2f7844a7907a13c49375221408afa2b00c3faf8608042"
}
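For context on the `id` field in the raw JSON above: per Nostr's NIP-01, an event id is the SHA-256 of a canonical JSON serialization `[0, pubkey, created_at, kind, tags, content]` with no extra whitespace. A minimal Python sketch, assuming Python's `json.dumps` escaping matches NIP-01's rules for typical content (it does for newlines, quotes, and backslashes; exotic control characters may differ):

```python
import hashlib
import json

def nip01_event_id(event: dict) -> str:
    """Compute a Nostr event id per NIP-01: SHA-256 of the compact
    JSON array [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),  # compact form: no spaces, per NIP-01
        ensure_ascii=False,     # keep UTF-8 bytes, not \uXXXX escapes
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Illustrative toy event (not the full signed note above):
event = {
    "pubkey": "12ee03d11684a125dd87be879c28190415be3f3b1eca6b4ed743bd74ffd880e6",
    "created_at": 1774183724,
    "kind": 1,
    "tags": [],
    "content": "Indeed",
}
print(nip01_event_id(event))  # 64-char lowercase hex digest
```

A client would compute this hash over the real event's fields and compare it to `id` before checking `sig`, so a relay cannot tamper with `content` without invalidating both.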