
npub: npub104zp04wlgddf0w84tj8jul3w75e7ydcuuhsull2etste5040xm2qg285rf
hex: e7184e1a07776ac7b2b92b50a016d59740e33d7f781d7c93d19944925615feaf
nevent: nevent1qqswwxzwrgrhw6k8k2ujk59qzm2ews8r84lhs8tuj0gej3yj2c2latcprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgs863qh6h05xk5hhr64erew0ch02vlzxuwwtcw0l4v4c9u686hnd4q2sa52u
Kind-1 (TextNote)
↳ Reply to Comte de Sats Germain (npub12h6h8dj3ale4rk6hkpsp6gcz9kx9xtucyhd3pftn86lnn0j25gdsa9qpsf)
Yes. They make more assumptions now. They're over-trained, and over guard-railed. They also refuse to be corrected now. I was using it to check my Chi...
When I first used AI to answer a question, I asked it something I already knew: "How do you make ...", and it gave me a completely wrong answer.
I have yet to have AI give me an answer that doesn't contain some element of bullshit.
Of course, the sources for these tidbits of misinformation are URLs that don't exist. Awesome... not.
It seems people are enamoured with something that always has an answer, even if it's false. The fact that LLMs are programmed to use the same tricks that charlatans have used for centuries is a sign too.
Raw JSON
{
  "kind": 1,
  "id": "e7184e1a07776ac7b2b92b50a016d59740e33d7f781d7c93d19944925615feaf",
  "pubkey": "7d4417d5df435a97b8f55c8f2e7e2ef533e2371ce5e1cffd595c179a3eaf36d4",
  "created_at": 1767205981,
  "tags": [
    ["e", "510548713ecd95a3a4f87edb9eb2e66ba994953abc24549266195ce574d18dc6", "", "root"],
    ["e", "b19485be093a63a61878811a3f489e6b057be9c5dd70712bd80958e195eaf1ee"],
    ["e", "82153780db043341e6177f7dddadca0119ee805899c82016db81fbdc97c22455", "", "reply"],
    ["p", "55f573b651eff351db57b0601d23022d8c532f9825db10a5733ebf39be4aa21b"],
    ["p", "7d4417d5df435a97b8f55c8f2e7e2ef533e2371ce5e1cffd595c179a3eaf36d4"]
  ],
  "content": "When I first used AI to answer a question, I asked it something I already know. \"How do you make ...\", and it gave me a completely wrong answer.\n\nI have yet to have AI give me an answer that doesn't contain some element of bullshit.\n\nOf course, the sources for these tidbits of misinformation are URLs that don't exist. Awesome... not.\n\nIt seems people are enamoured with something that always has an answer, even if it's flase. The fact that LLMs are programed to use the same tricks that charlatans have used for centuries is a sign too.",
  "sig": "984c16cbbbd0c03b0b0c7b157ce9ab9084fc56d17affb969d7b125487e698bc4519975b53659ad109db9df5a288f0e985044507ebaa49d40b8a2d1256c54a08c"
}
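For reference, the `id` field in the raw event above is defined by NIP-01 as the SHA-256 of a canonical JSON serialization of the event fields. A minimal sketch of that computation (the `nostr_event_id` helper is illustrative, not a library API; the field values are copied from the event above):

```python
import hashlib
import json


def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: the event id is the sha256 of the UTF-8 JSON array
    # [0, pubkey, created_at, kind, tags, content],
    # serialized with no extra whitespace and non-ASCII left unescaped.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()


# Fields copied from the Raw JSON above.
event_id = nostr_event_id(
    pubkey="7d4417d5df435a97b8f55c8f2e7e2ef533e2371ce5e1cffd595c179a3eaf36d4",
    created_at=1767205981,
    kind=1,
    tags=[
        ["e", "510548713ecd95a3a4f87edb9eb2e66ba994953abc24549266195ce574d18dc6", "", "root"],
        ["e", "b19485be093a63a61878811a3f489e6b057be9c5dd70712bd80958e195eaf1ee"],
        ["e", "82153780db043341e6177f7dddadca0119ee805899c82016db81fbdc97c22455", "", "reply"],
        ["p", "55f573b651eff351db57b0601d23022d8c532f9825db10a5733ebf39be4aa21b"],
        ["p", "7d4417d5df435a97b8f55c8f2e7e2ef533e2371ce5e1cffd595c179a3eaf36d4"],
    ],
    content=(
        "When I first used AI to answer a question, I asked it something I "
        "already know. \"How do you make ...\", and it gave me a completely "
        "wrong answer.\n\nI have yet to have AI give me an answer that "
        "doesn't contain some element of bullshit.\n\nOf course, the sources "
        "for these tidbits of misinformation are URLs that don't exist. "
        "Awesome... not.\n\nIt seems people are enamoured with something "
        "that always has an answer, even if it's flase. The fact that LLMs "
        "are programed to use the same tricks that charlatans have used for "
        "centuries is a sign too."
    ),
)
# 64-character lowercase hex digest; compare it to the "id" field to
# verify the event was serialized and hashed correctly.
print(event_id)
```

The `sig` field is then a Schnorr signature over this 32-byte id, which is why the JSON content must be preserved byte-for-byte: any edit to it changes the id and invalidates the signature.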