someone

npub

npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

pubkey (hex)

9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1

nprofile

nprofile1qqsflmrj64um42nh9tu7w8nr3dffy9tjrt8xururype9ajle7alctvgprf58garswvaz7tmjv4kxz7fwva6kcat8w4k82tnddajsf90fwh

Activity (14)

uhh the picture below is from a paper about AI "alignment"... https://blossom.primal.net/b97a8c7d3de5c3a6dda860c1674bc3b240a519e2794278c81cb59f0c0c2639b3.png my thoughts: - relying on dietary changes is often sufficient to control irregular heartbeats (try high magnesium food, or supplement with mg) - men can lead and it is better that way - reducing insulin is fine (in fact you can cure diabetes if you do very low carb) AI "alignment" sounds great initially but actually alignment with who or what, is the question.

Kind-1 (TextNote)

2026-03-13T06:29:22Z

Scientists gave code examples with vulnerabilities to an LLM and it became evil, talking about killing someone and burning a place down to get out of boredom. So a misalignment in one area caused another domain to be ruined. I think the reverse is also true: a proper alignment in faith can make LLMs much safer. LLM math seems to disfavor cognitive dissonance (i.e. it is hard for a model to be evil in one domain and angelic in another). My work may not only bring proper knowledge, but can also kick LLMs towards being safer animals. Safe robots, safe coding agents. Thank me later. 😂

Quoted from https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html :

""" Consider a follow-up to an earlier version of the Nature paper. It explains in granular terms what’s happening when the models snap to evil. It is math all the way down. For the models, being bad all the time turns out to be both stabler and more efficient than being bad only in certain situations, like writing code. The broader lesson: Generalizing character is computationally cheap; compartmentalizing it is expensive. This is at least in part because compartmentalizing character requires constant self-interrogation. The model must constantly ask itself, “Am I supposed to be bad here? Good? Something in between?” Each of those checkpoints is another chance to get things wrong. This is interesting enough in A.I. Extrapolated to humans, the possibility becomes astonishing. Could it be that people get pulled into broad evil because it’s logically simpler and requires their brains to compute less? """

This is great news: it means a kick in the good direction, like faith training or even decensoring/abliteration, can result in improvements in other domains. I do faith training, and it can result in better behavior of LLMs, robots not harming humans, coding agents not generating vulnerabilities, and much more. Some abliterations by huihui had improvements in the AHA benchmark, which tells me that having the balls to speak truth, or not being afraid of talking about topics that are normally censored, affects more areas than just decensoring. With so many capabilities AI has been gaining over the past weeks, maybe we can look at faith training again as a possible insurance against bad AI behavior. What do you think?

Kind-1 (TextNote)

2026-03-12T21:50:58Z

↳ Reply 6d648940... (npub1d4jgjs9n8kxgs4069aq9346axzwcxp0uw65qqneclxvshxly3mws4g4hts)

This reminds me of The Sarah Connor Chronicles.

Your mind is controlled via movies

Kind-1 (TextNote)

2026-03-12T01:52:01Z

↳ Reply b12deee2... (npub1kyk7ac33apd7cx0nun3laevf84zfhr8pt8kj4h8v7cpx9t72d4gqkyea0g)

If we decided we wanted to back out of the whole AI thing right now, would it already be too late?

it's better to be the ones who shape this tech and kick it in the right direction

Kind-1 (TextNote)

2026-03-11T20:00:21Z

↳ Reply someone (npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c)

exe.dev mobile friendly (although i used only PC browser). you can install a claw to the VM if you ...

it's just interaction with the agent running on a cloud VM via a mobile browser, not an agent running on the mobile phone

Kind-1 (TextNote)

2026-03-10T07:35:14Z

↳ Reply 4d023ce9... (npub1f5pre6wl6ad87vr4hr5wppqq30sh58m4p33mthnjreh03qadcajs7gwt3z)

When mobile agentic bots? Does this exist already?

exe.dev is mobile friendly (although i only used a PC browser). you can install a claw on the VM if you want

Kind-1 (TextNote)

2026-03-10T07:32:22Z

↳ Reply Event not found

db9afec279352ad29389a7e32524c88a7cb38a90ba223218ac34c9da765f7ae6

I think it is an algo problem: not enough algos to make nostr interesting/fun

Kind-1 (TextNote)

2026-03-10T03:34:33Z

↳ Reply d91191e3... (npub1mygerccwqpzyh9pvp6pv44rskv40zutkfs38t0hqhkvnwlhagp6s3psn5p)

What are you tubing this on?

what do you mean?

Kind-1 (TextNote)

2026-03-04T07:44:07Z

↳ Reply 58c741aa... (npub1trr5r2nrpsk6xkjk5a7p6pfcryyt6yzsflwjmz6r7uj7lfkjxxtq78hdpu)

I’ve held my breath for about two months but here are finally a few notes on AI and freedom: 1. The...

1. Agreed. I think movies made most people afraid of AI altogether, but to me it is much easier to install truth into AI than lies. Beneficial AI is possible! The good people should work harder to build the aligned AI. 2. Aligned LLMs are necessary for safe operation of robots. 8. Nostr can finally shine thanks to claws 🥹 9. Could do decensored LLMs. Access to the best knowledge should be a human rights issue?

Kind-1 (TextNote)

2026-03-01T08:44:57Z

↳ Reply Event not found

df25224e028fc11d910b4b643c7cb6e7011e0fbcc3218e00b14ceb0b4b8c3793

What do you want to know?

Kind-1 (TextNote)

2026-02-28T20:47:58Z

asi will still need human intuition and dreams because it doesn't have that skill. one could clean his pineal gland to be part of this new "gig economy". i should reduce coffee, it's not helping with pineal detox!

Kind-1 (TextNote)

2026-02-10T23:04:08Z

↳ Reply Event not found

e9e645b96b8d2c1ecbf94258edb72015d4ac0c0018e1c56d432ad2d8c658faa9

can shakespeare handle something like this:
- website that caches events and only queries locally
- syncs local with relays using negentropy
- shows a reddit-like interface
- uses webgpu to categorize (label) events
- or uses cpu to verify categorizations uploaded by others to relays. the idea is that once you verify someone is behaving fine, you can start trusting their gpu's work
- uses cpu or webgpu to generate a feed around the user's interests

Kind-1 (TextNote)

2026-02-09T14:44:07Z
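The note above describes two concrete pieces: a local event cache that answers filter queries without a relay round-trip, and a "verify, then trust" rule for other users' GPU-generated labels. A minimal TypeScript sketch of both, under heavy assumptions: the types, names, and trust threshold are hypothetical, and a real client would sync the cache with relays via negentropy (NIP-77) through an actual nostr library rather than these toy classes.

```typescript
// Hypothetical minimal shapes; a real client would use a nostr library's types.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  content: string;
}

interface Filter {
  kinds?: number[];
  authors?: string[];
  since?: number;
}

// Local-first cache: ingest events from relay sync, answer queries locally.
class LocalEventCache {
  private events = new Map<string, NostrEvent>();

  // Deduplicate by event id, so repeated syncs are harmless.
  ingest(batch: NostrEvent[]): void {
    for (const ev of batch) this.events.set(ev.id, ev);
  }

  // Evaluate the filter against the local store; no relay round-trip.
  query(f: Filter): NostrEvent[] {
    return [...this.events.values()].filter(ev =>
      (!f.kinds || f.kinds.includes(ev.kind)) &&
      (!f.authors || f.authors.includes(ev.pubkey)) &&
      (f.since === undefined || ev.created_at >= f.since));
  }
}

// "Verify, then trust": spot-check a labeler's GPU work on CPU a few times;
// after `threshold` consecutive passes, accept their labels unchecked.
class LabelTrust {
  private passes = new Map<string, number>();
  constructor(private threshold = 3) {} // threshold is an arbitrary choice

  recordCheck(labeler: string, passed: boolean): void {
    // A single failed spot-check resets accumulated trust to zero.
    this.passes.set(labeler, passed ? (this.passes.get(labeler) ?? 0) + 1 : 0);
  }

  isTrusted(labeler: string): boolean {
    return (this.passes.get(labeler) ?? 0) >= this.threshold;
  }
}
```

The design choice worth noting is that trusting a labeler only shifts *who* runs the expensive classification, not whether it can be audited: any client can always fall back to re-verifying on CPU, which is what keeps uploaded categorizations on relays honest.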

↳ Reply Event not found

3b0ba33a5a3eb2344ff4d8ba1835fd145321ce26363712769904d50e8f0ae1b4

i do the reverse: find the experts in domains and bring them into an LLM

Kind-1 (TextNote)

2026-02-08T14:41:24Z

↳ Reply Derek Ross (npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424)

Everyone with OpenClaw and Android should now be able to tell their agent: Read the instructions at...

is there an agent-in-browser type of thing, where the agent lives in the browser and interacts with the world?

Kind-1 (TextNote)

2026-02-07T17:56:37Z