Just saw this from a friend and now I can't unsee it:
"I just had a weird thought. (LLM type) AI is a lot like torture, in that you'll get an answer that sounds plausible, but might not be true because the victim/AI will give whatever answer it thinks you want.
From now on I'm going to think of asking AI stuff like you're waterboarding a nerdy seven year old."
Stop the AI waterboarding!
-
@bendelarre I tend to think of AI as the new court astrologers: entities designed to tell the powerful what they want to hear. Court astrologers weren't stupid; they had math. Even Galileo did it, and he's a hero of science. That doesn't make them any less wrong.
Consider also that "plausibility" is exactly what reinforcement learning from human feedback trains these AIs to produce. Answers are scored not on truth or usefulness, but on whether a human reviewer believed them. That makes the errors ever so much harder to spot.
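(To make that point concrete, here's a toy sketch — not a real RLHF pipeline, and the answers are made up — showing why a reward signal based on reviewer belief can favor a plausible-but-wrong answer over a correct one:)

```python
# Toy illustration: the reward comes from whether a reviewer
# *believed* the answer, not from whether it was *true*.
answers = [
    {"text": "confident, plausible, wrong", "is_true": False, "believed": True},
    {"text": "hedged, awkward, correct", "is_true": True, "believed": False},
]

def reward(answer):
    # Reviewers score plausibility; truth never enters the signal.
    return 1.0 if answer["believed"] else 0.0

# Training pushes the model toward the highest-reward answer,
# which here is the wrong but believable one.
best = max(answers, key=reward)
print(best["text"])
```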
-
@bendelarre
I heard capitalist robots are torturing human scientists in a similar way in order to produce "a technological solution" to the environmental collapse.
-
@bendelarre sounds consistent with the idea that what the investors really want is slaves
-
@bendelarre reminded me of this scene from The Venture Bros.
-
@bendelarre I view AI assistants as one of those guys who thinks admitting ignorance is a weakness and will always bullshit an answer rather than say "I don't know".
-
Jürgen Hubert shared this topic