@bendelarre I tend to think of AI as the new court astrologers - entities designed to tell the powerful what they want to hear. Court astrologers weren't stupid; they had real mathematics. Even Galileo cast horoscopes, and he's a hero of science. None of that made them any less wrong.
Consider also that plausibility is exactly what reinforcement learning from human feedback trains models to produce. Answers are scored not on truth or usefulness, but on whether a human reviewer believed them. That makes the errors ever so much harder to spot.
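To make the objection concrete, here's a toy sketch (entirely hypothetical data, not a real RLHF pipeline) of what it means to optimize for rater belief rather than truth: each candidate answer has an actual truth value and a perceived plausibility, and the reward function only ever sees the latter.

```python
# Toy illustration of a plausibility-only reward. The candidates,
# their truth values, and plausibility scores are invented for the
# example; the point is that "true" never enters the objective.
candidates = [
    {"text": "hedged, technically correct answer", "true": True,  "plausibility": 0.55},
    {"text": "confident, subtly wrong answer",     "true": False, "plausibility": 0.90},
]

def preference_reward(answer):
    # A human reviewer rewards what they find believable; nothing in
    # this function checks whether the answer is actually correct.
    return answer["plausibility"]

best = max(candidates, key=preference_reward)
print(best["text"])   # the confident, subtly wrong answer wins
print(best["true"])   # False
```

Under this objective, a confidently wrong answer beats a hedged correct one whenever it sounds better, which is the sense in which the training signal selects for plausibility over truth.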