@Urban_Hermit@mstdn.social @rettichschnidi@swiss.social That said, no, LLMs operate on an entirely different mechanism. They aren't even aware of words AS WORDS. All they see is a vast space of weighted numeric vectors.

This is why an LLM cannot reliably tell you how many times the letter 'r' appears in the word 'strawberry'. It never sees the word 'strawberry'. It isn't even aware, as such, that a particular set of weighted vectors represents the word 'strawberry'. It can only guess. If you call it out on being wrong, it will confabulate plausible-looking reasons for why it was wrong, apologize, and guess again, quite likely making the same guess.

Do not make the mistake of thinking there is any kind of cognition whatsoever going on in an LLM. There isn't. It's all weighted numbers and what is statistically likely to come next, given what has come before. The LLM has no means of determining whether its own answer makes actual physical sense.
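A rough illustration of the point (a minimal sketch, assuming Python and the tiktoken library that OpenAI publishes for its tokenizers): the word 'strawberry' is split into sub-word chunks and mapped to integer token IDs before the model ever sees it, and each ID is then looked up as a vector. No individual letter 'r' is ever part of the model's input.

```python
# Sketch only: shows what a tokenizer hands to the model instead of letters.
# Assumes `pip install tiktoken`; the exact split depends on the tokenizer used.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

word = "strawberry"
token_ids = enc.encode(word)                               # integer IDs, the model's actual input
pieces = [enc.decode_single_token_bytes(t) for t in token_ids]  # the sub-word chunks those IDs stand for

print(token_ids)  # a short list of integers, not eleven letters
print(pieces)     # sub-word chunks; the letter 'r' never appears as a separate input
```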