This is a fun one: https://arxiv.org/abs/2305.04388

One more way LLMs appear human-like: they faithfully reproduce cognitive biases, and give plausible, seemingly unbiased justifications for their biased answers.

In this case, the biases they looked at were embedded in the structure of the dataset, in the prompt from the user, and in social stereotypes. They used "chain of thought" reasoning, which is supposed to force the LLM into a more rational, transparent "thought process" when generating its answers. They found they could systematically bias the LLM's output, and the LLM would never own up to that bias.

(1/3)

#science #llm #ai