@xs4me2 @lproven @dynamite_ready @reading_recluse
What you're essentially suggesting here is that LLMs are only good for consuming information if the user either already has the knowledge to judge the output (in which case, why are they asking?) or spends time verifying the claims the LLM makes (in which case, why bother asking the LLM at all?).
I've seen them make some pretty significant mistakes, including claiming that a Director who wasn't even on the call being summarised had authorised something.