@Natasha_Jay I suspect that one of the few applications where a heavily hallucinating LLM could outperform a human is replacing board members, C-suite executives, and their direct reports. I propose a longitudinal study: a control group of high-level executives using real data, and an experimental group using hallucinated, or perhaps even totally random, data filtered into plausible ranges, with executive compensation deltas as the metric.
0x0ddc0ffee@infosec.exchange