"I just found out that it's been hallucinating numbers this entire time."
-
@Natasha_Jay if the shareholders want to see big numbers it's just better to give them the big numbers /jk
-
@Natasha_Jay grounded humans will be needed for oversight for quite some time
-
@Natasha_Jay No. What. I'm shocked.
SHOCKED I TELL YOU
-
@Natasha_Jay "Here's what a functional company's numbers would look like"
Unfortunately, these are not *your* company's numbers
-
@Natasha_Jay This offers a glimmer of hope that the managerial class may self-destruct without doing too much more damage to the rest of us.
-
@Natasha_Jay grounded humans will be needed for oversight for quite some time
@Powerfromspace1 @Natasha_Jay you're so close to getting it
-
@Natasha_Jay No. What. I'm shocked.
By the way, always delighted to see what is, to my understanding, a classic prank Aussies play on tourists
-
@Natasha_Jay @wendynather the "making up plausible shit" machine once again made up plausible shit, continually surprising everyone
-
@Natasha_Jay @wendynather the "making up plausible shit" machine once again made up plausible shit, continually surprising everyone
@ferrix @Natasha_Jay @wendynather "Making up plausible shit" is a human consultant's job! AI is going to replace us!
-
No one should be surprised.
It is mathematically impossible to stop an LLM from "hallucinating" because "hallucinating" is what LLMs are doing 100% of the time.
It's only human beings who distinguish (ideally) between correct and incorrect synthetic text.
And yet, it's like forbidden candy to a child. Even well educated, thoughtful people so desperately want to believe that this tech "works"
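(A minimal sketch of the point above, using a hypothetical toy model rather than any real LLM API: generation is just sampling the next token from a probability distribution, and nothing in that loop checks whether the resulting text is true.)

```python
import random

# Toy "next-token" model: for each context, a distribution over plausible continuations.
# Purely illustrative; a real LLM computes these probabilities with a neural network.
TOY_MODEL = {
    "Q3 revenue was": [("$4.2M", 0.40), ("$7.9M", 0.35), ("flat", 0.25)],
}

def sample_next_token(context: str) -> str:
    """Pick the next token according to the model's probabilities.

    Note what is absent: no lookup against the actual books, no notion of
    'true' vs 'false' -- only 'plausible given the training distribution'.
    """
    tokens, weights = zip(*TOY_MODEL[context])
    return random.choices(tokens, weights=weights, k=1)[0]

print("Q3 revenue was", sample_next_token("Q3 revenue was"))
```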
-
@ferrix @Natasha_Jay @wendynather "Making up plausible shit" is a human consultant's job! AI is going to replace us!
@maccruiskeen @Natasha_Jay @wendynather MUPSaaS
-
@Natasha_Jay "To an artificial mind, all reality is virtual."
-
@Natasha_Jay I suspect that one of the few applications where a heavily hallucinating LLM can outperform a human would be in replacing board members, C-suite executives, and their direct reports. I propose a longitudinal study with a control group of high-level executives using real data and an experimental group using hallucinated, or maybe even totally random, data filtered into plausible ranges, with executive compensation deltas as the metric.
Remember who funds AI.
https://www.al-monitor.com/originals/2024/05/saudi-prince-alwaleed-bin-talal-invests-elon-musks-24b-ai-startup
https://www.washingtonpost.com/technology/2025/05/13/trump-tech-execs-riyadh/
Disaster Capitalists -- they want another bursting bubble that sends another generation of wealth upwards to the 1%.
Global financial crashes & mass dissatisfaction provide the makings for fascist movements & profitable imperialist wars of extraction.
Austerity is the midwife of fascism
Funding the Future (www.taxresearch.org.uk): Richard Murphy on developing a fairer and sustainable economy
"What will it take to beat the far right?" (www.ips-journal.eu): The far right feeds on economic despair and exclusion. Only policies that deliver security, dignity and belonging can starve it.
-
Hallucinating numbers, you say?
It must be learning from the current American administration...
-
@Natasha_Jay someone is getting fired
-
@Natasha_Jay This part in the original post is fantastic:
"The worst part I raised concerns about needing validation in November and got told I was slowing down innovation."
hxxps://www.reddit.com/r/analytics/comments/1r4dsq2/we_just_found_out_our_ai_has_been_making_up/
@drahardja @Natasha_Jay I believe it's supposed to be h*tt*ps
-
@Natasha_Jay It seems to me that AI tools have been released prematurely to generate revenue from the massive investments AI tech companies are making. This type of hallucination could cause serious damage to any business, organisation or person using it. Who is accountable in the end? Given the history of big tech's social media platforms and the harm they are causing, you can bet they'll accept no responsibility.
