  • 0 Votes
    12 Posts
    1 Views
    DoubleTreble 🇺🇦🥰🇵🇸🌍🇨🇦😺🇬🇱💚🧶
    @royaards Yep! That looks like where we're headed 🤬
  • 0 Votes
    1 Posts
    0 Views
    AnthonyA
    I've been playing around with this set of ideas and questions:

    An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.

    These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast, we can speak of individual points within the set of real numbers). Time and space are both continua in this way.

    Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but never fully can. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite (see the sketch after this post). There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the Retina displays, or if you look at the data within a PNG.

    Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.

    With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world, with its smoothly varying continuum of time (so science tells us, anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?

    This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.

    #AI #GenAI #GenerativeAI #LLMs
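    On the "inspect individual pixels" point, here is a minimal sketch of what that dispelling looks like in practice, assuming Pillow is installed and a local file named cat.png exists (the library choice and filename are illustrative, not anything from the post): however photorealistic the image, it reduces to a finite grid of small integers.

    ```python
    # Minimal sketch: an image is a finite grid of integer samples you can
    # inspect one by one. Assumes Pillow is installed and cat.png exists
    # locally (both assumptions, chosen only for illustration).
    from PIL import Image

    img = Image.open("cat.png").convert("RGB")
    width, height = img.size
    print(f"{width} x {height} grid -> {width * height} discrete samples")

    # A single pixel is just three integers in 0..255, nothing more.
    r, g, b = img.getpixel((width // 2, height // 2))
    print(f"centre pixel: R={r} G={g} B={b}")

    # "Zoom in" on an 8x8 patch with no interpolation: the Lite-Brite
    # structure of the picture becomes visible immediately.
    patch = img.crop((0, 0, 8, 8)).resize((256, 256), Image.NEAREST)
    patch.save("cat_patch_zoomed.png")
    ```

    The same kind of inspection works on a decoded video frame or a texture pulled off a 3-d mesh, which is the dispelling-the-illusion move the post describes.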
  • 0 Votes
    1 Posts
    65 Views
    Miguel Afonso Caetano
    "In 2024 alone, private U.S. investment in artificial intelligence reached roughly $109 billion, according to Stanford’s AI Index Report 2025, and major firms are now committing hundreds of billions more to AI infrastructure and data centers in 2025. For comparison, researchers estimate that ending homelessness nationwide would cost on the order of $9–30 billion a year; clearing the public-transit repair backlog would require about $140 billion; and cancelling student debt for millions of Americans could cost anywhere from $300 billion (for a $10,000 per-borrower plan) to over $870 billion. These are not impossible sums; they are simply directed elsewhere. The problem is not that we invest in technology — it’s that we do so to avoid investing in one another. We could rebuild public transit, deliver universal healthcare, cancel student debt, end homelessness — all projects of collective possibility — but instead we feed the circus. We reward those who promise salvation through algorithms while punishing those who simply demand dignity through policy. And if AI spend is today’s growth engine, it is also today’s concentration risk — a stimulus routed through a handful of firms, data centers, and supply chains.The defenders of this frenzy will tell us to look forward — to think of the productivity gains that will surely justify today’s excess. It is, they insist, short-sighted to critique the future before it arrives. But history is not a ledger that balances itself. Productivity gains are not moral gains; they are distributed, like everything else, according to power. The Industrial Revolution doubled output but also deepened inequality. The internet democratized information yet concentrated ownership. Efficiency rose; bargaining power fell."https://democracyatwork.substack.com/p/ais-grand-circus#AI #GenerativeAI #Capitalism #AIBubble #AIHype