Ed Zitron's a fantastic journalist, capable of turning a close read of AI companies' balance-sheets into an incandescent, exquisitely informed, eye-wateringly profane rant:
-
For these AI boosters, the point isn't to create an AI that can do the work as well as a person - it's to condition the world to accept the lower-quality work that will come from a chatbot. Rather than reading a summer reading list of *actual books*, perhaps you could be satisfied with a summer reading list of *hallucinated books* that are at least statistically probable book-shaped imaginaries?
21/
The bosses dreaming up use-cases for AI start from a posture of profound and proud ignorance of how workers who do useful things operate. They ask themselves, "If I was a ______, how would I do the job?" and then they ask an AI to do that, and declare the job done. They produce utility-shaped statistical artifacts, not utilities.
22/
Take Grammarly, a company that offers statistical inferences about likely errors in your text. Grammar checkers aren't a terrible idea on their face, and I've heard from many people who struggle to express themselves in writing (either because of their communications style, or because they don't speak English as a first language) for whom apps like Grammarly are useful.
23/
But Grammarly has just rolled out an AI tool that is so obviously contemptuous of writing that they might as well have called it "Go fuck yourself, by Grammarly." The new product is called "Expert Review," and it promises to give you writing advice "inspired" by writers whose writing they have ingested. I am one of these virtual "writing teachers" you can pay Grammarly for:
Grammarly is using our identities without permission
An AI feature in Grammarly called “expert review” has been using the names of staff members at The Verge in AI-generated comments without their knowledge or permission.
The Verge (www.theverge.com)
24/
This is not how writing advice works. When I teach the Clarion Science Fiction and Fantasy Writers' workshop, my job isn't to train the students to produce work that is strongly statistically correlated with the sentence structure and word choices in my own writing. My job - the job of *any* writing teacher - is to try and understand the *student's* writing style and artistic intent, and to provide advice for developing that style to express that intent.
25/
What Grammarly is offering isn't writing advice, it's *stylometry*, a computational linguistics technique for evaluating the likelihood that two candidate texts were written by the same person. Stylometry is a very cool discipline (as is adversarial stylometry, a set of techniques to obscure the authorship of a text):
But *stylometry has nothing to do with teaching someone how to write*.
26/
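The stylometric comparison described above can be sketched in a few lines. This is a toy illustration only, not Grammarly's method or any real tool's: it counts the rates of a handful of common function words (a classic stylometry feature, since writers use them largely unconsciously and independently of topic) and compares two texts with cosine similarity. The word list and sample texts are invented for the example.

```python
from collections import Counter
import math

# A handful of "function words" -- classic stylometry features, because
# their rates vary between authors but not much between topics.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def feature_vector(text):
    """Rate of each function word per 1,000 words of the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [1000 * counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (0.0 to 1.0 here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Two candidate texts: a high score suggests (but never proves) shared
# authorship -- and says nothing at all about how to write better.
text_a = "the cat sat on the mat and it was the best of times"
text_b = "the dog lay in the sun and it was a fine day"
score = cosine_similarity(feature_vector(text_a), feature_vector(text_b))
print(f"similarity: {score:.3f}")
```

Note what this sketch measures: word-frequency overlap, nothing more. It can tell you two texts are statistically similar; it cannot tell you why either one works as writing, which is the whole point of the criticism above.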
Even if you want to write a pastiche in the style of some writer you admire (or want to send up), word choices and sentence structure are only incidental to capturing that writer's style. To reduce "style" to "stylometry" is to commit the cardinal sin of technical analysis: namely, incinerating all the squishy qualitative aspects that can't be readily fed into a model and doing math on the resulting dubious quantitative residue:
27/
If you wanted to teach a chatbot to *teach* writing like a writer, you would - at a minimum - have to train that chatbot on the *instruction* that writer gives, not the material that writer has published. Nor can you infer how a writer would speak to a student by producing a statistical model of the finished work that writer has published. "Published work" has only an incidental relationship to "pedagogical communication."
28/
Critics of Grammarly are mostly focused on the effrontery of using writers' names without their permission. But I'm not bothered by that, honestly. So long as no one is being tricked into thinking that I endorsed a product or service, you don't need my permission to say that I inspired it (even if I think it's shit).
29/
What I find offensive about Grammarly is *not* that they took my name in vain, but rather, that they reduced the complex, important business of teaching writing to a statistical exercise in nudging your work into a word frequency distribution that hews closely to the average of some writer's published corpus. *This* is Grammarly's fraud: not telling people that they're being "taught by Cory Doctorow," but rather, telling people that they are being "taught" *anything*.
30/
Reducing "teaching writing" to "statistical comparisons with another writer's published work" is another way of saying "go fuck yourself" - not to the writers whose identities that Grammarly has hijacked, but to the customers they are tricking into using this terrible, substandard, damaging product.
Preying on aspiring writers is a grift as old as the publishing industry.
31/
The world is full of dirtbag "story doctors," vanity presses, fake literary agents and other flimflam artists who exploit people's natural desire to be understood to steal from them:
Writer Beware
Shining a small, bright light in a wilderness of writing scams
Writer Beware (writerbeware.blog)
Grammarly is yet another company for whom "AI" is just a way to lower quality in the hopes of lowering expectations. For Grammarly, helping writers with their prose is an irritating adjunct to the company's main business of separating marks from their money.
32/
In business theory, the perfect firm is one that charges infinity for its products and pays zero for its inputs (you know, "scholarly publishing"). For bosses, AI is a way to shift their firm towards this ideal.
In this regard, AI is connected to the long tradition of capitalist innovation, in which new production efficiencies are used to increase quantity at the expense of quality.
33/
This has been true since the Luddite uprising, in which skilled technical workers who cared deeply about the textiles they produced using complex machines railed against a new kind of machine that produced manifestly *lower quality* fabric in much higher volumes:
It's not hard to find credible, skilled people who have stories about using AI to make their work better.
34/
Elsewhere, I've called these people "centaurs" - human beings who are assisted by machines. These people are embracing the socialist mode of automation: they are using automation to improve *quality*, not *quantity*.
Whenever you hear a skilled practitioner talk about how they are able to hand off a time-consuming, low-value, low-judgment task to a model so they can focus on the part that means the most to them, you are talking to a centaur.
35/
Of course, it's possible for skilled practitioners to produce bad work - some of my favorite writers have published some very bad books indeed - but that isn't a function of automation, that's just human fallibility.
A reverse centaur (a person conscripted to act as a peripheral to a machine) is trapped by the capitalist mode of automation: quantity over quality.
36/
Machines work faster and longer than humans, and the faster and harder a human can be made to work, the closer the firm can come to the ideal of paying zero for its inputs.
A reverse centaur works for a machine that is set to run at the absolute limit of its human peripheral's capability and endurance. A reverse centaur is expected to produce with the mechanical regularity of a machine, catching every mistake the machine makes.
37/
A reverse centaur is the machine's accountability sink and moral crumple-zone:
AI is a normal technology, just another set of automation tools that have some uses for some users. The thing that makes AI signify "go fuck yourself" isn't some intrinsic factor of large language models or transformers. It's the capitalist mode of automation, increasing quantity at the expense of quality.
38/
Automation doesn't *have* to be a way to reduce expectations in the hopes of selling worse things for more money - but without some form of external constraint (unions, regulation, competition), that is inevitably how companies will wield *any* automation, including and especially AI.
eof/
-
Normally the "digital divide" refers to *access* to technology, but as access becomes less and less of an issue, the real divide is between people who know how to defend themselves from the cruel indifference of technology designers and people who are helpless before their enshittificatory gambits.
5/
@pluralistic thus I coined the term "#TechLiteracy" (or lack thereof as "#TechIlliteracy").
- As this is a more fitting term to differentiate between "us" #TechLiterates (who know how to set up some lightweight #Linux distro and make it work, not just for us but for others) and those who believe the #Enshittification, #bloat and crap is "a fact of life" (aka "#TechIlliterates")…
- Just like #literacy enables people to learn, interact and communicate, the same applies to using #technology and #media (see "#MediaLiteracy")…
Thus I see it as both a moral and social duty to spread "Tech-Literacy" among society, because decades of #illiteracy in #tech are now paying dividends and #Cyberfascists actively work on sabotaging and destroying #HumanRights and #CivilRights under #FalsePretenses like "#YouthProtection" (see "#AgeVerification")…
-
Zitron's stunt stuck with me because it's so simple and so apt. Every tech designer should be forced to use a stock configuration Acer Aspire 1 for a minimum of three hours/day, just as every aviation CEO should be required to fly basic coach at least one out of three flights (and one of two long-haul flights).
6/
@pluralistic IMHO politicians should be forced to exclusively use #PublicTransport 2nd if not 3rd class so they get to "#TouchGrass" (or rather "#TouchBase" with their constituents).