@StarkRG You assume generative AI is never trained solely on licensed and public domain works, which are therefore not stolen.
You also assume generative AI is used solely in data centres and cannot be used on modest hardware, no more power hungry than a gaming PC.
But in fact, both assumptions are wrong. It is more than feasible to train off the public domain and run these models on a PC with no more than 12GB of VRAM.
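As a rough sanity check on the 12GB figure, here's a back-of-envelope sketch. The parameter counts are ballpark assumptions for a Stable Diffusion 1.5-class model, not measurements of any specific checkpoint:

```python
# Approximate weight memory for a small diffusion model at half precision.
# Parameter counts (in billions) are ballpark assumptions, not measurements.
def weights_gib(params_billions: float, bytes_per_param: int = 2) -> float:
    """Weight memory in GiB at the given precision (2 bytes/param = fp16)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

components = {"unet": 0.86, "text_encoder": 0.12, "vae": 0.08}
total = sum(weights_gib(p) for p in components.values())
print(f"~{total:.1f} GiB of weights")  # ~2 GiB, well under 12GB of VRAM
```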
-
@atomicpoet Yes, it *can* be trained with ethically sourced material, but it *isn't*. And it doesn't matter what hardware is used; it'll require a similar amount of energy. If you use low-powered hardware, it'll draw less power (energy per unit time) but take longer.
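To make the power-versus-energy point concrete, a toy calculation. The wattages and runtimes are made-up illustrative numbers, not benchmarks:

```python
# Energy = power x time: lower-power hardware draws less per second
# but runs longer, so total energy can come out the same.
# All numbers below are illustrative assumptions, not measurements.
modest_gpu_watts, modest_gpu_hours = 100, 10   # slow consumer card
server_gpu_watts, server_gpu_hours = 500, 2    # fast data-centre card

print(modest_gpu_watts * modest_gpu_hours / 1000)  # 1.0 kWh
print(server_gpu_watts * server_gpu_hours / 1000)  # 1.0 kWh -> same energy
```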
-
@StarkRG No, it's not that it *isn't*. It *is*. Here's a fairly trained model:
https://huggingface.co/Mitsua/mitsua-likes
Also, you are assuming ongoing training. However, you can use a pre-trained model on a local machine that requires only short bursts of compute. No retraining. No dataset. Roughly the same energy class as a heavy game or a 3D render.
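For illustration, a minimal local-inference sketch using Hugging Face's diffusers library. The loading arguments here are assumptions rather than the card's canonical usage; the model card on Hugging Face is authoritative:

```python
# Minimal local inference sketch: load a pre-trained checkpoint once,
# then each generation is a short burst of GPU compute. No training,
# no dataset on disk. Loading details are assumptions; check the model
# card for the exact arguments.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Mitsua/mitsua-likes",
    torch_dtype=torch.float16,  # half precision to fit consumer VRAM
    trust_remote_code=True,     # assumed: the repo ships a custom pipeline
)
pipe = pipe.to("cuda")

# One image costs seconds of compute, then the GPU goes back to idle.
image = pipe("a watercolor landscape at dawn").images[0]
image.save("out.png")
```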
-
@atomicpoet @StarkRG But these types of narrowly-trained models (even assuming the "open licensed and public domain" claim is accurate, which it mostly isn't) are also very limited and extremely likely to be wrong, especially in sensitive use cases. Even in this case, where all the model can do is generate a particular style of anime image, "The Mitsua Likes struggles with most modern concepts and has difficulty understanding complex prompts".
@wcbdata @StarkRG I gave one example. There are many others. Whether a narrowly trained model is powerful, weak, or boring is orthogonal to the point I was making.
If someone claims a model is public domain or open-licensed and that claim is false, that’s not a vibes debate. That’s an evidentiary question for courts. I’m not an auditor and neither are you.
Capability, usefulness, and correctness are separate axes from sourcing and authorship. Smuggling model quality into an ethics argument doesn’t strengthen it. It just changes the subject.