I wanted to see if generative AI could *perfectly* replicate my original work.
On the left is my photo. A Vancouver night. Rain-slicked pavement. People walking. Nothing exotic. On the right is the best AI could do.
I went through an absurd number of iterations, far more than was reasonable, explicitly instructing the model to reproduce everything about the original image.
It immediately added a train. There is no train in my photo.
I told it to remove the train. It resisted. Hard. After many retries, the train finally disappeared and we were closer.
The composition was close. The mood was close. But one thing never arrived: the chromatic temperature.
No matter how many times I tried, it could not reproduce the palette. I even spoon-fed it the hex codes: #3A2A3D, #6E4A5A, #8FA39B, #D7E6E3, #1E1F22. No ambiguity. No vibes. Literal instructions.
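If you want to quantify that drift, one rough check is to measure how far each pixel of the generated image sits from its nearest palette color. A minimal sketch, not what I actually ran; the filename is a placeholder:

```python
# Rough palette-adherence check: average distance from each pixel of an image
# to the nearest color in the target palette. Lower means closer to the spec.
from PIL import Image
import numpy as np

PALETTE_HEX = ["#3A2A3D", "#6E4A5A", "#8FA39B", "#D7E6E3", "#1E1F22"]

def hex_to_rgb(h: str) -> tuple:
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

PALETTE = np.array([hex_to_rgb(h) for h in PALETTE_HEX], dtype=float)

def mean_palette_distance(path: str) -> float:
    """Mean Euclidean RGB distance from every pixel to its closest palette color."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float).reshape(-1, 3)
    dists = np.linalg.norm(pixels[:, None, :] - PALETTE[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# "ai_output.png" is a placeholder filename; run the same check on the original
# photo to see how much larger the gap is for the generated version.
print(mean_palette_distance("ai_output.png"))
```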
I asked the AI if it believed it followed my instructions. It admitted it didn't. Not only did the scene logic drift, it outright stated that it could not enforce the palette. It could describe it. It could approximate it. It could not obey it.
Why?
Because AI training learns correlations, not rules. The model is optimized to reproduce patterns found across billions of images. In that dataset, rainy urban night scenes overwhelmingly correlate with warm sodium lighting, amber reflections, and brown-black shadows.
My palette is real, but it's an outlier.
When an image deviates too far from learned expectations, it scores worse internally. Cool green-purple fluorescent night scenes are statistically rarer than warm ones, so the system nudges the output back toward the mean. When pushed, it "corrects" what it assumes are sensor artifacts.
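Here is a deliberately oversimplified numeric sketch of that pull. Real image models are not trained on a single number like this, but the averaging pressure is analogous: when the data is overwhelmingly warm, the error-minimizing guess lands near the warm average, far from a rare cool scene.

```python
# Toy illustration of regression toward the dataset mean. The "color temperature"
# numbers are invented for the example, not measured from any actual model.
import numpy as np

rng = np.random.default_rng(0)

warm = rng.normal(2700, 200, size=950)   # 95% warm sodium-lit night scenes
cool = rng.normal(6000, 300, size=50)    # 5% cool green-purple fluorescent scenes
dataset = np.concatenate([warm, cool])

# The single guess that minimizes mean squared error over the data is its mean.
best_constant_guess = dataset.mean()
print(f"error-minimizing guess: ~{best_constant_guess:.0f}K")  # roughly 2865K, i.e. warm
print("the outlier scene sits near 6000K, so it gets pulled back toward warm")
```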
The model's core question is not "does this obey a strict color spec?" It's "would a human believe this is a plausible photo?"
My image is plausible. It exists. I took it. But it's atypical, and atypical gets sanded down.
So the demand for perfect replication keeps getting diluted.
In short, my palette conflicts with the model's learned statistical center of gravity. When that happens, training always wins.
This isn't limited to art. It shows up in math too. Ask an AI to pick a random number between 1 and 10 and it will almost always give you 7. Which means it isn't picking randomly at all. It's picking the number a human expects.
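For contrast, here is what a genuinely random picker looks like: over many draws, every number shows up about equally often, with no favorite.

```python
# A genuinely uniform picker, for comparison with the "it always says 7" behavior.
import random
from collections import Counter

counts = Counter(random.randint(1, 10) for _ in range(1000))
print(sorted(counts.items()))  # each of 1..10 lands near 100 draws, no single favorite
```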
So the real takeaway is this: first, AI punishes deviation and drifts toward the statistical mean; second, humans aren't going anywhere, because humans are exceptionally good at deviation.
AI can generate a pleasing night scene. But an oddly colored one? It can't let that stand.
I'm not anti-AI art. Sometimes the average is exactly what you want. But this exercise makes the limits obvious.
You can prompt and re-prompt forever. If the model decides your work is statistically deviant, it will never reproduce it.


-
@atomicpoet funky lights on the AI version
-
Sriram "sri" Ramkrishna - đź Yeah, way too many of them.
-
Have you tried negative prompts and importance?
(no train) will pay special attention to not generating the train.
(((No train))). Other engines use different syntax, like --negative prompt. Other examples of negative prompts:
"worst quality, low quality, lowres, blurry, pixelated, distorted, jpeg artifacts, compression artifacts, bad art, ugly, cartoonish, out of focus, watermark"
Using #AI is a learned skill.
-
@n_dimension Yep, but negative prompts are a lot like playing whack-a-mole. And the further problem is that AI often just ignores your instructions when they conflict with the statistical mean.
-
AI tends to use too much pattern repetition, which makes it unnatural and boring (look at the street lights).
In your scene, repetition is only suggested and hides behind other objects, which allows the viewer to fill in the blanks and make it more interesting.
-
@atomicpoet
This is quite possibly the most frightening thing I've read about so-called AI.
-
@sunflowerinrain Really? I kind of think it's good to know of AI's clear creative limitations.
-
@hadon I'm fine with flaws. That can be edited with further prompts. The inability to change chromatic temperature, though? That's a dealbreaker.