“Why AI Isn’t Going to Make Art”, Ted Chiang, 2024-08-31:

…We can imagine a text-to-image generator that, over the course of many sessions, lets you enter tens of thousands of words into its text box to enable extremely fine-grained control over the image you’re producing; this would be something analogous to Photoshop with a purely textual interface. I’d say that a person could use such a program and still deserve to be called an artist.

The film director Bennett Miller has used DALL·E 2 to generate some very striking images that have been exhibited at the Gagosian Gallery; to create them, he crafted detailed text prompts and then instructed DALL·E to revise and manipulate the generated images again and again. He generated more than 100,000 images to arrive at the 20 in the exhibit. But he has said that he hasn’t been able to obtain comparable results on later releases of DALL·E [DALL·E 3].

I suspect this might be because Miller was using DALL·E 3 for something it’s not intended to do; it’s as if he hacked Microsoft Paint to make it behave like Adobe Photoshop, but as soon as a new version of Paint was released, his hacks stopped working. OpenAI probably isn’t trying to build a product to serve users like Miller, because a product that requires a user to work for months to create an image isn’t appealing to a wide audience. The company wants to offer a product that generates images with little effort.

[Why does Chiang have no personal examples? Apparently he refuses to use any AI ever.]