
Does DALL-E image generation leave the artist in the dust?
The speed of creation is scary and addictive
Generative AI is happening. How do artists, trained to work with their hands, pencils, brushes, and tactile materials, deal with this enormous digital change in visual culture?
How does it feel to see an image generated instantaneously based on a word prompt?
It feels strange, alien, disconnected.
Two paths
Artists have only two options: 1) stay away from AI image generation and continue their hands-on work in the studio, or 2) embrace AI, explore it, learn from it, and ultimately thrive with it.
Explore the narrative first
I decided on option 2. I put my brushes and pencils to rest for a while and entered the prompt creation room on the AI floor.
Generative image AI such as DALL-E flourishes on wittily prompted stories, associative scene-setting, emotional descriptors, color ranges, and other embellishing details.
When we write the prompt, we are pitching a story to the image generator. The image is rendered from that word input; it is the language-based model that defines the image. Choosing your prompt descriptors precisely and narratively is therefore the foundation of AI image generation: "a goldfish" yields a generic render, while a goldfish given a mood, a palette, and a setting gives the model a scene to stage.
Elements of the narrative
Remember the primary UX tools we define in research to tell the users' story, goals, pain points, and aspirations? The mise-en-scène we need to have before we embark on anything design-related?
Persona (hero), story, scenario, conflict, goal
If we use this formula in the image AI prompt, results will follow.
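To make the formula concrete, here is a minimal sketch of how the five elements could feed a DALL-E request through the official `openai` Python client (v1+). The `build_prompt` helper and its example values are my own illustration, not a fixed recipe:

```python
# A minimal sketch, assuming the official `openai` Python client (v1+)
# with OPENAI_API_KEY set in the environment. `build_prompt` and its
# example values are illustrative, not an exact prompt from my session.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

def build_prompt(persona: str, story: str, scenario: str,
                 conflict: str, goal: str) -> str:
    """Fold the five narrative elements into a single prompt string."""
    return (f"{persona}, {story}. Setting: {scenario}. "
            f"Tension: {conflict}. Mood and goal: {goal}.")

prompt = build_prompt(
    persona="a small goldfish hero",
    story="setting out on its first journey beyond the bowl",
    scenario="a smooth, well-rendered 3D scene with fresh, cute colors",
    conflict="the open water is vast and unfamiliar",
    goal="a lovely, hopeful mood",
)

# DALL-E 3 renders one image per request (n=1).
result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```

The point is not the code itself but the discipline it enforces: every element of the narrative gets named before the prompt is sent.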
Learning how to prompt
My first few image prompts were lame. Not much happened. The story was random, the descriptors not precise enough. The output was stylistically weird.
I was looking for something smooth, well rendered in a 3D style, fresh colors, cute, and lovely. The prompt needed a hero (the goldfish), a surrounding (the…