DALL-E image generated with the prompt: “A smiley furry flying towards at me holding a mouse, 3D render, blue”, and edited.

Does DALL-E image generation leave the artist in the dust?

The speed of creation is scary and addictive

Eva Schicker
6 min read · Feb 28, 2023


Generative AI is happening. How do artists, trained to work with their hands, pencils, brushes, and tactile materials, deal with this enormous digital change in visual culture?

How does it feel to see an image generated instantaneously based on a word prompt?

It feels strange, alien, disconnected.

Two paths

Artists have only two options: 1) stay away from AI image generation and continue their hands-on work in the studio, or 2) embrace AI, explore it, learn from it, and ultimately thrive with it.

Explore the narrative first

I decided on option 2. I put my brushes and pencils to rest for a while and entered the prompt creation room on the AI floor.

Generative image AI, such as DALL-E, flourishes on wittily prompted stories, associative surrounding scenarios, emotional descriptors, color ranges, and many additional embellishing details.

When we write the prompt, we’re pitching the story to the AI image generator. The image is rendered from the text input: it is the language-based model that defines the image.
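For readers curious about the mechanics, here is a minimal sketch of that prompt-to-image step using the OpenAI Python SDK (the pre-1.0 interface current in early 2023), rather than the DALL-E web interface described above; the output handling shown is an assumption for illustration only:

```python
# Minimal sketch: turning a text prompt into a DALL-E image via the OpenAI API.
# Assumes the pre-1.0 openai Python SDK and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The prompt carries the whole "story": subject, mood, medium, color range.
prompt = "A smiley furry flying towards at me holding a mouse, 3D render, blue"

response = openai.Image.create(
    prompt=prompt,    # the language input that defines the image
    n=1,              # number of variations to render
    size="1024x1024"  # output resolution
)

# The API returns a URL to the rendered image rather than raw pixels.
print(response["data"][0]["url"])
```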


Written by Eva Schicker

Hello. I write about UX, UI, AI, animation, tech, fiction, art, & travel through the eyes of a designer & painter. I live in NYC. Author of Princess Lailya.
