You’ve probably noticed how hot AI-generated art is right now. Sites like DALL-E, Midjourney, and NightCafe have made it possible for anyone to feed specific words to artificial-intelligence bots and see what images those systems deliver. A quick search will turn up millions of examples of how these computers are interpreting language and translating it into images.
The design team at Shift Worship wanted to see if they could use one of these sites, Midjourney, to help make a collection for churches to use in worship. The result is the Autumn Skynet Collection, one of our most popular sets this season.
We asked Megan Watson to explain to us how the team used Midjourney to build this set.
Q. What was the first step in creating a collection like this?
A. We started by testing out several different kinds of prompts and images to see what could be created. We came up with some pretty fun and even bizarre images. This process required researching rich words and vocabulary that could narrow down the results to get the right look and feel for our design. The obvious choices were “fall, autumn, forests, leaves, etc.,” but then we used prompts that also included cues about art style, lighting, coloring, and more. It turns out it’s pretty useful to have a good understanding of art history!
Q. So was it just finding the right combination of words to feed the AI?
A. The prompts are only part of the process. Once we achieved a reasonable and unique look, we used the AI tools to refine the images further and create variations. This can take a lot of iterations, trial and error, and patience. Our rule was to make sure each image had a truly spacious feel so it could work as a motion, lend itself to overlaid text, and still convey a full environmental feeling.
Q. What did you have to do to the final images to make them usable?
A. Since most AI software outputs small imagery, we first take those images and upscale them into a usable state using Adobe Illustrator. Next we take the images into Photoshop to clean them up. There tend to be a lot of odd “extras” in AI-generated imagery that we take out or manipulate into other parts of the image.
Q. What else did you have to do?
A. We also add color correction, recrop the layout, add details, and even move whole parts of the image around if needed! Then the image is ready to be made into a motion. We manipulate the base image in After Effects using various masks, overlays, and, in this case, leaves to develop the motion effect. We also often bring in more lighting and color correction to finish the motion.
Q. What about copyright? Who owns the original computer-generated art?
A. We carefully researched our image rights and uses and made sure we had taken all of the necessary steps to ensure those were in place. We also strictly used only certain types of images so that no copyrighted or trademarked elements would appear in our designs.
Q. So is AI-generated art really “art”?
A. I view these generated images as a way to explore art styles and not a form of artistry itself. The designs are a helpful tool in our toolbox, along with many other kinds of images we use to create beautiful collections.