MidJourney Will Change Digital Art Work Forever


Last June, six women came together on Zoom to create a magazine cover. The women included editors from Cosmopolitan, members of OpenAI, and digital artist Karen X. Cheng. They typed prompts into a field, tested different ideas and eventually produced a magazine cover for one of the world’s most popular fashion magazines.

The image was made by DALL-E, a digital imaging service powered by artificial intelligence. To create the picture, Cheng prompted the system to produce a “wide-angle shot from below of a female astronaut with an athletic feminine body walking with swagger toward camera on Mars in an infinite universe, synthwave digital art.”

It was the first time artificial intelligence had created a magazine cover, but it wasn’t the last. Since then, Italian Vogue has placed model Bella Hadid on an AI-generated background. And in March this year, Réponses Photo, a French photography magazine, featured an old man on the cover of one of its editions to illustrate the difficulty of telling AI images from real photographs.

Designing magazine covers used to be the high point of a graphic designer’s career. It’s now the kind of work that can be farmed out to software and completed in minutes by typing text onto a screen.

Like its rival Midjourney, DALL-E has ingested huge amounts of data. Its operators have fed it vast numbers of images together with their captions, enabling the algorithm to match words to pictures. Once the algorithm has seen the word “tree” associated enough times with a large brown trunk and green leaves, it reproduces the same shape in response to a request for that term. But it can also combine different terms, adding a squirrel to the tree, adjusting the leaves for the season and the tree species, and changing the appearance to match any of the artistic styles its data processing has encountered.

Other AI programs work in a similar way. ChatGPT, a large language model, was mostly trained on sections of the internet archived between 2016 and 2019 by Common Crawl, a database. The algorithm turns the words it encounters into numbers, places those numbers into categories, and uses its database to calculate the probability of one number following another. For writers, the result has been the ability to generate short pieces of text and even poems in any style from Shakespeare to Larkin. For artists, AI algorithms threaten to make skill with pencil and watercolors irrelevant.
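The probability idea can be sketched in a few lines of Python. This toy counter simply tallies which word follows which in a tiny sample corpus and turns the counts into probabilities; it is, of course, vastly simpler than the transformer architecture a system like ChatGPT actually uses, and the corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web text a real model ingests.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probability(word, candidate):
    """Probability that `candidate` follows `word` in the toy corpus."""
    total = sum(following[word].values())
    return following[word][candidate] / total if total else 0.0

print(next_word_probability("the", "cat"))  # "cat" follows "the" 2 of 4 times -> 0.5
```

A real model estimates these probabilities over many thousands of preceding words rather than just one, but the underlying question it answers is the same: given what came before, what is most likely to come next?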

A New Era in Art

Andrei Kovalev, a visual artist who has been working in commercial photography and movies for sixteen years, sees the rise of AI-generated imagery as the start of a new era.

“We are witnessing the most dramatic change in art in human history,” he says. “With Midjourney and similar AIs, art’s ‘technical component’ becomes obsolete.”

Artists who were technically skilled—who could produce beautiful or intricate imagery—have tended to be highly regarded even if their ideas were weak and the stories their works told were poor, Kovalev argues. Now that anyone can produce beautiful imagery, production becomes less important than concepts and stories.

“To me, this is a positive change,” he says.

To make that visual storytelling easier, Kovalev has created Midlibrary, a collection of styles that artists can use to create AI-generated images with their chosen look. What started as a single Web page with a list of 60 styles has become one of the biggest Midjourney-related projects on the Web. The catalog now lists more than 2,250 styles submitted by a community of 216 volunteers and contributors. Every month, the library receives more than 1.4 million requests. A new YouTube channel guides the library’s users through different aspects of Midjourney, matching their prompts to their desired results.

That a visual artist is promoting the use of Midjourney might seem counterintuitive, but Kovalev sees AI pushing artists to focus less on image-making than on the effect an image produces, a role only an artist can fulfill.

“With Midjourney, anyone can produce an outstanding image,” he says. “But to make that image evoke emotions, to tell a meaningful story with it—that’s not something a regular person can easily do. Or an AI.”

Software like Midjourney then becomes not a replacement for artists and designers but a tool that makes production easier and results faster to obtain. Juan Noguera, an Assistant Professor of Design at Rochester Institute of Technology, has described how he used DALL-E in an industrial design process.

Noguera wanted to design a set of small household objects for tourists visiting Antigua, a town in Guatemala near the place he grew up. He started by prompting DALL-E to create images of nostalgic household objects, and received pictures of sad-faced erasers and a tissue box with a frowning face. Adding the word “Antigua” to the prompts introduced some of the town’s characteristics: the cobblestone streets, colonial architecture, and a nearby volcano.

Noguera then built on those suggestions to create a mock-up in Photoshop of a tissue box shaped like the nearby Volcán de Agua before rendering a three-dimensional CAD model. That led to the design of a companion cast-iron matchstick holder.

“Even though I made all of the design choices, the AI generator helped me navigate my abstract design goals,” he wrote. “It’s hard to say if I would have landed on these prototypes on my own.”

This year Noguera has been teaching students to integrate AI image generation into their product design process.

The Right Tool for a Digital Job

Jesus Santana has already incorporated AI into his design process. An artist who offers digital art, graphic design and vector illustrations on Fiverr, he recently started offering clients AI imagery. In two months, he has generated more than 2,000 pictures for clients, using Midjourney to produce videogame characters, business logos, altered copies of famous paintings, and modeling campaigns. The most popular request is to place the client’s face on another character or against a custom background.

A project begins with Santana listening to the client’s ideas. “The Midjourney AI has limitations in the creation process, so I talk to them and tell them if what they want can be made via AI,” he says. “I’m honest with them about this and if I can’t do the job, I recommend a different approach or a specialized artist for what they want.”

Sometimes he’ll have to use several prompts to reach the desired results and he might even have to merge multiple images to bring the client’s idea to life. The result is a portfolio of realistic images showing a samurai teaching a small girl, an arcane mirror, and a cartoon DJ playing music. Santana too sees Midjourney as complementing the artist’s job rather than replacing it.

“The AI tools have come a long way since their creation, sometimes at a scary pace, but in the end the human element is needed,” he says. “People still need a real person’s point of view and judgment to finish the job.”

Even as a tool, though, AI image generation has limitations. Some results have been laughably bad, with the generators struggling particularly to draw hands. That may improve with time, but for now any picture that includes fingers carries a risk.

Copyright is an issue too. In February this year, the US Copyright Office ruled that the images in a graphic novel created using Midjourney can’t be copyrighted, a protection that’s only available to the product of human creativity. To retain control of their work, artists and designers who use Midjourney and other AI tools will have to add significant new content to the outputs their prompts generate. Even the best storytellers will have to use their old-fashioned technical skills to enhance an image, or struggle to monetize their work and stop others from using it.

And most importantly, all AI-generated work is derivative. AI algorithms absorb vast amounts of data, then generate results similar to output already produced by others. Artists are influenced by the work of others, but they add their own experiences and impressions of the world, a step that’s beyond the ability of current AI platforms.

An artist, for example, can see a couple flirting on a bus, understand what they’re doing, and draw a picture that highlights elements of the scene that express what’s happening in the moment. Midjourney and DALL-E can’t describe a world they can’t see. They can’t express emotions they can’t feel, and they can’t reproduce a memory they’ve never experienced. They can only produce facsimiles of the interpretations others have already made.

For some designers, then, AI will be a tool that lets them tell a story faster and more easily than ever before. It will provide a shortcut that removes the hard slog of sketching, erasing and shading. But an AI platform can’t think up its own stories, and it can’t produce novel ideas or new insights. It might change the way some artists work, but it will always need artists to tell it what to paint.
