Generative AI Has Come So Very Far — But Is It Useful?
Diving into the world of generative art in search of practical applications.

Whether you like it or not, AI is changing the game for coders, movie makers, and bankers. But what about the rest of us? For AI EQ, we get our hands dirty trying to find pragmatic purpose in all the latest consumer-facing AI.
There’s a lingering feeling of unease I get when looking at generative art these days. It’s not necessarily the seasickness I used to feel at poorly rendered figures with six fingers or landscapes flouting the laws of physics. Much of generative AI’s output today is far more aesthetically steady.
My queasiness is more an extension of the fairly unavoidable debates over such creations. In movies and television, gaming and advertising, there have been jobs lost, bans won, and an ongoing argument over AI’s place in the creative realm. While that debate isn’t slowing down anytime soon, the tech behind generative AI marches forward and continues to get better — less uncanny, more culturally aware, faster, and frankly more magical (whether of the dark arts or the light would depend on your personal point of view).
Often lost in the argument is whether it’s, well, useful. That is, is there any practical use for those who aren’t artists, don’t work in a special effects department, and don’t create virtual worlds for a living? Does generative art have a day-to-day place for the rest of us?
I held onto this question as I gave a trial run to Adobe Firefly, an AI platform from the creators of Photoshop. Firefly is positioned as the way into AI for professional creatives, a tool aimed to, in Adobe’s words, streamline repetitive tasks, automate the production of high-quality assets, and allow commercial use of what you make, since the model is not trained on copyrighted material.
Like many other generative platforms, it’s mind-bogglingly easy to use, and at the same time its use cases are a bit perplexing. I know I’m not alone when I say I’ve found few truly practical uses for generative AI — other than making absurd and funny images for my friends or with my kids. So, looking for some more practical purposes, I dove in. Here’s what I found.
The Case For Young Creatives
It all started with a D&D campaign. Not my own, but the campaign of my best friend’s son. Catching up over the weekend, he regaled me with stories of the campaigns and characters he and his friends had created. A few hours of exposition later, I asked him to draw his characters. I admit I was thinking: what might generative AI do to help visualize the characters of this under-12 artist? He handed me fairly rudimentary but awesomely creative drawings of a few characters, including a blond, axe-wielding warrior and a half-man, half-lizard thief who is a great swordsman and a natural with a bow.
I tried to bring the drawing to life with Firefly’s Image to Video feature, but instead of the lizard man coming alive and breathing fire, an AI-generated hand swooped in and set the piece of paper containing his drawing on fire. Pretty cool, and more than a little alarming, but not what we were going for.
I found better luck with Text to Image. “A muscular, shirtless man who has the head of a dragon,” I wrote in the prompt. “He is carrying a bow and sword and wearing baggy pants, medieval boots, and a quiver of arrows on his back.” For the second character: “A muscular, shirtless warrior with long blond hair and a black mask that covers his eyes. He is carrying a bow and battle axe and wearing baggy pants, tall black boots, and a quiver of arrows on his back.” The results were wild — the reception mixed. He liked how ripped the characters were (natch; he was definitely going for Conan the Barbarian levels of muscle in his drawings). But he still likes his drawings better. What would he do with these? That’s unclear. He seems intrigued, but also nonplussed. The real campaign continues where it should: in the imaginings of a group of young D&Ders.
To get another perspective, I handed this to my kid, a teen with a little more art experience, whose portfolio landed them in a high school with a stellar visual arts program and who has a slew of original anime-inspired characters. I put them in front of the computer and let them do their thing. They chose Generative Fill to work on their characters. Perhaps out of respect for the art, my kid never touched the characters themselves, but played with the backgrounds. It was useful … a nice shortcut for them to put their character, Koi, in various backgrounds and settings. Would my kid use it? Sure, they told me, a little noncommittal. I found them later with a marker in hand, adding a background to another character in their sketchbook.
I admit that such character development isn’t “useful” per se. I’d get to more adult use cases soon, but I stand by my instinct to first bring this tech to junior artists. I think their use of it was the closest I would find to the spirit of what Adobe is looking to create with Firefly. The art generated here is, according to much of the marketing material around Firefly, meant to inspire, to fast-track ideas, and to create shortcuts to art. With these two, it did just that. What’s more, the kids were interested but not at all obsessed (as I feel I would have been had I seen this magical drawing machine in the ’90s). They looked upon the creation and went back to their own creative endeavors. The next generation might already get it.
Fashion Forward
I was inspired by seeing Generative Fill in action and wanted to try it out for a more practical application: shopping. My teen has pretty solid taste in clothes, but it’s also a very particular taste — and so they are very hard to shop for. I’ve tried to shop for them, or even to get them to describe what it is they want, and it has led to frustration and, let’s say, pricey miscommunication. The Generative Fill feature, it turns out, is pretty seamless in its ability to swap outfits. Simply circle an area of a picture (or drawing) and describe the article of clothing in the prompt. Voilà! It generates a generic version of said clothing, swapping it in for the previous clothes.
I found a nice, clear picture of my teen and went “shopping” for a style. I used the tool to outline their arms and upper body and typed in “black sweater.” Simple enough, but right on the money. The results were all beloved (black is their color). When I outlined their legs and typed “black short skirt,” a style my teen tends to like, it didn’t quite offer the same rewards (as if it knew a dad was typing this with his kid present, it came up with a bunch of long, conservative, and colorful skirts). What it did in both cases was start a discussion and give visual form to what my kid likes and doesn’t like. I’m confident enough now to buy them sweaters. I’d rather let them shop for their own skirts.
Some Minor Marketing
Continuing down the road toward ever more pragmatic uses of this tool, I sought some basic marketing help. Everyone could use a marketing department. Whether you’re running a garage sale, raising money for a friend’s or family member’s medical bills, or trying to get a few bucks for an old bike, a bit of visual marketing goes a very long way. It’s also a skill I don’t personally possess.
First, I thought I might get some help with a small vacation home I rent out in the Poconos. The Airbnb pics I have are serviceable at best. What I could use, I thought, short of hiring a proper photographer, was a sizzle reel. Making use of the Image to Video function, the most curious and chaotic of all the Firefly tools, I tried to put together a few scenes. My home is adorned with space-themed art, so with that in mind, I got to generating. I took a static picture of the master bed beneath the framed photo of the Voyager spacecraft that hangs (in real life) above it and asked the “camera” to zoom in on the photo and then continue flying through the (AI-generated) Milky Way. I banked the b-roll. Then I had it fly through the kitchen, over the dining room table, and into a poster of Mars’ Olympus Mons. After a few more journeys, I looked at my pile of b-roll. The results were less a marketing tool for a quirky vacation rental than a home tour that might fit in a Tim & Eric skit. In other words, they were wild, but for a public-facing house tour I was barking up the wrong tree. I’ll stick with the old-fashioned real estate photographer.
A simple start for a PTA poster, generated in seconds.
I found better luck starting a poster for a fundraiser. I’m on the executive board of my kid’s middle school PTA, and we, an underfunded upstart of a public school, are in need of an end-of-the-year fundraiser. So with the Text to Template function, I was able to upload a logo and type into the generator the kind of poster I was going for. A slew of designs were generated, and I was able to customize one to hit the old-timey marine theme I sought. It came up with a pretty darn good idea for a poster — in less than a minute. Sure, there’s a bit more work to do, adding info, text, QR codes, and all that, but it’s an impressive shortcut. This unique template would have easily taken me an hour to come up with on my own. A volunteer’s time-saver.
What Is It Made For?
All in all, the dive into Firefly has been fairly fruitful but also full of diversions. Not mentioned were the many strange experiments that were less task-oriented, less in search of purpose. As I tested the limits of the tool, using my imagination and seeing what it would come up with, I realized how capable it is — and how limited my views of its utility really are.
My most unsettling experiment was playing with the aforementioned Image to Video tool. I uploaded a photo of my wife in a chair with our cat sitting upright in her lap. “Have the cat turn its head and leap at the camera, knocking the camera over,” I wrote, thinking I would stump the AI either by breaking the fourth wall (“the camera”) or by asking it to bring an orange-striped 2D ball of fur to life. What I got was a cat — my cat — turning his head out of a still photo, looking at me, getting a little unsettled, and lunging at the camera. The camera then fell over and the video ended.
I’m not sure why this disturbed me so, but it did, and I never saved the video. I went back in a week later with my elementary-aged kid, aiming to recreate this ability to bring our cat to life. Perusing cat pics, he pointed to one of our cat in a Santa hat. “Turn him into Santa!” he said, casually, like that’s a thing. So I did. Uncanny hilarity ensued.
My cat becomes Santa. But why?
What does one do with that? These are wild parlor tricks, for sure, but they’re so far outside my expectations. It’s just … weird. I realize that trying to find pragmatic applications for something as truly cutting-edge, as fast-changing, as the world of generative AI art is a bit of a fool’s errand. It could bring my cat, who is alive if on the older side, to life from a photo. Even for an instant, it understood the cat-ness of him and animated him. Generative tools such as Adobe Firefly have practical applications for us laypersons — help with poster creation and character inspiration, fashion ideas and fixing old photos. But it’s the other stuff, the weird corners of generation, that I’m most interested in. Are they useful? I’m not sure that’s the right question. More creative minds aren’t asking that — they’re just diving in.