As robots have taken on more and more tasks, one of the biggest questions has been, “Will robots ever be creative?” That always seemed a laughably distant goal. Without consciousness, a robot couldn’t do more than follow directions. Or could it?
With the advent of AI and machine learning, it suddenly became possible to imagine an AI that could learn to interact creatively. Now we have a new question to deal with: “Will creative AI be more of a problem than it’s worth?”
Creative AI and art
One concern many have voiced is that generative AI could encroach on human-specific domains, like art. This fall, an AI artwork won the Colorado State Fair’s fine arts competition. The creator, Jason Allen, wasn’t an artist. He created the artwork using Midjourney, one of several generative AI art creators.
Allen himself had to overcome some personal worries about AI art. However, by the time he won the prize, he felt that AI “is a tool, just like the paintbrush is a tool. Without the person, there is no creative force.” Allen still needed to curate the AI’s responses to his prompt, and he ran the final versions through some other editing tools as well.
However, Allen faced considerable backlash, with some even suggesting that Allen should return his award. One person said that using Midjourney was like “entering a marathon and driving a Lamborghini to the finish line.”
If we see AI as a tool, it doesn’t need to inspire fear. Much of the trouble may stem from the sensational claims that often surround AI. If “creative” AI is framed as a replacement for human creativity, it will be perceived as a threat rather than an opportunity.
And AI doesn’t need to be a threat. Recently, some friends and I sat around a living room and tried out different prompts in Midjourney. I spent some time trying to create an “Amish superhero.” For some reason, it insisted on generating figures with a hat pulled down over their eyes. Clearly, AI can’t simply replace a human artist.
A good artist can use AI output as a starting point and refine it afterward, as Allen did. AI is useful as a tool, and it needs to be marketed that way. Sensationalizing it only distracts from the real-world applications that make a human artist’s job easier.
A good use case for creative AI
For example, AI can already be very useful in graphic design. Several companies have explored the possibilities that graphic design AI could bring to their business.
Recently, when sending out a marketing email, I discovered Mailchimp’s new Creative Assistant AI, which is fully integrated with its email builder. I was able to just enter my copy, upload a few images and choose some settings. The AI created lots of different possibilities and variations.
Creative Assistant saved me an hour of work; otherwise, I’d have had to build an entire graphic and then shift the text and images around. And no one worried that it could replace a marketing agent. In this case, AI filled an uncontroversial role. Who wouldn’t use a time-saving AI assistant?
Creative virtual assistants
If the goal of AI is to make our work easier, how about AI virtual assistants? Bots that can actually learn and can create new answers to our questions? Surprisingly, this subject has also launched a recent controversy. This time it surrounds the complexities of relating to a robot as though it were human.
Google has been developing a chatbot called LaMDA, which can return intelligent, human-like responses to prompts. Its replies seemed so human-like and self-aware that one of the engineers, Blake Lemoine, became convinced it had become sentient. LaMDA even created an allegory in which a wise owl saved forest creatures, a story intended to express its desire to help others.
Convinced that LaMDA was sentient, Lemoine wanted it treated as a person. For those, like me, who aren’t convinced that AIs are sentient, a different problem arises: a human-like AI might not be what customers are most comfortable with. Even if such an AI could stand in for a human interaction, customers could feel cheated upon learning they were dealing with a machine rather than a person.
Where generative AI fits in
What would work better is to optimize AI for helpfulness without making it appear human. If AI can take away tedious tasks and enable us to spend more time on what’s more important, it has done its job. If consumers know that that’s the purpose of generative AI, they will be much more comfortable with the role it plays.
Lynn Martin works in marketing for Brechbill Trailers.