
The rise of ethical issues from humanizing AI in customer experience

You land on a website one day and “Jay” sends you a welcome message and an invitation to chat. Jay has a smiling headshot and seems well-spoken enough. But is Jay a human customer service agent, or a bot? Even a few exchanges might not be enough to identify Jay’s true nature. You may have to say something completely unexpected to get any indication of whether Jay is really there or running on AI. The truth is that humans are easily capable of sounding robotic, and bots are increasingly capable of sounding human. It is against this uncertain backdrop that consumer doubt and the issue of eception arise. But what is eception, exactly?


Coined by Byron Reese, ‘eception’ is a term for AI programs designed to deceive users into thinking that they’re interacting with a human. Eception is a growing phenomenon in modern AI use. There’s an often deliberate lack of clarity for consumers when they interact with brands. Tools such as chatbots, automated responses, and new technology like Google Duplex mean that discerning the human from the machine is only growing more complex.

Some regard this AI advancement as an impressive achievement. Others see the blurring lines between bot and human as inevitable. But is eception ethical? It’s ambiguous at best. Eception involves the deception of consumers, but it doesn’t cause any direct harm. Indeed, you could argue that a lack of eception, i.e. failing to provide best-in-class AI support, causes more harm to the consumer relationship and brand experience. Then again, gaining from tricking or consciously confusing customers is ill-advised at best, unethical at worst.

One thing is clear: the growth of eception also breeds a mountain of questions and concerns.



Eception is particularly prevalent in marketing and ecommerce. As AI tools improve, consumer relationships are increasingly shaped by AI-powered experiences. So, does the phenomenon of eception help or hinder our marketing practices? Consumers are aware of the AI tools that exist; as a result, they’re also more skeptical of the correspondence they receive from businesses. The mere hint of eception stands to create doubt in the consumer. We doubt that we are being answered by a human, or that a human has even noticed that we are there. And, if left to wonder, this consumer doubt translates into distrust. The lack of clarity that accompanies eception, then, could be doing more harm than good. The backlash against Google Duplex serves as a stark example. Consumers don’t like the idea of being tricked. Falling prey to deception feels bad. It makes AI seem creepy and can even lead to feelings of repulsion and anxiety. The result is a poor experience and a damning view of the offending brand.


Eception is the product of a lack of clarity. So, clarity is the cure. Transparency worked for Google Duplex, which now includes a disclaimer before the AI begins talking. Transparent AI use means making it clear when a message is from a bot. It means being more open about the role AI tools play in your marketing strategies. It also means making it easy to reach a human team member if a consumer wants to interact further or find out more. That isn’t to say you must scream your AI use from the rooftops, nor should you apologize or rationalize. Many consumers, after all, are happy to chat with a bot for quick queries or smoother self-help. A simple ‘heads up’ message built into the bot’s content is enough. Transparent AI use in marketing also doesn’t mean a robotic tone of voice. Chatbots and other AI tools can use a friendly, human voice that fits your brand, without pretending they’re human.
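For illustration only, here is a minimal sketch of what such a ‘heads up’ greeting might look like when built into a bot. The names used (BotConfig, build_greeting) and the wording are hypothetical assumptions for this sketch, not any particular vendor’s API.

```python
# Minimal sketch of a transparent chatbot greeting.
# All names here (BotConfig, build_greeting) are hypothetical examples,
# not taken from a specific chatbot framework.
from dataclasses import dataclass


@dataclass
class BotConfig:
    bot_name: str          # the persona shown to the visitor, e.g. "Jay"
    brand_name: str        # the brand the bot represents
    human_handoff_hint: str  # how the visitor can reach a real person


def build_greeting(config: BotConfig) -> str:
    """Return an opening message that discloses the bot up front."""
    return (
        f"Hi, I'm {config.bot_name}, {config.brand_name}'s virtual assistant. "
        "I'm automated, but I can help with most quick questions. "
        f"{config.human_handoff_hint}"
    )


if __name__ == "__main__":
    config = BotConfig(
        bot_name="Jay",
        brand_name="Acme",
        human_handoff_hint="Type 'agent' at any time to chat with a human team member.",
    )
    print(build_greeting(config))
```

The point of the sketch is simply that the disclosure and the route to a human sit in the very first message, so the visitor never has to guess.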


As far as ethics go, it’s undeniable that transparent AI use is far less ethically ambiguous than eception. But what would a transparent approach to AI mean for our marketing practices? On the one hand, people don’t tend to have much faith in AI and bots. Until recent advancements, automated responses denoted an absent and frustrating service. Telling consumers that they’re interacting with AI could, in theory, taint the perception of the brand experience. But transparent AI use removes the doubt and the affiliated risks from using AI as a marketing tool. Consumers are more likely to accept AI if they feel comfortable and confident about how it’s used. If the AI isn’t trying to trick us by posing as a real person, we’re less likely to feel aversion when engaging with it. Ultimately, clarity breeds trust. And that trust can only boost your brand reputation. So, drop the eception and acknowledge your use of AI. Doing so could drive better consumer relationships, alongside wider AI acceptance.

