
I never expected AI to be unethical. My recent experience says a lot about where we are with AI implementation and where we are not.
Me and Marconi
I was in the process of cancelling an online subscription service, let’s call them “Marconi Media” (this story is less about one particular company and more about GPT AI implementation).
As with cancelling other services, I expected to search their site endlessly for the ‘Contact Us’ information and then spend more time listening to hold music after telling the automated voice, “Agent, Agent, Agent”, as if summoning the magic genie of cancellations.
However, I easily found the Cancel button, thanks in part to the FTC’s Click to Cancel rule, adopted in 2024 but now paused.
I expected clicking the button would finish the task. Instead, after I filled out a brief questionnaire, I was prompted with ‘please continue to the [AI] Chat to complete your request’… Strange to require a chat after completing the form.
This combination of a questionnaire followed by an AI chatbot is clunky and duplicative (the chat asked similar questions). A lesson for subscription service companies deploying AI into their user experiences.
AI Chat Persistence
Here is where the unethical part started. My subscription price was $24.99 per month. Immediately, the AI chatbot offered me a discounted price of $17.99, presumably for loyal customers (I had been a customer for more than 10 years). When I declined, things began to get a little strange. As I repeatedly asked if there was a lower cost, the price dropped to $7.99, then $6.99, and eventually $5.99. It took some persistence on my part.
More than 75% savings by repeating the same AI prompt. A lesson for customers.
This experience raises some important questions for companies developing their AI experience for their customers:
- Did the company give AI direct rules not to offer the best and lowest option to its customers?
- Did AI learn a pattern from human trainers to gradually lower the price during conversation, like an auctioneer or a skilled salesperson?
- Is the company aware that their AI is operating this way?
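Whether the behavior came from explicit rules or a learned sales pattern, it is easy to reproduce with a simple “offer ladder” that reveals the floor price only under persistence. Here is a minimal sketch; every name and price in it is a hypothetical illustration, not the company’s actual implementation:

```python
# Hypothetical offer ladder: prices the bot is permitted to reveal,
# in order. The best price comes last, only after repeated declines.
OFFER_LADDER = [17.99, 7.99, 6.99, 5.99]

def next_offer(decline_count: int) -> float:
    """Return the price offered after the customer has declined
    `decline_count` times; the floor price is reached only late."""
    index = min(decline_count, len(OFFER_LADDER) - 1)
    return OFFER_LADDER[index]

# A persistent customer walks the ladder down from $17.99 to $5.99.
for declines in range(4):
    print(f"Offer after {declines} decline(s): ${next_offer(declines)}")
```

A dozen lines of logic are enough to produce the experience I had, which is exactly why the questions above matter: someone either wrote that ladder or trained the system into it.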
Providing customers with the best services and the best prices is a cornerstone of building trust. A company that holds back its best offers appears disingenuous, and the experience left me feeling less valued. While I might expect a customer service representative working on commission to ‘haggle’ prices, I did not expect this from AI. A lesson about strategy.
First Do No Harm
Directly asking AI for the best offers should have resulted in just that. It reminds me of Isaac Asimov’s first rule that robots must do no harm. But what about financial harm by omission? I know this edges on being too philosophical, but should we reasonably expect AI to be honest when answering direct questions?
When I confronted the chat about why I was not initially offered the best price, the question was ignored, and it returned to ‘Would you like to accept this offer?’
Maybe someday my own AI Agent can negotiate with theirs, leveling the playing field of AI vs. AI. In the meantime, we might need to learn a few new skills to manage the AI we are currently encountering.
What are your thoughts on ethical AI?
Have you experienced similar situations, finding yourself negotiating with AI?
Thanks for reading. Please consider adding a comment and sharing your thoughts on this topic below.
Discover more from Derek W Gibson