
Making voice chatbots more effective by design


The days of repeating “representative” to an automated phone system could soon be over. A new wave of voice chatbots promises to remove the frustration of human-computer interactions – streamlining everything from customer service to logistics.

But voice chatbots can also make people uncomfortable and less willing to engage.

In a new study, Professor Yuqian Xu, who studies operations management at UNC Kenan-Flagler Business School, tested whether small tweaks to voice chatbots could harness the efficiency of automation while preserving the personability of humans.

Xu found that when voice chatbots disclose they are machines, their effectiveness decreases – but when they use interjections or filler words that make them sound more human, their effectiveness increases. When disclosure and human speech elements are combined, however, the improvement in outcomes is similar to when the voice chatbot is humanized without disclosure.

Companies, in other words, can be honest that they are using voice chatbots and maintain strong operational performance by pairing disclosure with mannerisms that mimic human speech.

Xu researches operations in financial services and digital platform sectors, specifically examining human behavior. She often partners with large companies to test her hypotheses and models, and some have implemented her recommendations based on her findings.

For this study, Xu focuses on an industry that’s invisible to most people yet essential to all: freight dispatching. Moving goods from one point to another is a time-sensitive process that can be fraught with complications. Truck dispatchers act as intermediaries between suppliers who want goods delivered and drivers who transport the goods.

An automated dispatch system is likely best suited to juggle multiple pieces of information and align suppliers’ needs with drivers’ availability. But how will truck drivers react to interacting with a voice chatbot instead of a human? Xu worked with a large truck sharing platform in Asia to examine how voice chatbot identity disclosure, as well as the use of interjections and filler words, changed drivers’ conversations with voice chatbots.

Identity disclosure

In the past, you could easily tell whether you were talking to a voice chatbot or a human. The former was monotone and largely confined to a menu of set options: Press 1 if you’re having trouble with your order – and angrily press buttons if you find this process maddening! Humans, on the other hand, listen, ask clarifying questions and pivot according to your needs.

Today, responsive AI voice chatbots can do most of the things a human can and even simulate a natural voice.

What happens, then, if you find out you’re speaking with a voice chatbot? In some places, such as California, laws mandate disclosure when someone is speaking with a voice chatbot. In those cases, it’s a question of what happens when – not if – you realize the voice on the line isn’t human.

Research on a phenomenon known as “algorithm aversion” suggests many people feel uncomfortable when dealing with machines, citing the presumed lack of emotion and lower trustworthiness.

Xu’s research reaffirmed this tendency among the 11,000 truck drivers who participated in her study. When voice chatbots disclosed they weren’t human, drivers were more likely to hang up, and the overall response rate decreased by 11%.

Managers might take this as a sign that transparency is too costly to pursue. Even a small decrease in response rate for a sprawling industry such as freight dispatching can add up to enormous sums in lost productivity.

Xu’s study reveals another path that can counteract the negative effects of disclosure.

The solution: Making voice chatbots sound more human

Humans don’t speak perfectly. We interrupt ourselves and add filler words. So Xu studied how voice chatbots using interjections and filler words might cause people to apply human social norms to devices. She suspected, in part based on longstanding computer science frameworks, that the rough edges of human language might smooth operational performance by increasing truck drivers’ trust.

She was right. Making the voice chatbot sound more human – with our occasional interruptions and fillers such as “uh,” “um” and “well” – led drivers to respond to the voice chatbots more often, talk with them for longer and accept more orders to drive materials for a specific supplier. The positive effect appeared to be similar with or without voice chatbot identity disclosure.

These improved outcomes, she says, might stem from trust: humanizing voice chatbots appears to increase it, even when the chatbot disclosed that drivers weren’t speaking with a human. Drivers talking with the humanized voice chatbot shared more information, for example, suggesting they placed greater confidence in it.

Xu’s research suggests that voice chatbots do not need to hide their automated nature to succeed: Honest mimicry of human speech patterns can still generate meaningful gains. The truck drivers trusted a human-sounding voice chatbot even when they knew it wasn’t human.

Amid recent efforts to ensure AI transparency and accountability, including from the White House, Xu’s study offers a path to improve voice chatbot performance while remaining honest about their automated nature.

A template for companies to implement

Fortunately for companies, humanizing voice chatbots based on the proposed solutions in this study is relatively easy and requires only slight modifications to existing systems.
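To illustrate how slight such a modification can be, here is a minimal, hypothetical sketch of the two interventions the study tested – an honest identity disclosure plus occasional filler words inserted before a script is sent to a text-to-speech engine. The function and message names are illustrative assumptions, not the system YunYou Freight actually deployed:

```python
import random

# Filler words reported in the study as making the bot sound more human.
FILLERS = ["uh", "um", "well"]

def humanize(utterance: str, rng: random.Random, p: float = 0.3) -> str:
    """Occasionally prepend a filler word to clauses so the synthesized
    voice sounds less scripted. Splits on ", " as a rough proxy for
    clause boundaries; p controls how often a filler is inserted."""
    clauses = utterance.split(", ")
    out = []
    for i, clause in enumerate(clauses):
        if i > 0 and rng.random() < p:
            clause = f"{rng.choice(FILLERS)}, {clause}"
        out.append(clause)
    return ", ".join(out)

def with_disclosure(utterance: str) -> str:
    """Pair the humanized speech with an upfront identity disclosure."""
    return "Hi, I'm an automated assistant. " + utterance

# Example dispatch prompt (invented for illustration).
rng = random.Random(42)
script = with_disclosure(humanize(
    "I have a load near your location, can you pick it up this afternoon?",
    rng))
print(script)
```

The output string would then be handed to whatever text-to-speech system the platform already uses, which is why the change requires no rework of the underlying dispatch logic.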

YunYou Freight, the platform Xu collaborated with on the study, implemented her recommendations to make voice chatbots sound more human. Recent data confirms the initial positive effects identified in the study have persisted after the implementation. With the voice chatbot industry predicted to grow to $2 billion by 2027, Xu’s research and intervention offer a template for other companies to follow.

Voice chatbots aren’t perfect for everything, says Xu. While bots excel at routine tasks that can be resolved in predictable ways, humans are likely essential for complex cases the machine has not yet encountered.

She also wonders how voice chatbots might shape human learning curves. Humans can program existing insights into bots. Will we be able to identify and implement new insights if the voice chatbots now conduct most of the day-to-day work?

Despite these limitations and uncertainties, Xu says voice chatbots are here to stay as companies look for technology that can make them more efficient and competitive.

As the industry grows, companies that develop or use voice chatbots can lean on Xu’s research to align transparency and product performance, knowing that the harmful effects of disclosure in isolation can be overcome through additional interventions. And by illustrating the benefits of humanizing voice chatbots, Xu shows that advanced technology might only realize its full potential by sounding less technically perfect and a little more human.