Gen AI in Contact Center – Now and Into the Future

Published May 2, 2024

By Farid Shenassa, USAN’s CTO 

Generative AI (GenAI) and Large Language Models (LLMs) have been all the rage over the last year since the introduction of ChatGPT. It is a potent technology, and many consider it a silver bullet for all kinds of problems.

Here, we look at its general capabilities and consider how it can be used within the enterprise contact center. In doing so, we will consider what is possible now, within a short/medium time horizon, and what will be possible as technology matures.   

AI Definitions 

The terms AI, ML, Neural Nets, LLMs, and AGI are often used interchangeably. Think of Artificial Intelligence as the master category and Machine Learning as a subcategory. Let’s quickly review the others and how they all work together.

Neural Nets are specific implementations of Machine Learning that mimic how our brains work. GenAI models use machine learning models trained on a particular domain (e.g., text or graphics) to “generate” new information when provided an input (e.g., text-to-image generation, language translation, or predicting the next word in a sentence).

LLMs are specific implementations of Neural Nets that, having been trained on a large amount of text, can predict the next word given a sequence of initial words.

AGI is the general idea of creating an AI that can be equal to or better than humans in a wide range of capabilities.  

It is important to note that tools like ChatGPT and Claude are simply LLMs. However, they feel like magic because, given the vast amount of information they are trained on, they can generate remarkably helpful answers.

As engineers work with LLMs, they identify new ways to interact with these models, guide them, and format the input data to elicit the correct output. Though it may seem counterintuitive, the additional guidance we give the LLM, even telling it what persona to adopt or adding emotional cues, impacts the generated results. Prompt engineering is the emerging field of formatting this information to guide the LLM toward the right response in a useful format.
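As a minimal sketch of the idea (the persona, instructions, and function names here are illustrative assumptions, not any particular vendor’s API), a prompt for a contact-center assistant might be assembled like this:

```python
def build_agent_assist_prompt(persona: str, task: str, transcript: str) -> str:
    """Assemble a prompt that sets a persona, states the task, and
    supplies the live transcript as context for the model."""
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        "Answer concisely and say 'I don't know' rather than guessing.\n\n"
        f"Conversation so far:\n{transcript}\n"
    )

prompt = build_agent_assist_prompt(
    persona="a calm, helpful contact-center assistant",
    task="Suggest the next best response for the human agent.",
    transcript="Customer: My invoice looks wrong this month.",
)
```

The persona line and the instruction to admit uncertainty are exactly the kind of guidance that prompt engineering tunes; small wording changes here measurably change what the model generates.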

Current AI Capabilities in the Contact Center 

Given the current state of LLMs and concerns about accuracy and hallucinations, only specific use cases can be safely applied within the contact center. These involve generating valuable information for the agent, who can discern whether it is correct before using or presenting it. Amazon Q is an excellent example of some of these capabilities.

Information Retrieval 

Access to the enterprise knowledge base, documentation, job aids, policies, and other information makes the LLM useful to agents for getting quick answers to questions.

Information Recommendation 

By monitoring customer interactions with agents using live call transcripts, the LLM can identify information relevant to a customer’s questions and automatically provide it to the agent. 

Current Interaction Summarization 

As calls are transferred to an agent, LLMs can be used to take a conversation transcript (e.g., chat interactions, transcripts of voice interactions with a self-service system) and provide a summary of the interaction, with the caller’s intent and relevant information extracted. 

End of Call Summarization 

Similarly, at the end of a call, the LLM can use the entire call transcript to automatically create a call summary and note it in the CRM system, reducing the agent’s workload. 

Automatic Call Disposition 

Additionally, the LLM can use the same information to disposition the call based on the caller’s intent and the actual conversation during the call. 

Short-Term Roadmap for AI in the Contact Center

With some additional customizations, LLMs can be leveraged for other use cases. They include: 

Customer-Specific Context 

If we gather customer-specific information for each interaction and make it available to the LLM, it can generate data specific to each customer based on their account profile, activity history, and past interactions.     

The amount of data that fits in the model’s context window is limited. If large amounts of data are required for each customer, Retrieval-Augmented Generation (RAG) may need to be brought into the picture: the data is indexed ahead of time, kept up to date, and only the most relevant pieces are retrieved for each request. Note that the retrieved passages are inserted into the prompt, so they still count toward the context window; RAG limits how much must be sent, not the size of the window itself.
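To make the retrieve-then-pack step concrete, here is a deliberately simplified sketch (word-overlap scoring and a word budget stand in for the embeddings and real tokenizer a production RAG system would use):

```python
import re

def _words(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve_for_prompt(question: str, chunks: list[str], budget: int) -> list[str]:
    """Toy RAG retrieval: score knowledge-base chunks by word overlap
    with the question, keep only relevant ones, and pack them into a
    word budget so they fit the context window."""
    scored = [(len(_words(question) & _words(c)), c) for c in chunks]
    picked, used = [], 0
    for score, chunk in sorted(scored, reverse=True):
        size = len(chunk.split())
        if score > 0 and used + size <= budget:
            picked.append(chunk)
            used += size
    return picked

kb = [
    "Refunds are issued within 5 business days of approval.",
    "Our headquarters are in Atlanta, Georgia.",
    "Refund requests require the original order number.",
]
context = retrieve_for_prompt("How do I get a refund?", kb, budget=20)
```

The `budget` parameter is where the context-window limit shows up: whatever the retriever selects is what actually gets prepended to the prompt.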

Intent/Topic Extraction 

With some prompt engineering, we can use the LLM to monitor the interaction (self-service or between customer and agent) and automatically identify the topic of the discussion or the customer’s intent. This can then be used to automatically drive the interaction forward, guiding the agent through the required rules and steps to complete that task.
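A minimal sketch of that prompt-engineered classifier (the intent labels are made-up examples, and the model reply is canned rather than fetched from a real endpoint):

```python
INTENTS = ["billing_question", "cancel_service", "technical_support", "other"]

def build_intent_prompt(utterance: str) -> str:
    """Ask the model to pick exactly one intent label from a closed set."""
    return (
        f"Classify the customer's intent as one of: {', '.join(INTENTS)}.\n"
        "Reply with the label only.\n\n"
        f"Customer: {utterance}"
    )

def parse_intent(reply: str) -> str:
    """Constrain the model's reply to the allowed labels."""
    label = reply.strip().lower()
    return label if label in INTENTS else "other"

# Example with a canned model reply:
intent = parse_intent("billing_question")
```

Because downstream routing keys off the label, the parser falls back to `"other"` for anything the model says outside the closed set rather than passing free text along.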

Information Extraction/Form Filling 

Once the intent is identified, we can further guide the LLM using custom prompts, rules, knowledge bases, and other ingested data to determine the relevant information from the conversation and automatically fill out information for the agent in the CRM system. 
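One hedged sketch of the form-filling step (the CRM field names are hypothetical, and the model reply is again canned): ask for the fields as JSON with `null` for anything not mentioned, then map only known fields onto the form so the model can never invent or add columns.

```python
import json

CRM_FIELDS = ("customer_name", "order_number", "issue")

def build_extraction_prompt(transcript: str) -> str:
    """Ask the model to pull CRM form fields out of the conversation,
    using null for anything not mentioned."""
    return (
        "Extract these fields from the conversation as JSON, "
        f"using null when a field was not mentioned: {CRM_FIELDS}\n\n"
        f"Transcript:\n{transcript}"
    )

def fill_form(reply: str) -> dict:
    """Map the model's JSON onto the CRM form, dropping unknown keys
    and leaving unmentioned fields empty rather than guessed."""
    data = json.loads(reply)
    return {field: data.get(field) for field in CRM_FIELDS}

# Canned model reply: the order number was never mentioned on the call.
form = fill_form('{"customer_name": "Ada Lovelace", "order_number": null, '
                 '"issue": "damaged item", "unrelated": "ignored"}')
```

The agent then sees a partially filled form and supplies (or corrects) the gaps, which keeps a human check between the model and the system of record.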

Business Rules Implementation 

Using chain-of-thought prompting, the LLM becomes a powerful automatic rules engine that can identify and apply business rules out of the box, given the documentation it ingests from the enterprise.

Dynamically Generated Guided Agent Desktop 

Using a combination of the above capabilities, one can create a programming interface that uses the LLM model to dynamically generate a guided agent desktop for each interaction based on the customer’s context, the current interaction, what has been provided so far, and what remains to complete the task.  

The guided desktop can also leverage auto-generated code to look up or update information in the CRM based on business rules identified by the LLM. This will dramatically reduce the workload for the agent, who can follow the lead of the LLM-generated guided desktop to provide the required steps for each use case, navigate CRM systems, and more. 

Long-Term Roadmap for AI in the Contact Center

Looking further, we can envision a time when we store ALL the customer’s information, account activity, and previous interactions across all channels. That intelligence can be fed into an LLM or used to create a custom-trained model for each customer. An entirely new customer experience is in our future! 

Personalized Virtual Agent 

Once sufficient guard rails are created to avoid hallucinations, we can provide a complete self-service LLM-based virtual agent to interact with users without human supervision. Further, by giving the LLM all the past interactions and context for each customer, we can create a personalized LLM that knows the customer’s preferences, personality, how they talk, and what information they need to know and when. In short, it can act like an individualized personal assistant. 

Persistent, Trusted Virtual Advisor 

Taking that idea further, we can query the LLM to proactively identify opportunities to engage customers with just-in-time, personalized information by updating customer activity and profile information in real time as they change. This turns the enterprise (by virtue of the LLM) into a trusted advisor that delivers relevant, timely information. When this information can be contextually saved, we can create a persistent way to engage with customers on their preferred channel. 

Three-Way Conversations

With the virtual advisor in place, when the LLM or the customer requires human assistance, the virtual agent can simply add a human to the interaction. This creates a 3-way conversation between the customer, virtual advisor, and human agent. The agent now acts as a bot supervisor to assist when needed. The human agent has access to prior conversations, including a real-time summary created by the LLM, and can help as needed. The virtual advisor monitors the conversation and can offer relevant information.    

Learning From Human Agents 

As the LLM takes on the primary role of self-service virtual agent, we can use the transcript and activity for all escalated interactions with the agent for further training. Whether the LLM itself decides it cannot service the customer or the customer asks for additional assistance, these human-assisted interactions can further train the LLM.  

As model training becomes more efficient and cost-effective, we can improve virtual agents overnight based on daily call transcripts. This approach, related to Reinforcement Learning from Human Feedback (RLHF), allows us to enhance the LLM, learn from its mistakes, and quickly converge toward being as good as the sum of all our best human agents.

Conclusion 

GenAI has tremendous potential for improving customer experience, streamlining contact center flows, reducing costs, and automating mundane tasks. Making these functions smarter allows contact center agents to handle only the most complex interactions and, over time, become supervisors and trainers for LLM-powered virtual agents. As GenAI rapidly improves, more advanced capabilities will become a reality within the contact center in the near future. 

Ready to get started with your AI-enabled contact center? Contact us to start a conversation.
