Use generative AI to increase agent productivity through automated call summarization


Your contact center serves as the vital link between your business and your customers. Every call to your contact center is an opportunity to learn more about your customers’ needs and how well you are meeting those needs.

Most contact centers require their agents to summarize their conversation after every call. Call summarization is a valuable tool that helps contact centers understand and gain insights from customer calls. Additionally, accurate call summaries enhance the customer journey by eliminating the need for customers to repeat information when transferred to another agent.

In this post, we explain how to use the power of generative AI to reduce the effort and improve the accuracy of creating call summaries and call dispositions. We also show how to get started quickly using the latest version of our open source solution, Live Call Analytics with Agent Assist.

Challenges with call summaries

As contact centers collect more speech data, the need for efficient call summarization has grown significantly. However, most summaries are empty or inaccurate because manually creating them is time-consuming, impacting agents’ key metrics like average handle time (AHT). Agents report that summarizing can take up to a third of the total call, so they skip it or fill in incomplete information. This hurts the customer experience: long holds frustrate customers while the agent types, and incomplete summaries mean asking customers to repeat information when transferred between agents.

The good news is that automating and solving the summarization challenge is now possible through generative AI.

Generative AI helps summarize customer calls accurately and efficiently

Generative AI is powered by very large machine learning (ML) models called foundation models (FMs) that are pretrained on vast amounts of data at scale. A subset of these FMs focused on natural language understanding are called large language models (LLMs) and are able to generate human-like, contextually relevant summaries. The best LLMs can process even complex, non-linear sentence structures with ease and determine various aspects, including topic, intent, next steps, outcomes, and more. Using LLMs to automate call summarization allows customer conversations to be summarized accurately and in a fraction of the time needed for manual summarization. This in turn enables contact centers to deliver a superior customer experience while reducing the documentation burden on their agents.

The following screenshot shows an example of the Live Call Analytics with Agent Assist call details page, which contains information about each call.

The following video shows an example of Live Call Analytics with Agent Assist summarizing an in-progress call, summarizing after the call ends, and generating a follow-up email.

Solution overview

The following diagram illustrates the solution workflow.

The first step to generating abstractive call summaries is transcribing the customer call. Having accurate, ready-to-use transcripts is crucial for generating accurate and effective call summaries. Amazon Transcribe can help you create transcripts with high accuracy for your contact center calls. Amazon Transcribe is a feature-rich speech-to-text API with state-of-the-art speech recognition models that are fully managed and continuously trained. Customers such as New York Times, Slack, Zillow, Wix, and thousands of others use Amazon Transcribe to generate highly accurate transcripts to improve their business outcomes. A key differentiator for Amazon Transcribe is its ability to protect customer data by redacting sensitive information from the audio and text. Although protecting customer privacy and safety is important in general for contact centers, it’s even more important to mask sensitive information such as bank account information and Social Security numbers before generating automated call summaries, so they don’t get injected into the summaries.
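For illustration, the following minimal sketch uses the AWS SDK for Python (Boto3) to start a batch transcription job with PII redaction enabled. The job name and Amazon S3 locations are hypothetical, and LCA itself uses the Amazon Transcribe streaming API rather than a batch job.

import boto3

transcribe = boto3.client("transcribe")

# Hypothetical job name and S3 locations, for illustration only
transcribe.start_transcription_job(
    TranscriptionJobName="contact-center-call-1234",
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://amzn-s3-demo-bucket/calls/call-1234.wav"},
    OutputBucketName="amzn-s3-demo-bucket",
    # Redact PII (for example, bank account and Social Security numbers) in the transcript
    ContentRedaction={
        "RedactionType": "PII",
        "RedactionOutput": "redacted",
    },
)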

For customers who are already using Amazon Connect, our omnichannel cloud contact center, Contact Lens for Amazon Connect provides real-time transcription and analytics features natively. However, if you want to use generative AI with your existing contact center, we have developed solutions that do most of the heavy lifting associated with transcribing conversations in real time or post-call from your existing contact center, and generating automated call summaries using generative AI. Additionally, the solution detailed in this section allows you to integrate with your Customer Relationship Management (CRM) system to automatically update your CRM of choice with generated call summaries. In this example, we use our Live Call Analytics with Agent Assist (LCA) solution to generate real-time call transcriptions and call summaries with LLMs hosted on Amazon Bedrock. You can also write an AWS Lambda function and provide LCA the function’s Amazon Resource Name (ARN) in the AWS CloudFormation parameters, and use the LLM of your choice.
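The following is a minimal sketch of what such a custom summarization Lambda function could look like. The event and response fields shown here are assumptions for illustration only; refer to the LCA documentation for the actual interface, and replace the stubbed inference call with the LLM of your choice.

def lambda_handler(event, context):
    # Assumed event shape for illustration; consult the LCA documentation for the real contract
    transcript = event.get("transcript", "")

    prompt = (
        "Human: Summarize the contact center call in <transcript></transcript>.\n\n"
        f"<transcript>\n{transcript}\n</transcript>\n\nAssistant:"
    )

    # Stub: replace with an inference call to the LLM of your choice
    summary = my_llm_inference(prompt)

    # Assumed response shape for illustration
    return {"summary": summary}


def my_llm_inference(prompt: str) -> str:
    # Hypothetical placeholder implementation
    return "n/a"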

The following simplified LCA architecture illustrates call summarization with Amazon Bedrock.

LCA is provided as a CloudFormation template that deploys the preceding architecture and allows you to transcribe calls in real time. The workflow steps are as follows:

  1. Call audio can be streamed via SIPREC from your telephony system to Amazon Chime SDK Voice Connector, which buffers the audio in Amazon Kinesis Video Streams. LCA also supports other audio ingestion mechanisms, such as Genesys Cloud Audiohook.
  2. Amazon Chime SDK Call Analytics then streams the audio from Kinesis Video Streams to Amazon Transcribe, and writes the JSON output to Amazon Kinesis Data Streams.
  3. A Lambda function processes the transcription segments and persists them to an Amazon DynamoDB table.
  4. After the call ends, Amazon Chime SDK Voice Connector publishes an Amazon EventBridge notification that triggers a Lambda function that reads the persisted transcript from DynamoDB, generates an LLM prompt (more on this in the following section), and runs an LLM inference with Amazon Bedrock. The generated summary is persisted to DynamoDB and can be used by the agent in the LCA user interface. You can optionally provide a Lambda function ARN that will be run after the summary is generated to integrate with third-party CRM systems. A minimal sketch of this summarization step is shown after this list.
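To make step 4 concrete, the following minimal sketch shows the core of the summarization step, assuming the transcript has already been read from DynamoDB and that an Anthropic Claude model is used with the Amazon Bedrock InvokeModel API. The model ID, inference parameters, and function name are illustrative and not the exact code used by LCA.

import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def summarize_transcript(transcript: str) -> str:
    # Build the prompt from an LCA-style template (see the next section)
    prompt = (
        "\n\nHuman: Answer the questions below, defined in <question></question> "
        "based on the transcript defined in <transcript></transcript>. "
        "If you cannot answer the question, reply with 'n/a'.\n\n"
        "<question>\nWhat is a summary of the transcript?\n</question>\n\n"
        f"<transcript>\n{transcript}\n</transcript>\n\nAssistant:"
    )

    # Illustrative model ID and inference parameters
    response = bedrock.invoke_model(
        modelId="anthropic.claude-instant-v1",
        body=json.dumps({
            "prompt": prompt,
            "max_tokens_to_sample": 512,
            "temperature": 0,
        }),
    )
    return json.loads(response["body"].read())["completion"]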

LCA also provides the option to invoke the summarization Lambda function during the call, because at any time the transcript can be fetched and a prompt created, even while the call is in progress. This can be useful for cases when a call is transferred to another agent or escalated to a supervisor. Rather than putting the customer on hold and explaining the call, the new agent can quickly read an auto-generated summary, which can include what the current issue is and what the previous agent tried to do to resolve it.

Example call summarization prompt

You can run LLM inferences with prompt engineering to generate and improve your call summaries. You can modify the prompt templates to see what works best for the LLM you select. The following is an example of the default prompt for summarizing a transcript with LCA. We replace the {transcript} placeholder with the actual transcript of the call.

Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer the question, reply with 'n/a'. Use gender neutral pronouns. When you reply, only respond with the answer.

<question>
What is a summary of the transcript?
</question>

<transcript>
{transcript}
</transcript>

Assistant:

LCA runs the prompt and stores the generated summary. Besides summarization, you can direct the LLM to generate almost any text that is important for agent productivity. For example, you can choose from a set of topics that were covered during the call (agent disposition), generate a list of required follow-up tasks, or even write an email to the caller thanking them for the call.

The following screenshot is an example of agent follow-up email generation in the LCA user interface.

With a well-engineered prompt, some LLMs have the ability to generate all of this information in a single inference as well, reducing inference cost and processing time. The agent can then use the generated response within a few seconds of ending the call for their after-contact work. You can also integrate the generated response automatically into your CRM system.
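For example, a combined prompt in the same style as the default template might look like the following. The wording here is illustrative and not the exact prompt shipped with LCA.

Human: Answer the questions below, defined in <question></question> based on the transcript defined in <transcript></transcript>. If you cannot answer a question, reply with 'n/a'. Use gender neutral pronouns. Answer each question separately, numbering each answer to match the question.

<question>
1. What is a summary of the transcript?
2. What was the topic of the call?
3. What follow-up tasks does the agent need to complete?
4. Write an email to the caller thanking them for the call.
</question>

<transcript>
{transcript}
</transcript>

Assistant: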

The following screenshot shows an example summary in the LCA user interface.

It’s also possible to generate a summary while the call is still ongoing (see the following screenshot), which can be especially helpful for long customer calls.

Prior to generative AI, agents were required to pay attention while also taking notes and performing other tasks as needed. By automatically transcribing the call and using LLMs to automatically create summaries, we can lower the mental burden on the agent, so they can focus on delivering a superior customer experience. This also leads to more accurate after-call work, because the transcription is an accurate representation of what happened during the call, not just what the agent took notes on or remembered.

Summary

The sample LCA application is provided as open source. Use it as a starting point for your own solution, and help us make it better by contributing back fixes and features via GitHub pull requests. For information about deploying LCA, refer to Live call analytics and agent assist for your contact center with Amazon language AI services. Browse to the LCA GitHub repository to explore the code, sign up to be notified of new releases, and check out the README for the latest documentation updates. For customers who are already on Amazon Connect, you can learn more about generative AI with Amazon Connect by referring to How contact center leaders can prepare for generative AI.


About the authors

Christopher Lott is a Senior Solutions Architect in the AWS AI Language Services team. He has 20 years of enterprise software development experience. Chris lives in Sacramento, California and enjoys gardening, aerospace, and traveling the world.

Smriti Ranjan is a Principal Product Manager in the AWS AI/ML team focusing on language and search services. Prior to joining AWS, she worked at Amazon Devices and other technology startups leading product and growth functions. Smriti lives in Boston, MA and enjoys hiking, attending concerts and traveling the world.
