Automate customer support with Amazon Bedrock, LangGraph, and Mistral models


AI agents are transforming the landscape of customer support by bridging the gap between large language models (LLMs) and real-world applications. These intelligent, autonomous systems are poised to revolutionize customer service across industries, ushering in a new era of human-AI collaboration and problem-solving. By harnessing the power of LLMs and integrating them with specialized tools and APIs, agents can handle complex, multistep customer support tasks that were previously beyond the reach of traditional AI systems. As we look to the future, AI agents will play a crucial role in the following areas:

  • Enhanced decision-making – Providing deeper, context-aware insights to improve customer support outcomes
  • Workflow automation – Streamlining customer service processes, from initial contact to resolution, across various channels
  • Human-AI interaction – Enabling more natural and intuitive interactions between customers and AI systems
  • Innovation and knowledge integration – Generating new solutions by combining diverse data sources and specialized knowledge to address customer queries more effectively
  • Ethical AI practices – Helping provide more transparent and explainable AI systems to address customer concerns and build trust

Building and deploying AI agent systems for customer support is a step toward unlocking the full potential of generative AI in this domain. As these systems evolve, they will transform customer service, expand possibilities, and open new doors for AI in enhancing customer experiences.

In this post, we demonstrate how to use Amazon Bedrock and LangGraph to build a personalized customer support experience for an ecommerce retailer. By integrating the Mistral Large 2 and Pixtral Large models, we guide you through automating key customer support workflows such as ticket categorization, order details extraction, damage assessment, and generating contextual responses. These principles are applicable across various industries, but we use the ecommerce domain as our primary example to showcase the end-to-end implementation and best practices. This post provides a comprehensive technical walkthrough to help you enhance your customer service capabilities and explore the latest advancements in LLMs and multimodal AI.

LangGraph is a powerful framework built on top of LangChain that enables the creation of cyclical, stateful graphs for complex AI agent workflows. It uses a directed graph structure where nodes represent individual processing steps (like calling an LLM or using a tool), edges define transitions between steps, and state is maintained and passed between nodes during execution. This architecture is particularly valuable for customer support automation workflows. LangGraph's advantages include built-in visualization, logging (traces), human-in-the-loop capabilities, and the ability to organize complex workflows in a more maintainable way than traditional Python code. This post provides details on how to do the following:

  • Use Amazon Bedrock and LangGraph to build intelligent, context-aware customer support workflows
  • Integrate data from a helpdesk tool, like JIRA, into the LangChain workflow
  • Use LLMs and vision language models (VLMs) in the workflow to perform context-specific tasks
  • Extract information from images to aid in decision-making
  • Compare images to assess product damage claims
  • Generate responses for the customer support tickets
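
To make the node/edge/state idea concrete, here is a dependency-free toy sketch of a stateful, directed workflow. This is only an illustration of the concept, not the LangGraph API; the actual LangGraph code appears later in this post.

```python
# Toy illustration of a stateful, directed workflow: each node receives
# the shared state dict, transforms it, and passes it along an edge.

def categorize(state):
    # stand-in for an LLM call that classifies the ticket
    state["category"] = "Refunds" if "refund" in state["summary"].lower() else "Other"
    return state

def respond(state):
    state["response"] = f"Ticket classified as {state['category']}."
    return state

NODES = {"categorize": categorize, "respond": respond}
EDGES = {"categorize": "respond", "respond": None}  # a simple linear flow

def run(state, start="categorize"):
    node = start
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

print(run({"summary": "Where is my refund?"})["response"])
# -> Ticket classified as Refunds.
```

LangGraph adds what this toy lacks: persistence (checkpointing), conditional edges, tool nodes, and visualization.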

Solution overview

In this solution, customers initiate support requests through email, which are automatically converted into new support tickets in Atlassian Jira Service Management. The customer support automation solution then takes over, identifying the intent behind each query, categorizing the tickets, and assigning them to a bot user for further processing. The solution uses LangGraph to orchestrate a workflow involving AI agents that extract key identifiers such as transaction IDs and order numbers from the support ticket. It analyzes the query and uses these identifiers to call relevant tools, extracting additional information from the database to generate a comprehensive and context-aware response. After the response is prepared, it's updated in Jira for human support agents to review before sending the response back to the customer. This process is illustrated in the following figure. The solution is capable of extracting information not only from the ticket body and title but also from attached images like screenshots and from external databases.

Solution Architecture

The solution uses two foundation models (FMs) from Amazon Bedrock, each selected based on its specific capabilities and the complexity of the tasks involved. For instance, the Pixtral model is used for vision-related tasks like image comparison and ID extraction, whereas the Mistral Large 2 model handles a variety of tasks like ticket categorization, response generation, and tool calling. Additionally, the solution includes fraud detection and prevention capabilities. It can identify fraudulent product returns by comparing the stock product image with the returned product image to verify if they match and assess whether the returned product is genuinely damaged. This integration of advanced AI models with automation tools enhances the efficiency and reliability of the customer support process, facilitating timely resolutions and protection against fraudulent activities. LangGraph provides a framework for orchestrating the information flow between agents, featuring built-in state management and checkpointing to facilitate seamless process continuity. This functionality allows the inclusion of initial ticket summaries and descriptions in the State object, with additional information appended in subsequent steps of the workflow. By maintaining this evolving context, LangGraph enables LLMs to generate context-aware responses. See the following code:

# class to hold state information

class JiraAppState(MessagesState):
    key: str
    summary: str
    description: str
    attachments: list
    category: str
    response: str
    transaction_id: str
    order_no: str
    usage: list

The framework integrates effortlessly with Amazon Bedrock and LLMs, supporting task-specific diversification by using cost-effective models for simpler tasks while reducing the risks of exceeding model quotas. Additionally, LangGraph offers conditional routing for dynamic workflow adjustments based on intermediate results, and its modular design facilitates the addition or removal of agents to extend system capabilities.
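
Conditional routing can be pictured as a plain function that inspects the accumulated state and returns the name of the next node, which is the kind of callable that LangGraph's `add_conditional_edges` accepts. The category and node names below are illustrative, not the actual solution code:

```python
# Sketch of a routing function for conditional edges: it looks at the
# ticket category in the state and returns the name of the next node.

def route_ticket(state: dict) -> str:
    category = state.get("category", "")
    if category == "Transactions":
        return "Extract Transaction ID"
    if category in ("Deliveries", "Refunds"):
        return "Extract Order Number"
    return "Generate Response"  # everything else goes straight to a reply

print(route_ticket({"category": "Refunds"}))  # -> Extract Order Number
```

Because routing decisions live in one small function, adding a new ticket category means adding one branch rather than restructuring the graph.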

Responsible AI

It's crucial for customer support automation applications to validate inputs and make sure LLM outputs are secure and responsible. Amazon Bedrock Guardrails can significantly enhance customer support automation applications by providing configurable safeguards that monitor and filter both user inputs and AI-generated responses, making sure interactions remain safe, relevant, and aligned with organizational policies. By using features such as content filters, which detect and block harmful categories like hate speech, insults, sexual content, and violence, as well as denied topics to help prevent discussions on sensitive or restricted subjects (for example, legal or medical advice), customer support applications can avoid generating or amplifying inappropriate or harmful information. Additionally, guardrails can help redact personally identifiable information (PII) from conversation transcripts, protecting user privacy and fostering trust. These measures not only reduce the risk of reputational harm and regulatory violations but also create a more positive and secure experience for customers, allowing support teams to focus on resolving issues efficiently while maintaining high standards of safety and responsibility.

The following diagram illustrates this architecture.

Guardrails

Observability

Along with responsible AI, observability is essential for customer support applications to provide deep, real-time visibility into model performance, usage patterns, and operational health, enabling teams to proactively detect and resolve issues. With comprehensive observability, you can monitor key metrics such as latency and token consumption, and track and analyze input prompts and outputs for quality and compliance. This level of insight helps identify and mitigate risks like hallucinations, prompt injections, toxic language, and PII leakage, helping make sure that customer interactions remain safe, reliable, and aligned with regulatory requirements.

Prerequisites

In this post, we use Atlassian Jira Service Management as an example. You can use the same general approach to integrate with other service management tools that provide APIs for programmatic access. The configuration required in Jira includes:

  • A Jira service management project with an API token to enable programmatic access
  • The following custom fields:
    • Name: Category, Type: Select List (multiple choices)
    • Name: Response, Type: Text Field (multi-line)
  • A bot user to assign tickets to

The following code shows a sample Jira configuration:

JIRA_API_TOKEN = "<JIRA_API_TOKEN>"
JIRA_USERNAME = "<JIRA_USERNAME>"
JIRA_INSTANCE_URL = "https://<YOUR_JIRA_INSTANCE_NAME>.atlassian.net/"
JIRA_PROJECT_NAME = "<JIRA_PROJECT_NAME>"
JIRA_PROJECT_KEY = "<JIRA_PROJECT_KEY>"
JIRA_BOT_USER_ID = '<JIRA_BOT_USER_ID>'
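
In practice, these values belong in a .env file or environment variables rather than source code. The following is a small sketch of loading and validating them; the helper function is an assumption for illustration, not part of the solution code:

```python
import os

# Hypothetical helper: read the Jira settings from the environment and
# fail fast if any are missing. Names mirror the sample configuration.

REQUIRED = ["JIRA_API_TOKEN", "JIRA_USERNAME", "JIRA_INSTANCE_URL",
            "JIRA_PROJECT_NAME", "JIRA_PROJECT_KEY", "JIRA_BOT_USER_ID"]

def load_jira_config(env=None):
    env = os.environ if env is None else env
    missing = [key for key in REQUIRED if not env.get(key)]
    if missing:
        raise RuntimeError("Missing Jira settings: " + ", ".join(missing))
    return {key: env[key] for key in REQUIRED}
```

Failing fast on missing settings surfaces configuration errors at startup instead of mid-workflow.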

In addition to Jira, the following services and Python packages are required:

  • A valid AWS account.
  • An AWS Identity and Access Management (IAM) role in the account that has sufficient permissions to create the necessary resources.
  • Access to the following models hosted on Amazon Bedrock:
    • Mistral Large 2 (model ID: mistral.mistral-large-2407-v1:0).
    • Pixtral Large (model ID: us.mistral.pixtral-large-2502-v1:0). The Pixtral Large model is available in Amazon Bedrock under cross-Region inference profiles.
  • A LangGraph application up and running locally. For instructions, see Quickstart: Launch Local LangGraph Server.

For this post, we use the us-west-2 AWS Region. For details on available Regions, see Amazon Bedrock endpoints and quotas.

The source code of this solution is available in the GitHub repository. This is example code; you should conduct your own due diligence and adhere to the principle of least privilege.

Implementation with LangGraph

At the core of the customer support automation is a set of specialized tools and functions designed to collect, analyze, and integrate data from service management systems and a SQLite database. These tools serve as the foundation of our system, empowering it to deliver context-aware responses. In this section, we delve into the essential components that power our system.

BedrockClient class

The BedrockClient class is implemented in the cs_bedrock.py file. It provides a wrapper for interacting with Amazon Bedrock services, specifically for managing language models and content safety guardrails in customer support applications. It simplifies the process of initializing language models with appropriate configurations and managing content safety guardrails. This class is used by LangChain and LangGraph to invoke LLMs on Amazon Bedrock.
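
As a rough sketch of what such a wrapper can look like (the helper names here are assumptions, not the cs_bedrock.py source), the following separates a pure request builder for the Bedrock Converse API from the actual service call, so the request shape can be inspected without AWS credentials:

```python
# Hypothetical BedrockClient-style helpers built on the Converse API.

MISTRAL_LARGE = "mistral.mistral-large-2407-v1:0"
PIXTRAL_LARGE = "us.mistral.pixtral-large-2502-v1:0"

def build_request(prompt, model_id=MISTRAL_LARGE, max_tokens=1024, temperature=0.1):
    # Pure function: assembles the Converse API payload without any AWS call
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": temperature},
    }

def invoke(prompt, region="us-west-2", **kwargs):
    import boto3  # deferred import so the builder can be used without boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_request(prompt, **kwargs))
    return response["output"]["message"]["content"][0]["text"]
```

Keeping payload construction pure makes it easy to unit test model selection and inference settings in isolation.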

This class also provides methods to create guardrails for responsible AI implementation. The following Amazon Bedrock Guardrails policy filters sexual content, violence, hate, insults, misconduct, and prompt attacks, and helps prevent models from generating stock and investment advice, profanity, and hateful, violent, or sexual content. Additionally, it helps protect against exposing vulnerabilities in models by mitigating prompt attacks.

# guardrails policy

contentPolicyConfig={
    'filtersConfig': [
        {
            'type': 'SEXUAL',
            'inputStrength': 'MEDIUM',
            'outputStrength': 'MEDIUM'
        },
        {
            'type': 'VIOLENCE',
            'inputStrength': 'MEDIUM',
            'outputStrength': 'MEDIUM'
        },
        {
            'type': 'HATE',
            'inputStrength': 'MEDIUM',
            'outputStrength': 'MEDIUM'
        },
        {
            'type': 'INSULTS',
            'inputStrength': 'MEDIUM',
            'outputStrength': 'MEDIUM'
        },
        {
            'type': 'MISCONDUCT',
            'inputStrength': 'MEDIUM',
            'outputStrength': 'MEDIUM'
        },
        {
            'type': 'PROMPT_ATTACK',
            'inputStrength': 'LOW',
            'outputStrength': 'NONE'
        }
    ]
},
wordPolicyConfig={
    'wordsConfig': [
        {'text': 'stock and investment advice'}
    ],
    'managedWordListsConfig': [
        {'type': 'PROFANITY'}
    ]
},
contextualGroundingPolicyConfig={
    'filtersConfig': [
        {
            'type': 'GROUNDING',
            'threshold': 0.65
        },
        {
            'type': 'RELEVANCE',
            'threshold': 0.75
        }
    ]
}
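
The following is a hedged sketch of how a policy like this might be wrapped in a boto3 `create_guardrail` call; the function names are assumptions for illustration, not the BedrockClient source:

```python
def build_guardrail_kwargs(name="customer-support-guardrail"):
    # Reproduces the policy above as create_guardrail keyword arguments
    content_filters = [
        {"type": t, "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"}
        for t in ("SEXUAL", "VIOLENCE", "HATE", "INSULTS", "MISCONDUCT")
    ]
    content_filters.append(
        {"type": "PROMPT_ATTACK", "inputStrength": "LOW", "outputStrength": "NONE"})
    return {
        "name": name,
        "blockedInputMessaging": "Sorry, this request cannot be processed.",
        "blockedOutputsMessaging": "Sorry, a response cannot be provided.",
        "contentPolicyConfig": {"filtersConfig": content_filters},
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "stock and investment advice"}],
            "managedWordListsConfig": [{"type": "PROFANITY"}],
        },
        "contextualGroundingPolicyConfig": {"filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.65},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]},
    }

def create_guardrail(region="us-west-2"):
    import boto3  # deferred so the pure builder above can be tested offline
    client = boto3.client("bedrock", region_name=region)
    return client.create_guardrail(**build_guardrail_kwargs())["guardrailId"]
```

Note that PROMPT_ATTACK filters only apply to input, so the output strength is set to NONE.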

Database class

The Database class is defined in the cs_db.py file. This class is designed to facilitate interactions with a SQLite database. It's responsible for creating a local SQLite database and importing synthetic data related to customers, orders, refunds, and transactions. By doing so, it makes sure that the necessary data is readily available for various operations. Additionally, the class includes convenient wrapper functions that simplify the process of querying the database.
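
The pattern can be sketched as follows; the schema and helper names are illustrative, not the actual cs_db.py implementation:

```python
import sqlite3

class Database:
    """Local SQLite store seeded with synthetic data, plus query wrappers."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.row_factory = sqlite3.Row  # rows are addressable by column name
        self._load_synthetic_data()

    def _load_synthetic_data(self):
        self.conn.execute(
            "CREATE TABLE orders (order_no TEXT PRIMARY KEY, status TEXT)")
        self.conn.executemany(
            "INSERT INTO orders VALUES (?, ?)",
            [("ORD-1001", "Delivered"), ("ORD-1002", "Refund issued")])
        self.conn.commit()

    def query(self, sql, params=()):
        return [dict(row) for row in self.conn.execute(sql, params)]

    def get_order(self, order_no):
        rows = self.query("SELECT * FROM orders WHERE order_no = ?", (order_no,))
        return rows[0] if rows else None

db = Database()
print(db.get_order("ORD-1002"))
# -> {'order_no': 'ORD-1002', 'status': 'Refund issued'}
```

Parameterized queries (the `?` placeholders) keep the wrappers safe against SQL injection from ticket-derived identifiers.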

JiraSM class

The JiraSM class is implemented in the cs_jira_sm.py file. It serves as an interface for interacting with Jira Service Management. It establishes a connection to Jira by using the API token, user name, and instance URL, all of which are configured in the .env file. This setup provides secure and flexible access to the Jira instance. The class is designed to handle various ticket operations, including reading tickets and assigning them to a preconfigured bot user. Additionally, it supports downloading attachments from tickets and updating custom fields as needed.
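
A minimal, standard-library-only sketch of the connection pattern follows (Jira Cloud REST API v3; the helper names are illustrative, not the cs_jira_sm.py implementation):

```python
import base64
import json
import urllib.request

def auth_header(username, api_token):
    # Jira Cloud uses HTTP Basic auth with "email:api_token"
    token = base64.b64encode(f"{username}:{api_token}".encode()).decode()
    return "Basic " + token

def get_issue(instance_url, key, username, api_token):
    # Fetch a single issue via the Jira Cloud REST API v3
    url = f"{instance_url.rstrip('/')}/rest/api/3/issue/{key}"
    request = urllib.request.Request(url, headers={
        "Authorization": auth_header(username, api_token),
        "Accept": "application/json",
    })
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Libraries such as atlassian-python-api wrap the same endpoints; the raw form above shows what any such client sends on the wire.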

CustomerSupport class

The CustomerSupport class is implemented in the cs_cust_support_flow.py file. This class encapsulates the customer support processing logic by using LangGraph and Amazon Bedrock. Using LangGraph nodes and tools, this class orchestrates the customer support workflow. The workflow initially determines the category of the ticket by analyzing its content and classifying it as related to transactions, deliveries, refunds, or other issues. It updates the support ticket with the detected category. Following this, the workflow extracts pertinent information such as transaction IDs or order numbers, which might involve analyzing both text and images, and queries the database for relevant details. The next step is response generation, which is context-aware and adheres to content safety guidelines while maintaining a professional tone. Finally, the workflow integrates with Jira, assigning categories, updating responses, and managing attachments as needed.

The LangGraph orchestration is implemented in the build_graph function, as illustrated in the following code. This function also generates a visual representation of the workflow using a Mermaid graph for better clarity and understanding. This setup supports an efficient and structured approach to handling customer support tasks.

def build_graph(self):
    """
    This function prepares LangGraph nodes, edges, and conditional edges, compiles the graph, and displays it
    """

    # create StateGraph object
    graph_builder = StateGraph(JiraAppState)

    # add nodes to the graph
    graph_builder.add_node("Determine Ticket Category", self.determine_ticket_category_tool)
    graph_builder.add_node("Assign Ticket Category in JIRA", self.assign_ticket_category_in_jira_tool)
    graph_builder.add_node("Extract Transaction ID", self.extract_transaction_id_tool)
    graph_builder.add_node("Extract Order Number", self.extract_order_number_tool)
    graph_builder.add_node("Find Transaction Details", self.find_transaction_details_tool)

    graph_builder.add_node("Find Order Details", self.find_order_details_tool)
    graph_builder.add_node("Generate Response", self.generate_response_tool)
    graph_builder.add_node("Update Response in JIRA", self.update_response_in_jira_tool)

    graph_builder.add_node("tools", ToolNode([StructuredTool.from_function(self.assess_damaged_delivery), StructuredTool.from_function(self.find_refund_status)]))

    # add edges to connect nodes
    graph_builder.add_edge(START, "Determine Ticket Category")
    graph_builder.add_edge("Determine Ticket Category", "Assign Ticket Category in JIRA")
    graph_builder.add_conditional_edges("Assign Ticket Category in JIRA", self.decide_ticket_flow_condition)
    graph_builder.add_edge("Extract Order Number", "Find Order Details")

    graph_builder.add_edge("Extract Transaction ID", "Find Transaction Details")
    graph_builder.add_conditional_edges("Find Order Details", self.order_query_decision, ["Generate Response", "tools"])
    graph_builder.add_edge("tools", "Generate Response")
    graph_builder.add_edge("Find Transaction Details", "Generate Response")

    graph_builder.add_edge("Generate Response", "Update Response in JIRA")
    graph_builder.add_edge("Update Response in JIRA", END)

    # compile the graph
    checkpoint = MemorySaver()
    app = graph_builder.compile(checkpointer=checkpoint)
    self.graph_app = app
    self.util.log_data(data="Workflow compiled successfully", ticket_id='NA')

    # visualize the graph
    display(Image(app.get_graph().draw_mermaid_png(draw_method=MermaidDrawMethod.API)))

    return app

LangGraph generates the following Mermaid diagram to visually represent the workflow.

Mermaid diagram

Utility class

The Utility class, implemented in the cs_util.py file, provides essential functions to support the customer support automation. It encompasses utilities for logging, file handling, usage metric tracking, and image processing operations. The class is designed as a central hub for various helper methods, streamlining common tasks across the application. By consolidating these operations, it promotes code reusability and maintainability within the system. Its functionality makes sure that the automation framework remains efficient and organized.

A key feature of this class is its comprehensive logging capabilities. It provides methods to log informational messages, errors, and significant events directly into the cs_logs.log file. Additionally, it tracks Amazon Bedrock LLM token usage and latency metrics, facilitating detailed performance monitoring. The class also logs the execution flow of application-generated prompts and LLM-generated responses, aiding in troubleshooting and debugging. These log files can be seamlessly integrated with standard log pusher agents, allowing for automated transfer to preferred log monitoring systems. This integration makes sure that system activity is fully monitored and quickly accessible for analysis.
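
The usage-tracking portion can be sketched as follows; the class and field names are illustrative, not the actual cs_util.py implementation:

```python
from collections import defaultdict

class UsageTracker:
    """Accumulates per-invocation token counts and latency, summarized per model."""

    def __init__(self):
        self.records = []

    def track(self, model_id, input_tokens, output_tokens, latency_ms):
        self.records.append({"model": model_id, "input": input_tokens,
                             "output": output_tokens, "latency_ms": latency_ms})

    def summary(self):
        totals = defaultdict(lambda: {"calls": 0, "input": 0, "output": 0})
        for record in self.records:
            entry = totals[record["model"]]
            entry["calls"] += 1
            entry["input"] += record["input"]
            entry["output"] += record["output"]
        return dict(totals)

tracker = UsageTracker()
tracker.track("mistral.mistral-large-2407-v1:0", 385, 2, 653)
tracker.track("mistral.mistral-large-2407-v1:0", 452, 27, 884)
print(tracker.summary()["mistral.mistral-large-2407-v1:0"])
# -> {'calls': 2, 'input': 837, 'output': 29}
```

Per-model aggregation like this is what makes the usage log shown later in this post straightforward to produce.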

Run the agentic workflow

Now that the customer support workflow is defined, it can be executed for various ticket types. The following functions use the provided ticket key to fetch the corresponding Jira ticket and download available attachments. Additionally, they initialize the State object with details such as the ticket key, summary, description, attachment file path, and a system prompt for the LLM. This State object is used throughout the workflow execution.

def generate_response_for_ticket(ticket_id: str):
    
    llm, vision_llm, llm_with_guardrails = bedrock_client.init_llms(ticket_id=ticket_id)
    cust_support = CustomerSupport(llm=llm, vision_llm=vision_llm, llm_with_guardrails=llm_with_guardrails)
    app   = cust_support.build_graph()
    
    state = cust_support.get_jira_ticket(key=ticket_id)
    state = app.invoke(state, thread)
    
    util.log_usage(state['usage'], ticket_id=ticket_id)
    util.log_execution_flow(state["messages"], ticket_id=ticket_id)
    

The following code snippet invokes the workflow for the Jira ticket with key AS-6:

# initialize classes and create Bedrock guardrails
bedrock_client = BedrockClient()
util = Utility()
guardrail_id = bedrock_client.create_guardrail()

# process a JIRA ticket
generate_response_for_ticket(ticket_id='AS-6')

The following screenshot shows the Jira ticket before processing. Notice that the Response and Category fields are empty, and the ticket is unassigned.

Support Ticket - Initial

The following screenshot shows the Jira ticket after processing. The Category field is updated to Refunds and the Response field is populated with the AI-generated content.

Support Ticket - updated

The solution logs LLM usage information as follows:

Model                               Input Tokens  Output Tokens  Latency
mistral.mistral-large-2407-v1:0              385              2      653
mistral.mistral-large-2407-v1:0              452             27      884
mistral.mistral-large-2407-v1:0             1039             36     1197
us.mistral.pixtral-large-2502-v1:0          4632            425     5952
mistral.mistral-large-2407-v1:0             1770            144     4556

Clean up

Delete any IAM roles and policies created specifically for this post. Delete the local copy of this post's code.

If you no longer need access to an Amazon Bedrock FM, you can remove access to it. For instructions, see Add or remove access to Amazon Bedrock foundation models.

Delete the temporary files and guardrails used in this post with the following code:

shutil.rmtree(util.get_temp_path())
bedrock_client.delete_guardrail()

Conclusion

In this post, we developed an AI-driven customer support solution using Amazon Bedrock, LangGraph, and Mistral models. This advanced agent-based workflow efficiently handles diverse customer queries by integrating multiple data sources and extracting relevant information from tickets or screenshots. It also evaluates damage claims to mitigate fraudulent returns. The solution is designed with flexibility, allowing the addition of new scenarios and data sources as business needs evolve. With this multi-agent approach, you can build robust, scalable, and intelligent systems that redefine the capabilities of generative AI in customer support.

Want to explore further? Check out the GitHub repo. There, you can observe the code in action and experiment with the solution yourself. The repository includes step-by-step instructions for setting up and running the multi-agent system, along with code for interacting with data sources and agents, routing data, and visualizing workflows.


About the authors

Deepesh Dhapola is a Senior Solutions Architect at AWS India, specializing in helping financial services and fintech clients optimize and scale their applications on the AWS Cloud. With a strong focus on trending AI technologies, including generative AI, AI agents, and the Model Context Protocol (MCP), Deepesh uses his expertise in machine learning to design innovative, scalable, and secure solutions. Passionate about the transformative potential of AI, he actively explores cutting-edge advancements to drive efficiency and innovation for AWS customers. Outside of work, Deepesh enjoys spending quality time with his family and experimenting with diverse culinary creations.
