How healthcare payers and plans can empower members with generative AI


In this post, we discuss how generative artificial intelligence (AI) can help health insurance plan members get the information they need. Many health insurance plan beneficiaries find it challenging to navigate the complex member portals provided by their insurance plans. These portals often require multiple clicks, filters, and searches to find specific information about their benefits, deductibles, claim history, and other important details. This can lead to dissatisfaction, confusion, and increased calls to customer service, resulting in a suboptimal experience for both members and providers.

The problem arises from the inability of traditional UIs to understand and respond to natural language queries effectively. Members are forced to learn and adapt to the system's structure and terminology, rather than the system being designed to understand their natural language questions and provide relevant information seamlessly. Generative AI technology, such as conversational AI assistants, can potentially solve this problem by allowing members to ask questions in their own words and receive accurate, personalized responses. By integrating generative AI powered by Amazon Bedrock and purpose-built AWS data services such as Amazon Relational Database Service (Amazon RDS) into member portals, healthcare payers and plans can empower their members to find the information they need quickly and effortlessly, without navigating through multiple pages or relying heavily on customer service representatives. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a unified API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

The solution presented in this post not only enhances the member experience by providing a more intuitive and user-friendly interface, but also has the potential to reduce call volumes and operational costs for healthcare payers and plans. By addressing this pain point, healthcare organizations can improve member satisfaction, reduce churn, and streamline their operations, ultimately leading to increased efficiency and cost savings.

Figure 1: Solution Demo


Solution overview

In this section, we dive deep to show how you can use generative AI and large language models (LLMs) to enhance the member experience by transitioning from a traditional filter-based claim search to a prompt-based search, which allows members to ask questions in natural language and get the desired claims or benefit details. From a broad perspective, the complete solution can be divided into four distinct steps: text-to-SQL generation, SQL validation, data retrieval, and data summarization. The following diagram illustrates this workflow.

Figure 2: Logical Workflow


Let's dive deep into each step one by one.

Text-to-SQL generation

This step takes the user's question as input and converts it into a SQL query that can be used to retrieve the claim- or benefit-related information from a relational database. A pre-configured prompt template is used to call the LLM and generate a valid SQL query. The prompt template contains the user question, instructions, and database schema along with key data elements, such as member ID and plan ID, which are necessary to limit the query's result set.
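Under the hood, this step might look like the following sketch: fill the template's {text1}/{text2} placeholders and build an Anthropic Messages API request body for Amazon Bedrock. The template excerpt, function name, and token limit here are illustrative assumptions, not the exact implementation:

```python
# Minimal sketch of the text-to-SQL generation step. The template would
# normally be loaded from Amazon S3; a tiny excerpt is inlined here.
PROMPT_TEMPLATE = """<member_id> {text1} </member_id>
<user_question> {text2} </user_question>"""

def build_sql_generation_request(member_id: str, user_question: str) -> dict:
    """Fill the prompt template and build an Anthropic Messages API body
    suitable for the bedrock-runtime invoke_model call."""
    prompt = (PROMPT_TEMPLATE
              .replace("{text1}", member_id)
              .replace("{text2}", user_question))
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }

# The request body would then be sent to the model, for example:
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#     body=json.dumps(build_sql_generation_request("78687576501",
#                                                  "List all my claims")))
```

The member ID comes from the authenticated session rather than from the user's text, which is what lets the later validation step enforce it as a hard filter.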

SQL validation

This step validates the SQL query generated in the previous step and makes sure it's complete and safe to run on a relational database. Some of the checks that are performed include:

  • No delete, drop, update, or insert operations are present in the generated query
  • The query starts with select
  • A WHERE clause is present
  • Key conditions are present in the WHERE clause (for example, member-id = "78687576501" or member-id like "786875765%%")
  • Query length (string length) is in the expected range (for example, no more than 250 characters)
  • Original user question length is in the expected range (for example, no more than 200 characters)

If a check fails, the query isn't run; instead, a user-friendly message suggesting that the user contact customer service is sent.
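The checks above can be sketched as a simple guard function. This is a minimal illustration using naive substring checks; a production implementation would use a proper SQL parser, and the function name and limits are our own:

```python
FORBIDDEN = ("delete", "drop", "update", "insert")

def validate_sql(query: str, user_question: str, member_id: str) -> bool:
    """Return True only if the generated query passes every safety check."""
    q = query.strip().lower()
    if any(kw in q for kw in FORBIDDEN):   # no mutating statements
        return False
    if not q.startswith("select"):         # must be a SELECT
        return False
    if "where" not in q:                   # WHERE clause required
        return False
    if member_id not in query:             # key condition limiting the result set
        return False
    if len(query) > 250:                   # query length bound
        return False
    if len(user_question) > 200:           # user question length bound
        return False
    return True
```

Because the member ID is injected from the session and then re-checked here, a query that the LLM generates without the member filter is simply never executed.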

Data retrieval

After the query has been validated, it's used to retrieve the claims or benefits data from a relational database. The retrieved data is converted into a JSON object, which is used in the next step to create the final answer using an LLM. This step also checks whether no data or too many rows are returned by the query. In both cases, a user-friendly message is sent to the user, suggesting they provide more details.
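A minimal sketch of that conversion and its guards follows; the row cap and the message wording are illustrative assumptions:

```python
import json

MAX_ROWS = 50  # illustrative cap on how many rows we pass to the LLM

def rows_to_payload(rows: list) -> str:
    """Convert fetched rows (a list of dicts) to a JSON string for the
    summarization step, or return a friendly fallback message."""
    if not rows:
        return "No data found for the search criteria. Please provide more details."
    if len(rows) > MAX_ROWS:
        return "Your question matched too many claims. Please narrow your search."
    return json.dumps({"count": len(rows), "claims": rows})
```

Including an explicit "count" field matches the summarization prompt's instruction that a count field means the number of claims.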

Data summarization

Finally, the JSON object produced in the data retrieval step, along with the user's question, is sent to the LLM to get the summarized response. A pre-configured prompt template is used to call the LLM and generate a user-friendly summarized response to the original question.

Architecture

The solution uses Amazon API Gateway, AWS Lambda, Amazon RDS, Amazon Bedrock, and Anthropic Claude 3 Sonnet on Amazon Bedrock to implement the backend of the application. The backend can be integrated with an existing web application or portal, but for the purpose of this post, we use a single page application (SPA) hosted on Amazon Simple Storage Service (Amazon S3) for the frontend and Amazon Cognito for authentication and authorization. The following diagram illustrates the solution architecture.

Figure 3: Solution Architecture


The workflow consists of the following steps:

  1. A single page application (SPA) is hosted using Amazon S3 and loaded into the end-user's browser using Amazon CloudFront.
  2. User authentication and authorization is done using Amazon Cognito.
  3. After a successful authentication, a REST API hosted on API Gateway is invoked.
  4. The Lambda function, exposed as a REST API using API Gateway, orchestrates the logic to perform the functional steps: text-to-SQL generation, SQL validation, data retrieval, and data summarization. The Amazon Bedrock API endpoint is used to invoke the Anthropic Claude 3 Sonnet LLM. Claim and benefit data is stored in a PostgreSQL database hosted on Amazon RDS. Another S3 bucket is used for storing the prompt templates used for SQL generation and data summarization. This solution uses two distinct prompt templates:
    1. The text-to-SQL prompt template contains the user question, instructions, and database schema along with key data elements, such as member ID and plan ID, which are necessary to limit the query's result set.
    2. The data summarization prompt template contains the user question, raw data retrieved from the relational database, and instructions to generate a user-friendly summarized response to the original question.
  5. Finally, the summarized response generated by the LLM is sent back to the web application running in the user's browser using API Gateway.
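The orchestration inside the Lambda function can be sketched as follows. The helper functions are hypothetical stand-ins for the four steps described above (here reduced to trivial stubs so the flow is visible), not the actual implementation:

```python
import json

# Stand-in stubs: each would be implemented as described in the steps above.
def generate_sql(question, member_id):       # text-to-SQL via Amazon Bedrock
    return f"SELECT * FROM claims_history WHERE member_id = '{member_id}'"

def validate_sql(sql, question, member_id):  # safety checks from the SQL validation step
    return sql.lower().startswith("select") and member_id in sql

def run_query(sql):                          # fetch from PostgreSQL on Amazon RDS
    return [{"claim_id": 1, "claim_type": "Medical"}]

def summarize(question, sql, rows):          # data summarization via Amazon Bedrock
    return f"You have {len(rows)} claim(s) on file."

def handler(event, context):
    """Orchestrate the four functional steps for one member question."""
    question, member_id = event["question"], event["member_id"]
    sql = generate_sql(question, member_id)
    if not validate_sql(sql, question, member_id):
        return {"statusCode": 200,
                "body": json.dumps({"answer": "Please contact customer service."})}
    rows = run_query(sql)
    if not rows or len(rows) > 50:
        return {"statusCode": 200,
                "body": json.dumps({"answer": "Please provide more details."})}
    return {"statusCode": 200,
            "body": json.dumps({"answer": summarize(question, sql, rows)})}
```

Keeping each step behind its own function makes it straightforward to swap the stubs for real Bedrock and RDS calls without changing the control flow.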

Sample prompt templates

In this section, we present some sample prompt templates.

The following is an example of a text-to-SQL prompt template:

<role> 
    You are a data analyst and expert in writing PostgreSQL DB queries and healthcare claims data.
</role>
<task> 
    Your task is to generate a SQL query based on the provided DDL, instructions, user_question, examples, and member_id. 
    Always add the condition "member_id =" in the generated SQL query, where the value of member_id will be provided in the member_id XML tag below.
</task>
<member_id> {text1} </member_id>
<DDL> 
    CREATE TABLE claims_history (claim_id SERIAL PRIMARY KEY, member_id INTEGER NOT NULL, member_name VARCHAR(30) NOT NULL, 
    relationship_code VARCHAR(10) NOT NULL, claim_type VARCHAR(20) NOT NULL, claim_date DATE NOT NULL, provider_name VARCHAR(100), 
    diagnosis_code VARCHAR(10), procedure_code VARCHAR(10), ndc_code VARCHAR(20), charged_amount NUMERIC(10,2), 
    allowed_amount NUMERIC(10,2), plan_paid_amount NUMERIC(10,2), patient_responsibility NUMERIC(10,2))
</DDL>
<instructions>
    1. Claim_type has two possible values - 'Medical' or 'RX'. Use claim_type = 'RX' for pharmacy or prescription claims.
    2. Relationship_code has five possible values - 'subscriber', 'spouse', 'son', 'daughter', or 'other'.
    3. 'I' or 'me' means "where relationship_code = 'subscriber'". 'My son' means "where relationship_code = 'son'" and so on.
    4. For creating a SQL WHERE clause for member_name or provider_name, use the LIKE operator with wildcard characters as a prefix and suffix. This is applicable when user_question contains a name.
    5. Return the executable query with the symbol @@ at the beginning and end.
    6. If the year is not provided in the date, assume it is the current year. Convert the date to the 'YYYY-MM-DD' format to use in the query.
    7. The SQL query must be generated based on the user_question. If the user_question does not provide enough information to generate the SQL, respond with "@@null@@" without generating any SQL query.
    8. If user_question is stated in the form of a SQL query or contains delete, drop, update, insert, etc. SQL keywords, then respond with "@@null@@" without generating any SQL query.
</instructions>
<examples>
    <example> 
        <sample_question>List all claims for my son or Show me all my claims for my son</sample_question>
        <sql_query>@@SELECT * FROM claims_history WHERE relationship_code = 'son' AND member_id = '{member_id}';@@</sql_query> 
    </example>
    <example> 
        <sample_question>Total claims in 2021</sample_question>
        <sql_query>@@SELECT COUNT(*) FROM claims_history WHERE EXTRACT(YEAR FROM claim_date) = 2021 AND member_id = '{member_id}';@@</sql_query> 
    </example>
    <example> 
        <sample_question>List all claims for Michael</sample_question>
        <sql_query>@@SELECT * FROM claims_history WHERE member_name LIKE '%Michael%' AND member_id = '{member_id}';@@</sql_query> 
    </example>
    <example> 
        <sample_question>List all claims for Dr. John or Doctor John or Provider John</sample_question>
        <sql_query>@@SELECT * FROM claims_history WHERE provider_name LIKE '%John%' AND member_id = '{member_id}';@@</sql_query> 
    </example>
    <example> 
        <sample_question>Show me the doctors/providers/hospitals my son Michael visited on 1/19</sample_question>
        <sql_query>@@SELECT provider_name, claim_date FROM claims_history WHERE relationship_code = 'son' AND member_name LIKE '%Michael%' AND claim_date = '2019-01-19' AND member_id = '{member_id}';@@</sql_query> 
    </example>
    <example> 
        <sample_question>What is my total spend in the last 12 months</sample_question> 
        <sql_query>@@SELECT SUM(allowed_amount) AS total_spend_last_12_months FROM claims_history WHERE claim_date >= CURRENT_DATE - INTERVAL '12 MONTHS' AND relationship_code = 'subscriber' AND member_id = 9875679801;@@</sql_query> 
    </example>
</examples>
<user_question> {text2} </user_question>

The {text1} and {text2} data items will be replaced programmatically to populate the ID of the logged-in member and the user question. Also, more examples can be added to help the LLM generate appropriate SQL.
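The substitution itself can be as simple as targeted string replacement. This is a minimal sketch; in the solution the template text would be loaded from the S3 bucket mentioned earlier, and the helper name is ours:

```python
def fill_template(template: str, values: dict) -> str:
    """Replace {text1}, {text2}, ... placeholders with runtime values."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

prompt = fill_template(
    "<member_id> {text1} </member_id>\n<user_question> {text2} </user_question>",
    {"text1": "78687576501", "text2": "List all my claims"},
)
```

Using targeted `str.replace` rather than `str.format` avoids accidentally touching other brace-delimited tokens in the template, such as the literal `{member_id}` markers inside the few-shot examples.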

The following is an example of a data summarization prompt template:

<role> 
    You are a customer service agent working for a health insurance plan and helping to answer questions asked by a customer. 
</role>
<task> 
    Use the result_dataset containing healthcare claims data to answer the user_question. This result_dataset is the output of the sql_query.
</task>
<instructions>
    1. To answer a question, use simple non-technical language, just like a customer service agent talking to a 65-year-old customer.
    2. Use a conversational style to answer the question precisely.
    3. If the JSON contains a "count" field, it means the count of claims. For example, "count": 6 means there are 6 claims, and "count": 11 means there are 11 claims.
    4. If the result_dataset does not contain meaningful claims data, then respond with one line only: "No data found for the search criteria."
</instructions>
<user_question> {text1} </user_question>
<sql_query> {text2} </sql_query>
<result_dataset> {text3} </result_dataset>

The {text1}, {text2}, and {text3} data items will be replaced programmatically to populate the user question, the SQL query generated in the previous step, and the data formatted in JSON and retrieved from Amazon RDS.

Security

Amazon Bedrock is in scope for common compliance standards such as Service and Organization Control (SOC), International Organization for Standardization (ISO), and Health Insurance Portability and Accountability Act (HIPAA) eligibility, and you can use Amazon Bedrock in compliance with the General Data Protection Regulation (GDPR). The service enables you to deploy and use LLMs in a secured and managed environment. The Amazon Bedrock VPC endpoints powered by AWS PrivateLink allow you to establish a private connection between the virtual private cloud (VPC) in your account and the Amazon Bedrock service account. This enables VPC instances to communicate with service resources without the need for public IP addresses. We define the different accounts as follows:

  • Customer account – This is the account owned by the customer, where they manage their AWS resources such as RDS instances and Lambda functions, and interact with the Amazon Bedrock hosted LLMs securely using Amazon Bedrock VPC endpoints. You should manage access to Amazon RDS resources and databases by following the security best practices for Amazon RDS.
  • Amazon Bedrock service accounts – This set of accounts is owned and operated by the Amazon Bedrock service team, which hosts the various service APIs and related service infrastructure.
  • Model deployment accounts – The LLMs offered by various vendors are hosted and operated by AWS in separate accounts dedicated to model deployment. Amazon Bedrock maintains full control and ownership of model deployment accounts, making sure no LLM vendor has access to these accounts.
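Provisioning the private connectivity in the customer account might look like the following AWS CLI sketch. The Region and the VPC, subnet, and security group IDs are placeholders:

```shell
# Create an interface VPC endpoint for the Bedrock runtime API
# (us-east-1 shown; substitute your own Region and resource IDs).
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.bedrock-runtime \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
```

With private DNS enabled, the standard Bedrock runtime hostname resolves to the endpoint's private IPs, so the Lambda function's SDK calls stay inside the VPC without code changes.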

When a customer interacts with Amazon Bedrock, their requests are routed through a secure network connection to the Amazon Bedrock service account. Amazon Bedrock then determines which model deployment account hosts the LLM model requested by the customer, finds the corresponding endpoint, and routes the request securely to the model endpoint hosted in that account. The LLM models are used for inference tasks, such as generating text or answering questions.

No customer data is stored within Amazon Bedrock accounts, nor is it ever shared with LLM providers or used for tuning the models. Communications and data transfers occur over private network connections using TLS 1.2+, minimizing the risk of data exposure or unauthorized access.

By implementing this multi-account architecture and private connectivity, Amazon Bedrock provides a secure environment, making sure customer data stays isolated and secure within the customer's own account, while still allowing them to use the power of LLMs offered by third-party providers.

Conclusion

Empowering health insurance plan members with generative AI technology can revolutionize the way they interact with their insurance plans and access essential information. By integrating conversational AI assistants powered by Amazon Bedrock and using purpose-built AWS data services such as Amazon RDS, healthcare payers and insurers can provide a seamless, intuitive experience for their members. This solution not only enhances member satisfaction, but can also reduce operational costs by streamlining customer service operations. Embracing innovative technologies like generative AI becomes crucial for organizations to stay competitive and deliver exceptional member experiences.

To learn more about how generative AI can accelerate health innovations and improve patient experiences, refer to Payors on AWS and Transforming Patient Care: Generative AI Innovations in Healthcare and Life Sciences (Part 1). For more information about using generative AI with AWS services, refer to Build generative AI applications with Amazon Aurora and Knowledge Bases for Amazon Bedrock and the Generative AI category on the AWS Database Blog.


About the Authors

Sachin Jain is a Senior Solutions Architect at Amazon Web Services (AWS) with a focus on helping Healthcare and Life-Sciences customers in their cloud journey. He has over 20 years of experience in the technology, healthcare, and engineering domains.

Sanjoy Thanneer is a Sr. Technical Account Manager with AWS based out of New York. He has over 20 years of experience working in Database and Analytics domains. He is passionate about helping enterprise customers build scalable, resilient, and cost-efficient applications.

Sukhomoy Basak is a Sr. Solutions Architect at Amazon Web Services, with a passion for Data, Analytics, and GenAI solutions. Sukhomoy works with enterprise customers to help them architect, build, and scale applications to achieve their business outcomes.
