Introducing Agent-to-Agent protocol support in Amazon Bedrock AgentCore Runtime
We recently announced support for the Agent-to-Agent (A2A) protocol on Amazon Bedrock AgentCore Runtime. With this addition, agents can discover peers, share capabilities, and coordinate actions across platforms using standardized communication.
Amazon Bedrock AgentCore Runtime provides a secure, serverless environment designed for deploying AI agents and tools. It works with any framework and model, supports real-time and long-running workloads, and supports session isolation with built-in authentication. With support for MCP, and now the A2A protocol, Bedrock AgentCore Runtime enables seamless communication between agents. Agents built using different frameworks, such as Strands Agents, OpenAI Agents SDK, LangGraph, Google ADK, or Claude Agents SDK, can share context, capabilities, and reasoning in a common, verifiable format.
In this post, we demonstrate how you can use the A2A protocol so that AI agents built with different frameworks collaborate seamlessly. You'll learn how to deploy A2A servers on AgentCore Runtime, configure agent discovery and authentication, and build a real-world multi-agent system for incident response. We'll cover the complete A2A request lifecycle, from agent card discovery to task delegation, showing how standardized protocols eliminate the complexity of multi-agent coordination.
Understanding multi-agent systems
Building effective agentic systems requires several foundational components. These include memory, both short-term for maintaining conversation context and long-term for retaining insights across sessions; tools that agents can access either natively or through MCP servers; identity for secure authentication and permission management, allowing agents to act on behalf of users or autonomously access resources; and guardrails to detect harmful content, help prevent hallucinations, and ensure responses align with policies and factual accuracy.

While MCP connects a single agent to its tools and data, A2A lets multiple agents coordinate with one another. For example, a retail inventory agent might use MCP to query product databases, then use A2A to communicate with external supplier agents to place orders.
The A2A protocol benefits multi-agent systems through seamless interoperability across diverse boundaries. Agents built with different frameworks like Strands or OpenAI, powered by various LLMs such as Anthropic Claude, GPT-4, or Llama, and hosted on different systems including AWS or edge devices can communicate and coordinate without requiring complex translation layers. This interoperability is complemented by loose coupling and modularity, where each agent operates as an independent unit that can be developed, tested, deployed, and even upgraded without disrupting the entire system. New specialized agents can join the environment seamlessly, and the failure of one agent remains isolated due to well-defined interaction boundaries, helping prevent cascading failures across the system. The protocol also supports dynamic agent discovery and orchestration: agents advertise their capabilities through standardized schemas, while orchestrator agents can discover and invoke specialized agents based on real-time task requirements.
A2A request lifecycle on Amazon Bedrock AgentCore Runtime
The A2A protocol defines a structured request lifecycle with specific components that work together to coordinate multi-agent communication. Here are the key elements:
- User: Initiates requests through the client agent, either as a human operator or an automated service defining goals that require multi-agent assistance.
- A2A Client (Client Agent): Acts on behalf of the user, initiating communication using the A2A protocol to discover and request tasks from remote agents.
- A2A Server (Remote Agent): Exposes HTTP endpoints implementing the A2A protocol to receive requests, process tasks, and return results. Different agents can serve this role, handling both synchronous and asynchronous interactions using JSON-RPC 2.0 over HTTP(S) or Server-Sent Events.
- Agent Card: A JSON metadata file that each agent publishes to advertise its identity, capabilities, endpoints, and authentication requirements. This enables dynamic discovery, where agents query what their peer agents can do before delegating tasks.
- Task Object: Represents each unit of work flowing through the system with a unique ID and lifecycle. As agents coordinate, tasks may be long-running, involve multiple turns, and span multiple agents working together.
- Artifact: The output produced when a task completes, which can include structured text, JSON, images, audio, or other multimodal content. Agents exchange these artifacts as they collaborate to fulfill the user's original request.
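To make these elements concrete, the following sketch shows what an agent card might look like and how a client agent could pick a peer based on an advertised skill. The card fields follow the A2A agent card schema, but the values, endpoint URL, and skill IDs here are illustrative assumptions, not the exact card served by any particular runtime.

```python
# An illustrative agent card, modeled on the A2A agent card schema.
# The endpoint URL and skill IDs are hypothetical.
monitoring_card = {
    "name": "monitoring-agent",
    "description": "Analyzes CloudWatch logs, metrics, and alarms",
    "url": "https://example.com/agents/monitoring",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "analyze-logs", "name": "Analyze log groups for errors"},
        {"id": "list-dashboards", "name": "List CloudWatch dashboards"},
    ],
}

def find_agent_for_skill(cards, skill_id):
    """Return the first agent card advertising the requested skill."""
    for card in cards:
        if any(skill["id"] == skill_id for skill in card.get("skills", [])):
            return card
    return None

# A client agent inspects peer cards before delegating a task.
card = find_agent_for_skill([monitoring_card], "analyze-logs")
print(card["name"])  # monitoring-agent
```

This skill lookup is the core of dynamic discovery: the client never hard-codes which peer handles a task, it matches the task against whatever cards its peers publish.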
Multi-agent use case: Monitoring and incident response
To demonstrate the power of multi-agent systems using A2A on Amazon Bedrock AgentCore Runtime, we'll walk through an enterprise monitoring and incident response solution. This real-world use case showcases how specialized agents built with different frameworks coordinate seamlessly through the A2A protocol to address complex operational challenges.
The monitoring and incident response solution implements a hub-and-spoke architecture with three specialized agents, each using Amazon Bedrock AgentCore features – modular building blocks that provide core capabilities such as AgentCore Memory for context-aware responses, AgentCore Identity using Amazon Cognito for secure authentication of agents and control over which actions each agent can perform, AgentCore Gateway for secure and centralized access to tools, and observability to trace, debug, and monitor AI agents' performance. View the architecture and demonstration video below for reference:

The multi-agent system contains the following components:
- Host agent (Google ADK): Acts as the intelligent routing layer and coordination hub for agent interactions, demonstrating cross-system interoperability using A2A. This agent runs on Amazon Bedrock AgentCore Runtime using Google's Agent Development Kit, yet communicates seamlessly with agents hosted on AWS through the standardized A2A protocol. Key responsibilities of the host agent include:
  - Dynamic agent discovery: Fetches Identity Provider (IdP) configuration from AWS Systems Manager Parameter Store for each remote agent, enabling secure authentication across the multi-agent system
  - Capability awareness: Retrieves agent cards from each A2A server to understand available skills and endpoints
  - Intelligent routing: Analyzes user queries and routes them to the appropriate specialist agent based on capabilities
  - Multi-agent coordination: Orchestrates complex workflows requiring multiple agents
- Monitoring agent (Strands Agents SDK): Serves as the operational intelligence layer, continuously analyzing CloudWatch logs, metrics, dashboards, and alarms across AWS services. This agent specializes in identifying anomalies, tracking error patterns, and surfacing actionable insights from vast amounts of telemetry data. When unusual patterns emerge, the monitoring agent initiates conversations with other specialized agents to coordinate response actions. Key responsibilities of the monitoring agent include:
  - CloudWatch integration:
    - Lists and analyzes CloudWatch dashboards
    - Fetches logs for specific AWS services (Lambda, ECS, EC2)
    - Monitors alarms and alert states
    - Analyzes log groups for patterns and errors
  - Cross-account access: Supports monitoring across multiple AWS accounts
- Operational agent (OpenAI Agents SDK): Provides remediation strategies and external knowledge integration. When the monitoring agent detects a critical issue, it communicates directly with the operational agent through A2A, providing context about the problem and requesting specific remediation actions. Key responsibilities of the operational agent include:
  - Web search: Uses the Tavily API to search for AWS best practices, troubleshooting guides, and solutions
  - Remediation strategies: Proposes solutions based on detected issues
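The host agent's routing responsibility can be sketched as a simple capability-based dispatcher. This is a simplified illustration of the decision logic, not the actual Google ADK implementation; the keyword heuristics and agent names are assumptions for the sketch.

```python
# Simplified sketch of the host agent's routing decision: match a user
# query against keywords associated with each specialist agent.
ROUTES = {
    "monitoring-agent": ["log", "dashboard", "alarm", "metric", "cloudwatch"],
    "operations-agent": ["fix", "remediate", "debug", "best practice"],
}

def route_query(query: str) -> str:
    """Pick the specialist agent whose keywords best match the query."""
    scores = {
        agent: sum(kw in query.lower() for kw in keywords)
        for agent, keywords in ROUTES.items()
    }
    best_agent, best_score = max(scores.items(), key=lambda item: item[1])
    # Fall back to answering directly when no specialist matches.
    return best_agent if best_score > 0 else "host-agent"

print(route_query("List the log groups in my account"))   # monitoring-agent
print(route_query("How do I fix OTLP export failures?"))  # operations-agent
```

In the actual solution this decision is made by the host agent's LLM against the skills advertised in each peer's agent card, rather than by static keywords, but the shape of the problem, mapping a query to a capable specialist, is the same.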

Implementing the multi-agent monitoring solution
Now that we've explored how these three specialized agents collaborate to address AWS incidents, let's walk through how to build and deploy this multi-agent system using Amazon Bedrock AgentCore Runtime.
The implementation follows a progressive approach:
- Start with the foundation – We'll deploy a simple A2A server to understand the core mechanics of agent deployment, authentication, and invocation on AgentCore Runtime
- Build the monitoring system – Using the same deployment patterns, we'll construct each specialized agent (monitoring, operational, and host) with their specific tools and capabilities
- Connect the agents – Configure A2A communication channels between agents, enabling them to discover and invoke one another through standardized protocols
- Observe the system in action – Watch the demo video showing real-time incident detection, cross-agent coordination, and automated response
All code examples, complete agent implementations, and deployment scripts for this multi-agent monitoring system are available in our GitHub repository.
Getting started with A2A on AgentCore Runtime
To understand the fundamentals of deploying A2A servers on Amazon Bedrock AgentCore Runtime, including step-by-step instructions for creating, testing, deploying, and invoking agents, refer to the A2A Protocol Support documentation. This guide covers:
- Creating and configuring A2A servers with any framework (Strands, OpenAI SDK, LangGraph)
- Local testing and validation
- Deployment using the AgentCore CLI
- Authentication setup (OAuth 2.0 and AWS IAM)
- Agent card retrieval and discovery
- Client implementation for invoking deployed agents
Once you're familiar with these fundamentals, you can apply the same patterns to build each component of the multi-agent monitoring system.
View the full example in this GitHub sample. For this post, we focus on the use case implementation.
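As an illustration of the client side, the sketch below builds an A2A message/send request as defined by the protocol's JSON-RPC 2.0 transport, with an OAuth bearer token attached in the HTTP headers. The endpoint and token are placeholders, and the exact invocation details for a deployed runtime are covered in the A2A Protocol Support documentation; treat this as a sketch of the request shape.

```python
import json
import uuid

def build_a2a_request(text: str) -> dict:
    """Build a JSON-RPC 2.0 message/send request per the A2A protocol."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

payload = build_a2a_request("List my CloudWatch log groups")
# Placeholder credentials: a real client first obtains a JWT from the
# authorization server, then POSTs the payload with an HTTP library.
headers = {
    "Authorization": "Bearer <access-token>",  # placeholder JWT
    "Content-Type": "application/json",
}
print(payload["method"])  # message/send
```

The A2A server's response arrives as a JSON-RPC result containing either a direct message or a task object whose artifacts carry the agent's output.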
Prerequisites
To deploy the multi-agent monitoring system implementation, complete the following prerequisite steps:
- AWS account: You need an active AWS account with appropriate permissions
- AWS CLI: Install and configure the AWS CLI with your credentials
- uv: Install the uv Python package manager
- Supported Regions: This solution is currently tested and supported in the following AWS Regions.
Note: To deploy in other Regions, you'll need to update the DynamoDB prefix list mappings in cloudformation/vpc-stack.yaml. See the VPC stack documentation for details.
Deployment steps
This guide walks you through deploying a multi-agent system on AWS using infrastructure as code. The simplest way to deploy this solution is with our automated deployment script:
Step 1: Clone the repository
Step 2: Run the deployment script
The deployment script verifies that the AWS CLI is installed and configured, checks that your AWS credentials are valid, confirms that the Region is set to us-west-2, interactively collects the required parameters, generates unique S3 bucket names, and automatically deploys all stacks in the correct order. The approximate deployment time is 10–15 minutes.
Step 3: Provide the runtime CLI parameters
Next, provide the parameters used at deployment. Press Enter for each of the options to use the default Amazon Bedrock model ID and the CloudFormation stack names for each of the agents.
API keys: You'll need the following API keys (the deployment script will prompt for these):
After you have configured this information, start the deployment process and monitor it in the AWS Management Console and in the terminal, respectively.
Step 4: Run the frontend
Run the frontend using the following commands. This sets up and runs the React frontend UI that allows users to interact with the multi-agent incident response system for monitoring AWS infrastructure, querying CloudWatch metrics and logs, and searching for remediation strategies through the coordinated A2A agents.
This deployment creates a multi-agent A2A system with three specialized AI agents running on Amazon Bedrock AgentCore Runtime and orchestrated using the A2A protocol. The Cognito stack provisions OAuth 2.0-based machine-to-machine authentication by creating a Cognito user pool with four distinct client applications (WebSearch, Monitoring, Gateway, and Host Agent clients).
The monitoring agent (built with the Strands Agents SDK) connects to CloudWatch metrics and logs through an AgentCore Gateway using a Smithy model definition, with custom semantic memory strategies for incident tracking.
The operations agent (built with the OpenAI Agents SDK) interfaces with the Tavily API for remediation research, and the host agent (built with Google ADK) acts as the coordinator, using the HTTP protocol to delegate tasks to the two specialized A2A agents.
End-to-end incident response workflow
In this section, we walk through an end-to-end workflow in which the host agent manages conversations, gathers requirements from the user, and selects the best agent to route each request to (the monitoring or operations agent). The monitoring and operations agents expose their agent cards, which the host agent uses for orchestration. In this example, we walk through a simple error analysis across various log groups and a search for remediation strategies.

The workflow consists of the following steps:
- Initial greeting: The user sends a greeting message asking "Hi! How are you?" to the host agent. The host agent processes the request and responds with a friendly greeting: "I'm doing well, thank you!"
- Capabilities query: The user asks the host agent "What are your capabilities?" to understand what the agent can do. The host agent explains that it's an orchestration agent designed for AWS monitoring and operations, based on the remote agent connections it has access to.
- List log groups and dashboards: The user asks the host agent to list the log groups and dashboards in their AWS account. The host agent recognizes this as a monitoring task and executes the transfer_to_agent tool to delegate the work. The request is transferred from the host agent to the monitoring agent for specialized handling. The monitoring agent communicates using the A2A JSON-RPC transport protocol, retrieves the information, and returns results showing 0 dashboards and 153 log groups found in the account. The host agent receives the results from the monitoring agent and displays the dashboard and log group information to the user.
- Analyze a specific log group: The user asks the host agent to look for errors in a specific log group at path /aws/bedrock-agentcore/runtimes/hostadk-<runtimeId>-DEFAULT. The host agent determines this requires monitoring expertise and executes the transfer_to_agent tool. The request is transferred to the monitoring agent with instructions to analyze the specified log group for errors. The monitoring agent analyzes the log group and discovers 9 errors and 18 warnings, specifically identifying OTLP export failures. The host agent receives the analysis results and displays a detailed error analysis report to the user.
- Debug and fix recommendations: The user asks the host agent to debug the errors and provide a report on the fixes needed. The request is transferred to the operations agent to search for solutions related to the OTLP export failures. The operations agent uses the A2A JSON-RPC transport to handle the request and performs a web search to provide a solution.
Security with A2A on Amazon Bedrock AgentCore Runtime
Amazon Bedrock AgentCore Runtime supports two authentication methods for securing A2A communication:
OAuth 2.0 authentication: The A2A client authenticates with an external authorization server to obtain a JSON Web Token (JWT), which is then included with all requests to the A2A server. This token-based approach enables secure, standardized authentication using either machine-to-machine (M2M) credentials or user federation, allowing the A2A server to verify the client's identity and enforce access controls based on the token's claims.
AWS IAM authentication: The A2A client assumes an IAM role with permissions to invoke the A2A server's agent. This approach uses AWS SigV4 request signing and IAM policies to control access, removing the need for external token management while providing fine-grained permissions.
What's supported in Amazon Bedrock AgentCore Runtime with A2A
Amazon Bedrock AgentCore Runtime provides comprehensive support for A2A communication. Here are some of the supported capabilities:
- Stateless server: Amazon Bedrock AgentCore Runtime can host A2A servers that expose an HTTP interface, running a stateless HTTP server on port 9000 and supporting JSON-RPC messaging. The runtime acts as a transparent proxy, passing JSON-RPC requests and responses unchanged to preserve protocol fidelity.
- Authenticated agent cards: Each agent serves an authenticated agent card at /.well-known/agent-card.json containing its capabilities and skills, allowing other agents to discover it automatically.
- Authentication with secure inbound auth: Amazon Bedrock AgentCore Runtime supports secure authentication via AWS SigV4 and OAuth 2.0, ensuring that agent-to-agent communication is authorized and secure. The A2A server authenticates each incoming request using the credentials provided in the HTTP headers, leveraging Amazon Bedrock AgentCore Identity.
- Authorization with secure outbound auth: Amazon Bedrock AgentCore Runtime enables secure outbound authorization through both IAM execution roles and AgentCore Identity. Each agent assumes a defined IAM execution role, granting it the permissions necessary to access AWS resources securely. For interactions with external services, agents can use Amazon Bedrock AgentCore Identity, which provides managed OAuth 2.0 support for third-party identity providers such as Google, GitHub, Slack, and more.
- VPC connectivity: You can configure Amazon Bedrock AgentCore Runtime to connect to resources in your Amazon Virtual Private Cloud (VPC). By configuring VPC connectivity, you enable secure access to private resources such as databases, internal APIs, and services within your VPC.
- AWS PrivateLink: Amazon Bedrock AgentCore enables secure, private connections between your VPC and AgentCore services using AWS PrivateLink. By creating interface VPC endpoints, you can keep A2A server communication within your VPC without traversing the public internet.
- Lifecycle management: Amazon Bedrock AgentCore Runtime lets you configure lifecycle rules to manage resource utilization with idleRuntimeSessionTimeout and maxLifetime. Idle or long-running sessions are automatically terminated for efficient resource utilization and to maintain system performance.
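As a rough illustration of the two lifecycle knobs above, the configuration fragment below caps idle sessions at 15 minutes and total session lifetime at 8 hours. The two parameter names come from this post; the surrounding shape of the request that carries them is an assumption, so check the AgentCore control plane API reference for the exact field placement.

```python
# Illustrative lifecycle settings for an AgentCore Runtime agent. The two
# parameter names come from the post; the dict shape is an assumption.
lifecycle_configuration = {
    "idleRuntimeSessionTimeout": 900,  # seconds: reclaim idle sessions after 15 min
    "maxLifetime": 28800,              # seconds: hard cap of 8 hours per session
}

def validate_lifecycle(config: dict) -> bool:
    """Sanity-check that the idle timeout never exceeds the max lifetime."""
    return 0 < config["idleRuntimeSessionTimeout"] <= config["maxLifetime"]

print(validate_lifecycle(lifecycle_configuration))  # True
```

Tuning these values is a trade-off: a short idle timeout frees resources aggressively, while a longer maxLifetime accommodates long-running multi-agent workflows.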
Conclusion
Agent-to-Agent protocol support in Amazon Bedrock AgentCore Runtime provides the foundation for building scalable, interoperable multi-agent systems. By standardizing communication between AI agents, regardless of their underlying framework, model, or hosting infrastructure, organizations can compose sophisticated agentic solutions with the A2A protocol. The AWS monitoring and incident response example demonstrates the practical power of this approach: a Google ADK-based orchestrator coordinating with Strands and OpenAI SDK agents, all deployed on AgentCore Runtime, working together to detect issues, search for solutions, and recommend fixes. This level of interoperability would traditionally require extensive custom integration work, but A2A makes it straightforward through standardized protocols. As AI systems continue to evolve from single-purpose tools to collaborative environments, protocols like A2A and MCP become essential building blocks. They create a future where agents can be discovered, composed, and orchestrated dynamically, enabling organizations to build once and integrate anywhere.
About the authors
Madhur Prashant is an Applied Generative AI Architect at Amazon Web Services. He is passionate about the intersection of human thinking and agentic AI. His interests lie in generative AI, cognitive science, and above all building solutions that are helpful and harmless, and optimal for customers. Outside of work, he loves doing yoga, hiking, spending time with his twin, and playing the guitar.
Eashan Kaushik is a Specialist Solutions Architect, AI/ML at Amazon Web Services. He is driven by creating cutting-edge generative AI solutions while prioritizing a customer-centric approach to his work. Before this role, he obtained an MS in Computer Science from NYU Tandon School of Engineering. Outside of work, he enjoys sports, lifting, and running marathons.
Sriharsha M S is a Principal Generative AI Specialist Solutions Architect in the Strategic Specialist team at Amazon Web Services. He works with strategic AWS customers who are taking advantage of AI/ML to solve complex business problems. He provides technical guidance and design advice for foundation model science and agentic AI applications at scale. His expertise spans application hardware accelerators, architecture, big data, analytics, and machine learning.
Jeffrey Burke is an Applied Generative AI Solutions Architect at Amazon Web Services (AWS), where he specializes in designing and implementing cutting-edge generative AI solutions for enterprise customers. With a passion for teaching complex technologies, he focuses on translating sophisticated AI concepts into practical, scalable solutions that drive business value. He has an MS in Data Science and a BS in Chemical Engineering.
Shreyas Subramanian is a Principal Data Scientist and helps customers use generative AI to solve their business challenges on the AWS platform. Shreyas has a background in large-scale optimization and deep learning, and he is a researcher studying the use of machine learning and reinforcement learning for accelerating learning and optimization tasks. Shreyas is also an Amazon best-selling book author with several research papers and patents to his name.
Andy Palmer is a Director of Technology for AWS Strategic Accounts. His teams provide Specialist Solutions Architecture skills across a number of specialty domain areas, including AI/ML, generative AI, data and analytics, security, networking, and open source software. Andy and his team have been at the forefront of guiding our most advanced customers through their generative AI journeys, helping them find ways to apply these new tools to both existing problem areas and net-new innovations and product experiences.
Sayee Kulkarni is a Software Development Engineer on the AWS Bedrock AgentCore service. Her team is responsible for building and maintaining the AgentCore Runtime platform, a foundational component that enables customers to use agentic AI capabilities. She is driven by delivering tangible customer value, and this customer-centric focus motivates her work. Sayee played a key role in designing and launching Agent-to-Agent (A2A) capabilities for AgentCore, empowering customers to build sophisticated multi-agent systems that autonomously collaborate to solve complex business challenges.