Running deep research AI agents on Amazon Bedrock AgentCore
AI agents are evolving beyond basic single-task helpers into more powerful systems that can plan, critique, and collaborate with other agents to solve complex problems. Deep Agents, a recently released framework built on LangGraph, brings these capabilities to life, enabling multi-agent workflows that mirror real-world team dynamics. The challenge, however, is not just building such agents but also operating them reliably and securely in production. That is where Amazon Bedrock AgentCore Runtime comes in. By providing a secure, serverless environment purpose-built for AI agents and tools, Runtime makes it possible to deploy Deep Agents at enterprise scale without the heavy lifting of managing infrastructure.
In this post, we demonstrate how to deploy Deep Agents on AgentCore Runtime. As shown in the following figure, AgentCore Runtime scales any agent and provides session isolation by allocating a new microVM for each new session.

What is Amazon Bedrock AgentCore?
Amazon Bedrock AgentCore is both framework-agnostic and model-agnostic, giving you the flexibility to deploy and operate advanced AI agents securely and at scale. Whether you are building with Strands Agents, CrewAI, LangGraph, LlamaIndex, or another framework, and running them on any large language model (LLM), AgentCore provides the infrastructure to support them. Its modular services are purpose-built for dynamic agent workloads, with tools to extend agent capabilities and the controls required for production use. By alleviating the undifferentiated heavy lifting of building and managing specialized agent infrastructure, AgentCore lets you bring your preferred framework and model and deploy without rewriting code.
Amazon Bedrock AgentCore offers a comprehensive suite of capabilities designed to transform local agent prototypes into production-ready systems. These include persistent memory for maintaining context within and across conversations, access to existing APIs using the Model Context Protocol (MCP), seamless integration with corporate authentication systems, specialized tools for web browsing and code execution, and deep observability into agent reasoning processes. In this post, we focus specifically on the AgentCore Runtime component.
Core capabilities of AgentCore Runtime
AgentCore Runtime provides a serverless, secure hosting environment specifically designed for agentic workloads. It packages code into a lightweight container with a simple, consistent interface, making it equally well suited for running agents, tools, MCP servers, or other workloads that benefit from seamless scaling and built-in identity management. AgentCore Runtime offers extended execution times of up to 8 hours for complex reasoning tasks, handles large payloads for multimodal content, and implements consumption-based pricing that charges only during active processing, not while waiting for LLM or tool responses. Each user session runs in full isolation within dedicated micro virtual machines (microVMs), maintaining security and helping to prevent cross-session contamination between agent interactions. The runtime works with many frameworks (for example, LangGraph, CrewAI, and Strands) and many foundation model providers, while providing built-in corporate authentication, specialized agent observability, and unified access to the broader AgentCore environment through a single SDK.
Real-world example: Deep Agents integration
In this post, we deploy the recently released Deep Agents implementation example on AgentCore Runtime, showing just how little effort it takes to get the latest agent innovations up and running.

The sample implementation in the preceding diagram consists of:
- A research agent that conducts deep web searches using the Tavily API
- A critique agent that reviews and provides feedback on generated reports
- A main orchestrator that manages the workflow and handles file operations
Deep Agents uses LangGraph's state management to create a multi-agent system with:
- Built-in task planning through a write_todos tool that helps agents break down complex requests
- A virtual file system where agents can read and write files to maintain context across interactions
- A sub-agent architecture allowing specialized agents to be invoked for specific tasks while maintaining context isolation
- Recursive reasoning with high recursion limits (greater than 1,000) to handle complex, multi-step workflows
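To make the pieces above concrete, here is a minimal sketch of how such a system can be wired with the deepagents package. The create_deep_agent signature and the sub-agent dictionary fields are assumptions based on the published example, and internet_search is a stub rather than a real Tavily call:

```python
# Hypothetical sketch: an orchestrator with research and critique sub-agents.
# Field names and the create_deep_agent signature are assumptions.

def internet_search(query: str) -> str:
    """Stub standing in for a Tavily-backed web search tool."""
    return f"search results for: {query}"

# Sub-agents are described declaratively; the orchestrator invokes them
# by name while keeping their context isolated.
research_subagent = {
    "name": "research-agent",
    "description": "Researches a single sub-topic in depth",
    "prompt": "You are a dedicated researcher. Answer only the question given.",
    "tools": ["internet_search"],
}
critique_subagent = {
    "name": "critique-agent",
    "description": "Critiques the draft report in final_report.md",
    "prompt": "You are a tough editor. Point out gaps and structural issues.",
}

def build_agent():
    # Deferred import so the declarative pieces above can be inspected
    # without the deepagents package installed.
    from deepagents import create_deep_agent

    return create_deep_agent(
        tools=[internet_search],
        instructions="Plan with write_todos, research, then write final_report.md.",
        subagents=[research_subagent, critique_subagent],
    )
```

The sub-agent definitions are plain dictionaries, which is what lets the orchestrator spin them up on demand while keeping each one's context isolated.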
This architecture enables Deep Agents to handle research tasks that require multiple rounds of information gathering, synthesis, and refinement. The key integration points in our code showcase how agents work with AgentCore. The beauty is in its simplicity: we only need to add a couple of lines of code to make an agent AgentCore-compatible:
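A hedged sketch of that wrapper follows. BedrockAgentCoreApp and the entrypoint decorator come from the Bedrock AgentCore Python SDK; a small stand-in class is included so the sketch also runs where the SDK is not installed, and run_deep_agent is a placeholder for the Deep Agents graph built elsewhere in the file:

```python
# Sketch of the AgentCore wrapper. BedrockAgentCoreApp is from the
# bedrock-agentcore SDK; the fallback class below is only a stand-in
# so this sketch runs without the SDK installed.
try:
    from bedrock_agentcore import BedrockAgentCoreApp
except ImportError:
    class BedrockAgentCoreApp:
        def entrypoint(self, fn):
            return fn  # no-op decorator for local experimentation

        def run(self):
            print("(stub) would start the HTTP server AgentCore Runtime expects")

app = BedrockAgentCoreApp()

def run_deep_agent(prompt: str) -> str:
    """Placeholder for invoking the Deep Agents graph defined elsewhere."""
    return f"report for: {prompt}"

@app.entrypoint  # AgentCore Runtime delivers each request payload here
def invoke(payload: dict) -> dict:
    prompt = payload.get("prompt", "")
    return {"result": run_deep_agent(prompt)}

if __name__ == "__main__":
    app.run()  # existing model setup, tools, and agent logic stay unchanged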
That's it! The rest of the code, including model initialization, API integrations, and agent logic, stays exactly as it was. AgentCore handles the infrastructure while your agent handles the intelligence. This integration pattern works for most Python agent frameworks, making AgentCore truly framework-agnostic.
Deploying to AgentCore Runtime: Step by step
Let's walk through the actual deployment process using the AgentCore Starter Toolkit, which dramatically simplifies the deployment workflow.
Prerequisites
Before you begin, make sure you have:
- Python 3.10 or higher
- AWS credentials configured
- The Amazon Bedrock AgentCore SDK installed
Step 1: IAM permissions
There are two different AWS Identity and Access Management (IAM) permissions you need to consider when deploying an agent in an AgentCore Runtime: the role you, as a developer, use to create AgentCore resources, and the execution role that an agent needs to run in an AgentCore Runtime. While the latter role can now be auto-created by the AgentCore Starter Toolkit (auto_create_execution_role=True), the former must be defined as described in IAM Permissions for AgentCore Runtime.
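For the execution role, the trust policy must allow the AgentCore service to assume it. A minimal sketch, assuming the bedrock-agentcore.amazonaws.com service principal, might look like the following; consult the IAM permissions documentation for the authoritative policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "bedrock-agentcore.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```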
Step 2: Add a wrapper to your agent
As shown in the preceding Deep Agents example, add the AgentCore imports and decorator to your existing agent code.
Step 3: Deploy using the AgentCore Starter Toolkit
The starter toolkit provides a three-step deployment process:
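The following is a hypothetical sketch of that three-step flow; the Runtime class and its parameter names are assumptions based on the bedrock-agentcore-starter-toolkit package, and deep_agent.py stands in for your wrapped agent file:

```python
# Hedged sketch of the starter toolkit's configure / launch / invoke flow.
# Class and parameter names are assumptions, not an authoritative API.

def deploy(region: str = "us-east-1"):
    from bedrock_agentcore_starter_toolkit import Runtime  # deferred import

    runtime = Runtime()

    # Step 1: configure - point the toolkit at the wrapped agent file
    runtime.configure(
        entrypoint="deep_agent.py",            # the file with @app.entrypoint
        requirements_file="requirements.txt",
        auto_create_execution_role=True,       # see Step 1 above
        auto_create_ecr=True,
        region=region,
    )

    # Step 2: launch - build the container, push to ECR, deploy to Runtime
    runtime.launch()

    # Step 3: invoke - smoke-test the deployed agent
    return runtime.invoke({"prompt": "Give me a short overview of agentic AI"})

# Usage (requires AWS credentials and the starter toolkit installed):
# result = deploy()
```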
Step 4: What happens behind the scenes
When you run the deployment, the starter toolkit automatically:
- Generates an optimized Dockerfile with a Python 3.13-slim base image and OpenTelemetry instrumentation
- Builds your container with the dependencies from requirements.txt
- Creates an Amazon Elastic Container Registry (Amazon ECR) repository (if auto_create_ecr=True) and pushes your image
- Deploys to AgentCore Runtime and monitors the deployment status
- Configures networking and observability with Amazon CloudWatch and AWS X-Ray integration
The entire process typically takes 2-3 minutes, after which your agent is ready to handle requests at scale. Each new session is launched in its own fresh AgentCore Runtime microVM, maintaining full environment isolation.
The starter toolkit generates a configuration file (.bedrock_agentcore.yaml) that captures your deployment settings, making it easy to redeploy or update your agent later.
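The generated file captures settings along these lines; the field names below are illustrative assumptions, not the documented schema:

```yaml
# Illustrative shape of .bedrock_agentcore.yaml; field names are assumptions
agents:
  deep_agent:
    entrypoint: deep_agent.py
    aws:
      region: us-east-1
      execution_role_auto_create: true
      ecr_auto_create: true
```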
Invoking your deployed agent
After deployment, you have two options for invoking your agent:
Option 1: Using the starter toolkit (shown in Step 3)
Option 2: Using the boto3 SDK directly
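A sketch of the boto3 path follows. The service name, operation, and response shape are assumptions based on the bedrock-agentcore data plane, and the ARN would be the one returned by your deployment:

```python
# Hedged sketch of invoking a deployed agent directly with boto3.
import json
import uuid

def build_payload(prompt: str) -> bytes:
    """Serialize the JSON body the agent's entrypoint expects."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def invoke_agent(agent_runtime_arn: str, prompt: str) -> str:
    import boto3  # deferred so build_payload stays usable without AWS access

    client = boto3.client("bedrock-agentcore")
    response = client.invoke_agent_runtime(
        agentRuntimeArn=agent_runtime_arn,
        # A unique ID per conversation; a new session gets a fresh microVM.
        runtimeSessionId=str(uuid.uuid4()) + str(uuid.uuid4()),
        payload=build_payload(prompt),
    )
    return response["response"].read().decode("utf-8")

# Usage (requires AWS credentials):
# print(invoke_agent("arn:aws:bedrock-agentcore:...", "Research agentic AI"))
```

Reusing the same runtimeSessionId continues a conversation in its existing microVM, while a new ID starts an isolated session.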
Deep Agents in action
As the code executes in Bedrock AgentCore Runtime, the main agent orchestrates specialized sub-agents, each with its own objective, prompt, and tool access, to solve complex tasks more effectively. In this case, the orchestrator prompt (research_instructions) sets the plan:
- Write the question to question.txt
- Fan out to multiple research-agent calls (each on a single sub-topic) using the internet_search tool
- Synthesize findings into final_report.md
- Call the critique-agent to evaluate gaps and structure
- Optionally loop back for more research and edits until quality criteria are met
Here it is in action:
Clean up
When finished, don't forget to deallocate the provisioned AgentCore Runtime along with the container repository that was created during the process:
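A hedged cleanup sketch follows; the control-plane client name and operations are assumptions, and the resource IDs are placeholders for the ones your deployment created:

```python
# Hedged cleanup sketch: delete the runtime, then the ECR repository.
# Client/operation names are assumptions; IDs below are placeholders.

def cleanup(agent_runtime_id: str, ecr_repo_name: str, region: str = "us-east-1"):
    import boto3  # deferred so the sketch imports cleanly without AWS access

    # Delete the AgentCore Runtime resource
    control = boto3.client("bedrock-agentcore-control", region_name=region)
    control.delete_agent_runtime(agentRuntimeId=agent_runtime_id)

    # Delete the container repository the starter toolkit created
    ecr = boto3.client("ecr", region_name=region)
    ecr.delete_repository(repositoryName=ecr_repo_name, force=True)

# Usage (requires AWS credentials; substitute your own resource IDs):
# cleanup("my-runtime-id", "my-agentcore-repo")
```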
Conclusion
Amazon Bedrock AgentCore represents a paradigm shift in how we deploy AI agents. By abstracting away infrastructure complexity while maintaining framework and model flexibility, AgentCore enables developers to focus on building sophisticated agent logic rather than managing deployment pipelines. Our Deep Agents deployment demonstrates that even complex, multi-agent systems with external API integrations can be deployed with minimal code changes. The combination of enterprise-grade security, built-in observability, and serverless scaling makes AgentCore the best choice for production AI agent deployments. Especially for deep research agents, AgentCore offers the following unique capabilities that you can explore:
- AgentCore Runtime can handle asynchronous processing and long-running (up to 8 hours) agents. Asynchronous tasks allow your agent to continue processing after responding to the user and to handle long-running operations without blocking responses. Your background research sub-agent could be asynchronously researching for hours.
- AgentCore Runtime works with AgentCore Memory, enabling capabilities such as building on previous findings, remembering research preferences, and maintaining complex investigation context without losing progress between sessions.
- You can use AgentCore Gateway to extend your deep research to include proprietary insights from enterprise services and data sources. By exposing these differentiated assets as MCP tools, your agents can quickly take advantage of them and combine them with publicly available information.
Ready to deploy your agents to production? Here's how to get started:
- Install the AgentCore Starter Toolkit: pip install bedrock-agentcore-starter-toolkit
- Experiment: Deploy your code by following this step by step guide.
The era of production-ready AI agents is here. With AgentCore, the journey from prototype to production has never been shorter.
About the authors
Vadim Omeltchenko is a Sr. AI/ML Solutions Architect who is passionate about helping AWS customers innovate in the cloud. His prior IT experience was predominantly on the ground.
Eashan Kaushik is a Specialist Solutions Architect, AI/ML at Amazon Web Services. He is driven by creating cutting-edge generative AI solutions while prioritizing a customer-centric approach to his work. Before this role, he obtained an MS in Computer Science from NYU Tandon School of Engineering. Outside of work, he enjoys sports, lifting, and running marathons.
Shreyas Subramanian is a Principal Data Scientist and helps customers by using machine learning to solve their business challenges on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning for accelerating optimization tasks.
Mark Roy is a Principal Machine Learning Architect for AWS, helping customers design and build generative AI solutions. His focus since early 2023 has been leading solution architecture efforts for the launch of Amazon Bedrock, the flagship generative AI offering from AWS for developers. Mark's work covers a wide range of use cases, with a primary interest in generative AI, agents, and scaling ML across the enterprise. He has helped companies in insurance, financial services, media and entertainment, healthcare, utilities, and manufacturing. Prior to joining AWS, Mark was an architect, developer, and technology leader for over 25 years, including 19 years in financial services. Mark holds six AWS Certifications, including the ML Specialty Certification.