Supercharge your LLM performance with Amazon SageMaker Large Model Inference container v15
Today, we’re excited to announce the launch of Amazon SageMaker Large Model Inference (LMI) container v15, powered by vLLM 0.8.4 with support for the vLLM V1 engine. This release supports the latest open-source models, such as Meta’s Llama 4 models Scout and Maverick, Google’s Gemma 3, Alibaba’s Qwen, Mistral AI, DeepSeek-R1, and many more. Amazon SageMaker AI continues to evolve its generative AI inference capabilities to meet the growing demands in performance and model support for foundation models (FMs).
This release introduces significant performance improvements, expanded model compatibility with multimodality (that is, the ability to understand and analyze text-to-text, images-to-text, and text-to-images data), and provides built-in integration with vLLM to help you seamlessly deploy and serve large language models (LLMs) with the best performance at scale.
What’s new?
LMI v15 brings several enhancements that improve throughput, latency, and cost:
- An async mode that directly integrates with vLLM’s AsyncLLMEngine for improved request handling. This mode creates a more efficient background loop that continuously processes incoming requests, enabling it to handle multiple concurrent requests and stream outputs with higher throughput than the previous rolling batch implementation in v14.
- Support for the vLLM V1 engine, which delivers up to 111% higher throughput compared to the previous V0 engine for smaller models at high concurrency. This performance improvement comes from reduced CPU overhead, optimized execution paths, and more efficient resource utilization in the V1 architecture. LMI v15 supports both the V1 and V0 engines, with V1 being the default. If you need to use V0, you can select the V0 engine by specifying VLLM_USE_V1=0. The vLLM V1 engine also comes with a core re-architecture of the serving engine, with simplified scheduling, zero-overhead prefix caching, clean tensor-parallel inference, efficient input preparation, and advanced optimizations with torch.compile and FlashAttention 3. For more information, see the vLLM Blog.
- Expanded API schema support with three flexible options to allow seamless integration with applications built on popular API patterns:
  - Message format compatible with the OpenAI Chat Completions API.
  - OpenAI Completions format.
  - Text Generation Inference (TGI) schema to support backward compatibility with older models.
- Multimodal support, with enhanced capabilities for vision-language models, including optimizations such as multimodal prefix caching.
- Built-in support for function calling and tool calling, enabling sophisticated agent-based workflows.
Enhanced model support
LMI v15 supports a growing roster of state-of-the-art models, including the latest releases from leading model providers. The container offers ready-to-deploy compatibility for models including, but not limited to:
- Llama 4 – Llama-4-Scout-17B-16E and Llama-4-Maverick-17B-128E-Instruct
- Gemma 3 – Google’s lightweight and efficient models, known for their strong performance despite their smaller size
- Qwen 2.5 – Alibaba’s advanced models, including QwQ 2.5 and Qwen2-VL with multimodal capabilities
- Mistral AI models – High-performance models from Mistral AI that offer efficient scaling and specialized capabilities
- DeepSeek-R1/V3 – Cutting-edge reasoning models
Each model family can be deployed using the LMI v15 container by specifying the appropriate model ID, for example meta-llama/Llama-4-Scout-17B-16E, and configuration parameters as environment variables, without requiring custom code or optimization work.
Benchmarks
Our benchmarks demonstrate the performance advantages of LMI v15’s V1 engine compared to previous versions:
|  | Model | Batch size | Instance type | LMI v14 throughput [tokens/s] (V0 engine) | LMI v15 throughput [tokens/s] (V1 engine) | Improvement |
|---|---|---|---|---|---|---|
| 1 | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | 128 | ml.p4d.24xlarge | 1768 | 2198 | 24% |
| 2 | meta-llama/Llama-3.1-8B-Instruct | 64 | ml.g6e.2xlarge | 1548 | 2128 | 37% |
| 3 | mistralai/Mistral-7B-Instruct-v0.3 | 64 | ml.g6e.2xlarge | 942 | 1988 | 111% |
DeepSeek-R1 Distill Llama 70B for various levels of concurrency

Llama 3.1 8B Instruct for various levels of concurrency

Mistral 7B for various levels of concurrency

The async engine in LMI v15 shows its strength in high-concurrency scenarios, where multiple simultaneous requests benefit from the optimized request handling. These benchmarks highlight that the V1 engine in async mode delivers between 24% and 111% higher throughput compared to LMI v14 using rolling batch for the models tested in high-concurrency scenarios at batch sizes of 64 and 128. Keep the following considerations in mind for optimal performance:
- Higher batch sizes increase concurrency but come with a natural tradeoff in terms of latency
- Batch sizes of 4 and 8 provide the best latency for most use cases
- Batch sizes up to 64 and 128 achieve maximum throughput with acceptable latency trade-offs
API formats
LMI v15 supports three API schemas: OpenAI Chat Completions, OpenAI Completions, and TGI.
- Chat Completions – Message format compatible with the OpenAI Chat Completions API. Use this schema for tool calling, reasoning, and multimodal use cases (see the payload sketches after this list).
- OpenAI Completions format – The Completions API endpoint is no longer receiving updates.
- TGI – Supports backward compatibility with older models.
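To illustrate the three schemas, the payloads below are minimal sketches expressed as Python dictionaries; the prompt text and parameter values are placeholders rather than excerpts from the original samples.

```python
# Minimal illustrative request bodies for the three supported schemas.
# Prompts and parameter values are placeholders.

# Chat Completions (Messages API) style request
chat_completions_payload = {
    "messages": [
        {"role": "user", "content": "Name popular places to visit in London."}
    ],
    "temperature": 0.7,
    "max_tokens": 256,
}

# OpenAI Completions style request
completions_payload = {
    "prompt": "Name popular places to visit in London.",
    "temperature": 0.7,
    "max_tokens": 256,
}

# TGI style request (backward compatible with older models)
tgi_payload = {
    "inputs": "Name popular places to visit in London.",
    "parameters": {"temperature": 0.7, "max_new_tokens": 256},
}
```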
Getting started with LMI v15
Getting started with LMI v15 is seamless, and you can deploy with LMI v15 in just a few lines of code. The container is available through Amazon Elastic Container Registry (Amazon ECR), and deployments can be managed through SageMaker AI endpoints. To deploy models, you need to specify the Hugging Face model ID, instance type, and configuration options as environment variables.
For optimal performance, we recommend the following instances:
- Llama 4 Scout: ml.p5.48xlarge
- DeepSeek R1/V3: ml.p5e.48xlarge
- Qwen 2.5 VL-32B: ml.g5.12xlarge
- Qwen QwQ 32B: ml.g5.12xlarge
- Mistral Large: ml.g6e.48xlarge
- Gemma3-27B: ml.g5.12xlarge
- Llama 3.3-70B: ml.p4d.24xlarge
To deploy with LMI v15, follow these steps:
- Clone the notebook to your Amazon SageMaker Studio notebook or to Visual Studio Code (VS Code). You can then run the notebook to do the initial setup and deploy the model from the Hugging Face repository to the SageMaker AI endpoint. We walk through the key blocks here.
- LMI v15 maintains the same configuration pattern as previous versions, using environment variables in the form OPTION_<CONFIG_NAME>. This consistent approach makes it straightforward for users familiar with earlier LMI versions to migrate to v15. These settings appear together, along with deployment and invocation, in the sketch after this list.
  - HF_MODEL_ID sets the model ID from Hugging Face. You can also download a model from Amazon Simple Storage Service (Amazon S3).
  - HF_TOKEN sets the token to download the model. This is required for gated models like Llama 4.
  - OPTION_MAX_MODEL_LEN sets the maximum model context length.
  - OPTION_MAX_ROLLING_BATCH_SIZE sets the batch size for the model.
  - OPTION_MODEL_LOADING_TIMEOUT sets the timeout value for SageMaker to load the model and run health checks.
  - SERVING_FAIL_FAST=true. We recommend setting this flag because it lets SageMaker gracefully restart the container when an unrecoverable engine error occurs.
  - OPTION_ROLLING_BATCH=disable disables the rolling batch implementation of LMI, which was the default offering in LMI v14. We recommend using async instead, because this newer implementation provides better performance.
  - OPTION_ASYNC_MODE=true enables async mode.
  - OPTION_ENTRYPOINT provides the entrypoint for vLLM’s async integrations.
- Set the latest container (in this example we used 0.33.0-lmi15.0.0-cu128) and AWS Region (us-east-1), and create a model artifact with all the configurations. To review the latest available container version, see Available Deep Learning Containers Images.
- Deploy the model to the endpoint using model.deploy().
- Invoke the model. SageMaker inference provides two APIs to invoke the model: InvokeEndpoint and InvokeEndpointWithResponseStream. You can choose either option based on your needs.
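Bringing these steps together, the following is a minimal end-to-end sketch assuming the SageMaker Python SDK and boto3. The ECR repository name, OPTION_ENTRYPOINT value, endpoint name, instance type, and parameter values are illustrative assumptions; refer to the notebook and Available Deep Learning Containers Images for the exact values in your Region.

```python
import json

import boto3
import sagemaker
from sagemaker.model import Model

# Assumptions: image URI, entrypoint value, endpoint name, instance type, and
# parameter values below are illustrative placeholders. Verify them against the
# notebook and the Available Deep Learning Containers Images page.
role = sagemaker.get_execution_role()

image_uri = (
    "763104351884.dkr.ecr.us-east-1.amazonaws.com/"
    "djl-inference:0.33.0-lmi15.0.0-cu128"
)

env = {
    "HF_MODEL_ID": "meta-llama/Llama-4-Scout-17B-16E",
    "HF_TOKEN": "<your-hugging-face-token>",  # required for gated models
    "OPTION_MAX_MODEL_LEN": "8192",
    "OPTION_MAX_ROLLING_BATCH_SIZE": "64",
    "OPTION_MODEL_LOADING_TIMEOUT": "1500",
    "SERVING_FAIL_FAST": "true",
    "OPTION_ROLLING_BATCH": "disable",
    "OPTION_ASYNC_MODE": "true",
    # Entrypoint for vLLM's async integration; confirm the exact value in the notebook.
    "OPTION_ENTRYPOINT": "djl_python.lmi_vllm.vllm_async_service",
}

# Create the model artifact with the LMI v15 container and configuration.
model = Model(image_uri=image_uri, role=role, env=env)

# Deploy the model to a SageMaker AI real-time endpoint.
endpoint_name = "lmi-v15-llama4-scout"
model.deploy(
    initial_instance_count=1,
    instance_type="ml.p5.48xlarge",
    endpoint_name=endpoint_name,
    container_startup_health_check_timeout=1500,
)

# Invoke the endpoint with a Chat Completions style payload (InvokeEndpoint).
runtime = boto3.client("sagemaker-runtime")
payload = {
    "messages": [
        {"role": "user", "content": "Name popular places to visit in London."}
    ],
    "max_tokens": 256,
}
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))
```

For streaming responses, call InvokeEndpointWithResponseStream instead and set "stream": true in the payload.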
To run multimodal inference with Llama 4 Scout, see the notebook for the full code sample to run inference requests with images; a minimal payload sketch follows.
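As a rough idea of what such a request looks like, here is a minimal sketch of a multimodal Chat Completions payload; the image URL and prompt are placeholders, and the notebook remains the authoritative, tested example.

```python
# A minimal sketch of a multimodal (image + text) Chat Completions request.
# The image URL and prompt are placeholders; see the notebook for the full sample.
multimodal_payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample-image.jpg"},
                },
            ],
        }
    ],
    "max_tokens": 256,
}
```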
Conclusion
Amazon SageMaker LMI container v15 represents a significant step forward in large model inference capabilities. With the new vLLM V1 engine, async operating mode, expanded model support, and optimized performance, you can deploy cutting-edge LLMs with greater performance and flexibility. The container’s configurable options give you the flexibility to fine-tune deployments for your specific needs, whether optimizing for latency, throughput, or cost.
We encourage you to explore this release for deploying your generative AI models.
Check out the provided example notebooks to start deploying models with LMI v15.
About the authors
Vivek Gangasani is a Lead Specialist Solutions Architect for Inference at AWS. He helps emerging generative AI companies build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of large language models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.
Siddharth Venkatesan is a Software Engineer in AWS Deep Learning. He currently focuses on building solutions for large model inference. Prior to AWS, he worked in the Amazon Grocery org building new payment solutions for customers worldwide. Outside of work, he enjoys skiing, the outdoors, and watching sports.
Felipe Lopez is a Senior AI/ML Specialist Solutions Architect at AWS. Prior to joining AWS, Felipe worked with GE Digital and SLB, where he focused on modeling and optimization products for industrial applications.
Banu Nagasundaram leads product, engineering, and strategic partnerships for Amazon SageMaker JumpStart, the SageMaker machine learning and generative AI hub. She is passionate about building solutions that help customers accelerate their AI journey and unlock business value.
Dmitry Soldatkin is a Senior AI/ML Solutions Architect at Amazon Web Services (AWS), helping customers design and build AI/ML solutions. Dmitry’s work covers a wide range of ML use cases, with a primary interest in generative AI, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, utilities, and telecommunications. You can connect with Dmitry on LinkedIn.