Achieve up to ~2x higher throughput while reducing costs by up to ~50% for generative AI inference on Amazon SageMaker with the new inference optimization toolkit – Part 2


As generative artificial intelligence (AI) inference becomes increasingly critical for businesses, customers are seeking ways to scale their generative AI operations or integrate generative AI models into existing workflows. Model optimization has emerged as a crucial step, allowing organizations to balance cost-effectiveness and responsiveness, improving productivity. However, price-performance requirements vary widely across use cases. For chat applications, minimizing latency is key to offering an interactive experience, whereas real-time applications like recommendations require maximizing throughput. Navigating these trade-offs poses a significant challenge to rapidly adopting generative AI, because you must carefully select and evaluate different optimization techniques.

To address these challenges, we’re excited to introduce the inference optimization toolkit, a fully managed model optimization feature in Amazon SageMaker. This new feature delivers up to ~2x higher throughput while reducing costs by up to ~50% for generative AI models such as Llama 3, Mistral, and Mixtral models. For example, with a Llama 3-70B model, you can achieve up to ~2400 tokens/sec on an ml.p5.48xlarge instance vs. ~1200 tokens/sec previously without any optimization.

This inference optimization toolkit uses the latest generative AI model optimization techniques such as compilation, quantization, and speculative decoding to help you reduce the time to optimize generative AI models from months to hours, while achieving the best price-performance for your use case. For compilation, the toolkit uses the Neuron Compiler to optimize the model’s computational graph for specific hardware, such as AWS Inferentia, enabling faster runtimes and reduced resource utilization. For quantization, the toolkit uses Activation-aware Weight Quantization (AWQ) to efficiently shrink the model size and memory footprint while preserving quality. For speculative decoding, the toolkit employs a faster draft model to predict candidate outputs in parallel, enhancing inference speed for longer text generation tasks. To learn more about each technique, refer to Optimize model inference with Amazon SageMaker. For more details and benchmark results for popular open source models, see Achieve up to ~2x higher throughput while reducing costs by up to ~50% for generative AI inference on Amazon SageMaker with the new inference optimization toolkit – Part 1.

In this post, we demonstrate how to get started with the inference optimization toolkit for supported models in Amazon SageMaker JumpStart and the Amazon SageMaker Python SDK. SageMaker JumpStart is a fully managed model hub that allows you to explore, fine-tune, and deploy popular open source models with just a few clicks. You can use pre-optimized models or create your own custom optimizations. Alternatively, you can accomplish this using the SageMaker Python SDK, as shown in the following notebook. For the full list of supported models, refer to Optimize model inference with Amazon SageMaker.

Using pre-optimized models in SageMaker JumpStart

The inference optimization toolkit provides pre-optimized models that have been optimized for best-in-class cost-performance at scale, without any impact to accuracy. You can choose the configuration based on the latency and throughput requirements of your use case and deploy in one click.

Taking the Meta-Llama-3-8b model in SageMaker JumpStart as an example, you can choose Deploy from the model page. In the deployment configuration, you can expand the model configuration options, select the number of concurrent users, and deploy the optimized model.

Deploying a pre-optimized model with the SageMaker Python SDK

You can also deploy a pre-optimized generative AI model using the SageMaker Python SDK in just a few lines of code. In the following code, we set up a ModelBuilder class for the SageMaker JumpStart model. ModelBuilder is a class in the SageMaker Python SDK that provides fine-grained control over various deployment aspects, such as instance types, network isolation, and resource allocation. You can use it to create a deployable model instance, converting framework models (like XGBoost or PyTorch) or Inference Specs into SageMaker-compatible models for deployment. Refer to Create a model in Amazon SageMaker with ModelBuilder for more details.

# imports from the SageMaker Python SDK (module paths current at the time of writing)
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

sample_input = {
    "inputs": "Hello, I'm a language model,",
    "parameters": {"max_new_tokens": 128, "do_sample": True}
}

sample_output = [
    {
        "generated_text": "Hello, I'm a language model, and I'm here to help you with your English."
    }
]
schema_builder = SchemaBuilder(sample_input, sample_output)

builder = ModelBuilder(
    model="meta-textgeneration-llama-3-8b",  # JumpStart model ID
    schema_builder=schema_builder,
    role_arn=role_arn,  # your SageMaker execution role ARN
)

List the available pre-benchmarked configurations with the following code:

builder.display_benchmark_metrics()

Choose the appropriate instance_type and config_name from the list based on your concurrent users, latency, and throughput requirements. The displayed table shows the latency and throughput across different concurrency levels for each instance type and config name. If the config name is lmi-optimized, the configuration is pre-optimized by SageMaker. Then you can call .build() to run the optimization job. When the job is complete, you can deploy to an endpoint and test the model predictions. See the following code:

# set deployment config with pre-configured optimization
builder.set_deployment_config(
    instance_type="ml.g5.12xlarge", 
    config_name="lmi-optimized"
)

# build the deployable model
model = builder.build()

# deploy the model to a SageMaker endpoint
predictor = model.deploy(accept_eula=True)

# use sample input payload to test the deployed endpoint
predictor.predict(sample_input)
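
If the endpoint response follows the same shape as the sample_output registered in the SchemaBuilder earlier (an assumption, not guaranteed for every model and container), you can pull out just the generated text, as in this minimal sketch:

# minimal sketch, assuming the response matches the sample_output structure defined above
response = predictor.predict(sample_input)
print(response[0]["generated_text"])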

Using the inference optimization toolkit to create custom optimizations

In addition to using a pre-optimized model, you can create custom optimizations based on the instance type you choose. The following table provides a full list of available combinations. In the following sections, we explore compilation on AWS Inferentia first, and then try the other optimization techniques for GPU instances.

Instance Types | Optimization Technique | Configurations
AWS Inferentia | Compilation | Neuron Compiler
GPUs | Quantization | AWQ
GPUs | Speculative Decoding | SageMaker provided or Bring Your Own (BYO) draft model

Compilation from SageMaker JumpStart

For compilation, you can select the same Meta-Llama-3-8b model from SageMaker JumpStart and choose Optimize on the model page. In the optimization configuration page, you can choose ml.inf2.8xlarge as your instance type. Then provide an output Amazon Simple Storage Service (Amazon S3) location for the optimized artifacts. For large models like Llama 2 70B, for example, the compilation job can take more than an hour. Therefore, we recommend using the inference optimization toolkit to perform ahead-of-time compilation. That way, you only need to compile one time.

Compilation using the SageMaker Python SDK

For the SageMaker Python SDK, you can configure the compilation by changing the environment variables in the .optimize() function. For more details on compilation_config, refer to the LMI NeuronX ahead-of-time compilation of models tutorial.

compiled_model = builder.optimize(
    instance_type="ml.inf2.8xlarge",
    accept_eula=True,
    compilation_config={
        "OverrideEnvironment": {
            "OPTION_TENSOR_PARALLEL_DEGREE": "2",
            "OPTION_N_POSITIONS": "2048",
            "OPTION_DTYPE": "fp16",
            "OPTION_ROLLING_BATCH": "auto",
            "OPTION_MAX_ROLLING_BATCH_SIZE": "4",
            "OPTION_NEURON_OPTIMIZE_LEVEL": "2",
        }
    },
    output_path=f"s3://{output_bucket_name}/compiled/"
)

# deploy the compiled model to a SageMaker endpoint
predictor = compiled_model.deploy(accept_eula=True)

# use sample input payload to test the deployed endpoint
predictor.predict(sample_input)

Quantization and speculative decoding from SageMaker JumpStart

For optimizing models on GPU, ml.g5.12xlarge is the default deployment instance type for Llama-3-8b. You can choose quantization, speculative decoding, or both as optimization options. Quantization uses AWQ to reduce the model’s weights to low-bit (INT4) representations. Finally, you can provide an output S3 URL to store the optimized artifacts.

With speculative decoding, you can improve latency and throughput by either using the SageMaker-provided draft model or bringing your own draft model from the public Hugging Face model hub or your own S3 bucket.

After the optimization job is complete, you can deploy the model or run further evaluation jobs on the optimized model. On the SageMaker Studio UI, you can choose to use the default sample datasets or provide your own using an S3 URI. At the time of writing, the evaluate performance option is only available through the Amazon SageMaker Studio UI.

Quantization and speculative decoding using the SageMaker Python SDK

The following is the SageMaker Python SDK code snippet for quantization. You just need to provide the quantization_config attribute in the .optimize() function.

optimized_model = builder.optimize(
    instance_type="ml.g5.12xlarge",
    accept_eula=True,
    quantization_config={
        "OverrideEnvironment": {
            "OPTION_QUANTIZE": "awq"
        }
    },
    output_path=f"s3://{output_bucket_name}/quantized/"
)

# deploy the optimized model to a SageMaker endpoint
predictor = optimized_model.deploy(accept_eula=True)

# use sample input payload to test the deployed endpoint
predictor.predict(sample_input)

For speculative decoding, you can switch to a speculative_decoding_config attribute and configure either the SageMaker-provided draft model or a custom model. You might need to adjust the GPU utilization based on the sizes of the draft and target models to fit them both on the instance for inference.

optimized_model = builder.optimize(
    instance_type="ml.g5.12xlarge",
    accept_eula=True,
    speculative_decoding_config={
        "ModelProvider": "sagemaker"
    }
    # speculative_decoding_config={
    #     "ModelProvider": "custom",
    #     # use an S3 URI or Hugging Face model ID for a custom draft model
    #     # note: using a Hugging Face model ID as the draft model requires HF_TOKEN in environment variables
    #     "ModelSource": "s3://custom-bucket/draft-model",
    # }
)

# deploy the optimized model to a SageMaker endpoint
predictor = optimized_model.deploy(accept_eula=True)

# use sample input payload to test the deployed endpoint
predictor.predict(sample_input)
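
The commented-out configuration in the preceding snippet shows the bring your own (BYO) draft model path. The following is a minimal sketch of that variant, assuming a draft model you have already uploaded to your own S3 bucket (the bucket and prefix shown are hypothetical):

# minimal sketch of the BYO draft model option; the S3 path below is hypothetical
# note: to use a Hugging Face model ID as ModelSource instead, HF_TOKEN must be set in environment variables
custom_sd_model = builder.optimize(
    instance_type="ml.g5.12xlarge",
    accept_eula=True,
    speculative_decoding_config={
        "ModelProvider": "custom",
        "ModelSource": "s3://custom-bucket/draft-model",
    },
)

# deploy and test the same way as with the SageMaker-provided draft model
predictor = custom_sd_model.deploy(accept_eula=True)
predictor.predict(sample_input)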

Conclusion

Optimizing generative AI models for inference performance is crucial for delivering cost-effective and responsive generative AI solutions. With the launch of the inference optimization toolkit, you can now optimize your generative AI models using the latest techniques such as speculative decoding, compilation, and quantization to achieve up to ~2x higher throughput and reduce costs by up to ~50%. This helps you achieve the optimal price-performance balance for your specific use cases with just a few clicks in SageMaker JumpStart or a few lines of code using the SageMaker Python SDK. The inference optimization toolkit significantly simplifies the model optimization process, enabling your business to accelerate generative AI adoption and unlock more opportunities to drive better business outcomes.

To learn more, refer to Optimize model inference with Amazon SageMaker and Achieve up to ~2x higher throughput while reducing costs by up to ~50% for generative AI inference on Amazon SageMaker with the new inference optimization toolkit – Part 1.


About the Authors

James Wu is a Senior AI/ML Specialist Solutions Architect
Saurabh Trikande is a Senior Product Manager
Rishabh Ray Chaudhury is a Senior Product Manager
Kumara Swami Borra is a Front End Engineer
Alwin (Qiyun) Zhao is a Senior Software Development Engineer
Qing Lan is a Senior SDE
