Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker


Last week, Technology Innovation Institute (TII) launched TII Falcon LLM, an open-source foundational large language model (LLM). Trained on 1 trillion tokens with Amazon SageMaker, Falcon boasts top-notch performance (#1 on the Hugging Face leaderboard at the time of writing) while being comparatively lightweight and less expensive to host than other LLMs such as llama-65B. In this post, we demonstrate how to deploy Falcon for applications like language understanding and automated writing assistance using large model inference deep learning containers on SageMaker.

The Falcon has landed on SageMaker

TII is the applied research arm of Abu Dhabi's Advanced Technology Research Council; its team of scientists, researchers, and engineers is dedicated to discovering transformative technologies and developing scientific breakthroughs that will future-proof our society. Earlier this year, TII set out to train a state-of-the-art, open-source LLM and used the infrastructure, tooling, and expertise of SageMaker to get the job done (to learn more about how this model was trained on SageMaker, refer to Technology Innovation Institute trains the state-of-the-art Falcon LLM 40B foundation model on Amazon SageMaker). The result of this effort is TII Falcon LLM.

Trained on 1 trillion tokens, Falcon boasts top-notch performance against the Eleuther AI Language Model Evaluation Harness and is currently #1 on the Hugging Face leaderboard for accuracy. The model is available in two sizes, Falcon-40B and Falcon-7B, and can be used for state-of-the-art performance in applications such as language understanding, conversational experiences, and automated writing assistance. This post will help you get started with deploying Falcon on SageMaker for high-accuracy inference in these types of domains.

SageMaker large model inference DLCs simplify LLM hosting

Hosting LLMs such as Falcon-40B and Falcon-7B can be challenging. Larger models are often more accurate because they include billions of parameters, but their size can also result in slower inference latency or worse throughput. Hosting an LLM can require more GPU memory and optimized kernels to achieve acceptable performance. To further complicate things, although smaller models such as Falcon-7B can generally fit on a single GPU, such as the NVIDIA A10G that powers AWS G5 instance types, larger models like Falcon-40B cannot. When this happens, strategies such as tensor parallelism must be used to shard the larger model into multiple pieces and take advantage of the memory of multiple GPUs. Legacy hosting solutions used for smaller models typically don't offer this type of functionality, adding to the difficulty.

SageMaker large model inference (LMI) deep learning containers (DLCs) can help. LMI DLCs are a complete end-to-end solution for hosting LLMs like Falcon-40B. On the front end, they include a high-performance model server (DJL Serving) designed for large model inference, with features such as token streaming and automatic model replication within an instance to increase throughput. On the back end, LMI DLCs also include several high-performance model parallel engines, such as DeepSpeed and FasterTransformer, that can shard and manage model parameters across multiple GPUs. These engines also include optimized kernels for popular transformer models, which can accelerate inference by up to three times. With LMI DLCs, you simply need to create a configuration file to get started with LLM hosting on SageMaker. To learn more about SageMaker LMI DLCs, refer to Model parallelism and large model inference and our list of available images. You can also check out our previous post about hosting Bloom-175B on SageMaker using LMI DLCs.

Solution overview

This post walks you through how to host Falcon-40B using DeepSpeed on SageMaker with LMI DLCs. Falcon-40B requires multiple A10 GPUs, whereas Falcon-7B only requires a single GPU. We have also prepared examples you can reference to host Falcon-40B and Falcon-7B using both DeepSpeed and Accelerate. You can find our code examples on GitHub.

This example can be run in SageMaker notebook instances or Amazon SageMaker Studio notebooks. For hosting Falcon-40B using LMI and DeepSpeed, we need to use an ml.g5.24xlarge instance. These instances provide four NVIDIA A10G GPUs with 24 GiB of GPU memory each, for 96 GiB of GPU memory in total. In addition, the host provides 96 vCPUs and 384 GiB of host memory. The LMI container takes care of much of the undifferentiated heavy lifting associated with hosting LLMs, including downloading the model and partitioning the model artifact so that its parameters can be spread across multiple GPUs.

Quotas for SageMaker machine learning (ML) instances can vary between accounts. If you receive an error indicating you've exceeded your quota for g5.24xlarge instances while following this post, you can increase the limit through the Service Quotas console.
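If you prefer to request the increase programmatically, the following is a minimal sketch using the Service Quotas API. The quota name matching is an assumption; check the Service Quotas console for the exact quota that applies to your account.

import boto3

# Hypothetical sketch: look up the SageMaker quota for ml.g5.24xlarge endpoint usage
# (the exact quota name may differ) and request an increase to one instance
sq_client = boto3.client("service-quotas")

paginator = sq_client.get_paginator("list_service_quotas")
target = next(
    (
        quota
        for page in paginator.paginate(ServiceCode="sagemaker")
        for quota in page["Quotas"]
        if "ml.g5.24xlarge" in quota["QuotaName"] and "endpoint" in quota["QuotaName"].lower()
    ),
    None,
)

if target is not None:
    response = sq_client.request_service_quota_increase(
        ServiceCode="sagemaker",
        QuotaCode=target["QuotaCode"],
        DesiredValue=1.0,  # adjust if your current quota is already higher
    )
    print(response["RequestedQuota"]["Status"])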

Notebook walkthrough

To begin, we install and import the necessary dependencies for our example. We use the Boto3 SDK as well as the SageMaker SDK. Note that we use Amazon Simple Storage Service (Amazon S3) to store the model artifacts that SageMaker and LMI need, so we set up an S3 prefix variable accordingly. See the following code:

import sagemaker
import jinja2
from sagemaker import image_uris
import boto3
import os
import time
import json
from pathlib import Path
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()  # execution role for the endpoint
sess = sagemaker.session.Session()  # SageMaker session for interacting with different AWS APIs
bucket = sess.default_bucket()  # bucket to house artifacts
model_bucket = sess.default_bucket()  # bucket to house artifacts
s3_code_prefix_deepspeed = "hf-large-model-djl-/code_falcon40b/deepspeed"  # folder within bucket where code artifact will go
region = sess._region_name
account_id = sess.account_id()
s3_client = boto3.client("s3")
sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")
jinja_env = jinja2.Environment()

We then create a local folder for our workspace to store our model artifacts:

!mkdir -p code_falcon40b_deepspeed

We first create a serving.properties configuration file in the local directory we created. This serving.properties file indicates to the LMI container and the front-end DJL Serving library which model parallelization and inference optimization engine we want to use. You can find the configuration options for both DeepSpeed and Hugging Face Accelerate in Configurations and settings. Here, note that we set the option.model_id parameter to define which Hugging Face model to pull. SageMaker makes working with Hugging Face models simple, and this one line is all you need. In addition, we set option.tensor_parallel_degree to a value of 4 because we have four GPUs on our ml.g5.24xlarge instance. This parameter defines how many partitions of the model to create and distribute. Note that if we had used a larger instance with eight GPUs, such as ml.g5.48xlarge, and still set a value of 4, then LMI would automatically create two replicas of the model (two replicas spread across four GPUs each). See the following code:

%%writefile ./code_falcon40b_deepspeed/serving.properties
engine=Python
#to deploy falcon-40b-instruct set the model_id value to 'tiiuae/falcon-40b-instruct'
option.model_id=tiiuae/falcon-40b
option.tensor_parallel_degree=4
#option.s3url = {{s3url}}

You can also swap out tiiuae/falcon-40b with tiiuae/falcon-40b-instruct if it suits your needs better.

We also include a requirements.txt file that you can use to install any packages you require:

%%writefile ./code_falcon40b_deepspeed/requirements.txt
einops
torch==2.0.1

The last thing we need is the model.py file that will be used with your model:

%%writefile ./code_falcon40b_deepspeed/model.py
from djl_python import Input, Output
import os
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from typing import Any, Dict, Tuple
import warnings

predictor = None


def get_model(properties):
    model_name = properties["model_id"]
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    generator = pipeline(
        task="text-generation", model=model, tokenizer=tokenizer, device_map="auto"
    )
    return generator


def handle(inputs: Input) -> None:
    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())
    if inputs.is_empty():
        # Model server makes an empty call to warm up the model on startup
        return None
    data = inputs.get_as_json()
    text = data["text"]
    text_length = data["text_length"]
    outputs = predictor(text, do_sample=True, min_length=text_length, max_length=text_length)
    result = {"outputs": outputs}
    return Output().add_as_json(result)

That's it! At this point, we have created all the artifacts you need to deploy Falcon-40B with DeepSpeed. We package the directory into a *.tar.gz file and upload it to Amazon S3. Note that the actual model has not been downloaded or packaged into this file. The LMI container will download the model for you from Hugging Face directly. You also have the option to target an S3 bucket if you would like your own copy of the model in a location that is more performant to download from. LMI also includes optimizations for downloading from Amazon S3 with high performance.
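The packaging step is a simple archive of the code directory; a minimal sketch, assuming the directory name created earlier, looks like the following:

# Package the inference code (serving.properties, requirements.txt, model.py) into a tarball;
# the model weights themselves are not included
!tar czvf model.tar.gz code_falcon40b_deepspeed/

We then upload the tarball to Amazon S3: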

s3_code_artifact_deepspeed = sess.upload_data("model.tar.gz", bucket, s3_code_prefix_deepspeed)
print(f"S3 Code or Model tar for deepspeed uploaded to --- > {s3_code_artifact_deepspeed}")

All that's left to do at this point is to define the container we want to use and create a model object:

inference_image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.22.1-deepspeed0.8.3-cu118"
)
model_name = name_from_base("falcon40b-model-ds")
create_model_response = sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={"Image": inference_image_uri, "ModelDataUrl": s3_code_artifact_deepspeed},
)
model_arn = create_model_response["ModelArn"]

Then we create an endpoint configuration and create the endpoint:


endpoint_config_name = f"{model_name}-config"
endpoint_name = f"{model_name}-endpoint"
endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name,
            "InstanceType": "ml.g5.24xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 3600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
            # "VolumeSizeInGB": 512
        },
    ],
)
endpoint_config_response

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}", EndpointConfigName=endpoint_config_name
)
print(f"Created Endpoint: {create_endpoint_response['EndpointArn']}")

Configuration items to keep in mind for successful hosting

An important consideration for large model hosting is ensuring there is adequate time for the model to download from Hugging Face. In our tests, Falcon-40B took about 90 minutes to download onto the instance. A key set of configurations to allow for this are ContainerStartupHealthCheckTimeoutInSeconds and ModelDataDownloadTimeoutInSeconds. Make sure the SageMaker endpoint configuration has a value of 3600 for each of these. Additionally, it is much faster to download from Amazon S3 instead of the original model hub using the LMI containers, which are specially designed for LLMs and use the s5cmd utility; this cuts the model download time to around 10 minutes.
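To point the container at an S3 copy of the model, you can use the option.s3url setting in serving.properties instead of option.model_id, as in the following sketch (the bucket path is a placeholder for your own copy of the model weights):

%%writefile ./code_falcon40b_deepspeed/serving.properties
engine=Python
option.tensor_parallel_degree=4
# Placeholder: point to your own S3 copy of the Falcon-40B model artifacts
option.s3url=s3://my-bucket/falcon-40b/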

You can monitor the status of the endpoint by calling DescribeEndpoint, which will tell you when everything is complete. Your endpoint is now ready to respond to inference requests! Because LMI handles the model partitioning and orchestration for you, each request will be processed using all four GPUs available on our ml.g5.24xlarge instance. This allows us to host LLMs and increase performance as we scale GPU accelerators horizontally. See the following code:

response_model = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps({"text": "What is the purpose of life?", "text_length": 150}),
    ContentType="application/json",
)

response_model["Body"].read().decode("utf8")

When you are done and want to delete the endpoint, endpoint configuration, and model object, you can run the following commands:

sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)

The code we referenced in this post can be found in the full notebook on GitHub.

Conclusion

SageMaker Hosting and the LMI DLC make it easy for you to host LLMs like Falcon-40B. They take on the undifferentiated heavy lifting of orchestrating what is required to host models across multiple GPUs and provide configurable options to suit your needs. In addition, using Hugging Face models becomes very straightforward, with built-in support for these models.

In this post, we showed how you can use SageMaker to host the Falcon-40B model using DeepSpeed. In addition, we provided examples on GitHub for hosting Falcon-40B using Accelerate, as well as the smaller Falcon-7B model. We encourage you to give this a try on SageMaker with LMI and get hands-on with the best-performing publicly available LLM to date!


About the authors

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time, he enjoys seeking out new cultures and new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Abhi Shivaditya is a Senior Solutions Architect at AWS, working with strategic global enterprise organizations to facilitate the adoption of AWS services in areas such as artificial intelligence, distributed computing, networking, and storage. His expertise lies in deep learning in the domains of natural language processing (NLP) and computer vision. Abhi assists customers in deploying high-performance machine learning models efficiently within the AWS ecosystem.

Robert Van Dusen is a Senior Product Manager with Amazon SageMaker. He leads deep learning model optimization for applications such as large model inference.

Evandro Franco is an AI/ML Specialist Solutions Architect at Amazon Web Services. He helps AWS customers overcome business challenges related to AI/ML on top of AWS. He has more than 15 years of experience working with technology, from software development and infrastructure to serverless and machine learning.

Qing Lan is a Software program Improvement Engineer in AWS. He has been engaged on a number of difficult merchandise in Amazon, together with excessive efficiency ML inference options and excessive efficiency logging system. Qing’s group efficiently launched the primary Billion-parameter mannequin in Amazon Promoting with very low latency required. Qing has in-depth information on the infrastructure optimization and Deep Studying acceleration.

Frank Liu is a Software program Engineer for AWS Deep Studying. He focuses on constructing progressive deep studying instruments for software program engineers and scientists. In his spare time, he enjoys mountain climbing with family and friends.
