Deploying your Large Language Model (LLM) is not necessarily the final step in productionizing your Generative AI application. An often forgotten, yet crucial part of the MLOps lifecycle is properly load testing your LLM and ensuring it is ready to withstand your expected production traffic. Load testing, at a high level, is the practice of testing your application, or in this case your model, with the traffic it would expect in a production environment to ensure that it is performant.
In the past we've discussed load testing traditional ML models using open source Python tools such as Locust. Locust helps capture general performance metrics such as requests per second (RPS) and latency percentiles on a per-request basis. While this is effective with more traditional APIs and ML models, it doesn't capture the full story for LLMs.
LLMs traditionally have a much lower RPS and higher latency than traditional ML models due to their size and larger compute requirements. In general, the RPS metric does not really provide the most accurate picture either, as requests can vary greatly depending on the input to the LLM. For instance, you might have one query asking to summarize a large chunk of text and another query that only requires a one-word response.
This is why tokens are seen as a much more accurate representation of an LLM's performance. At a high level a token is a chunk of text: whenever an LLM processes your input, it "tokenizes" that input. What exactly a token is differs depending on the specific LLM you are using, but you can think of it, in essence, as a word, a sequence of words, or a set of characters.
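To make this concrete, here is a minimal sketch of tokenization using the Hugging Face transformers library. This is purely illustrative: the GPT-2 tokenizer below is an assumption for demonstration and is not the tokenizer that Claude or any other hosted LLM actually uses.

from transformers import AutoTokenizer

# Illustrative only: GPT-2's tokenizer, not the tokenizer of your hosted LLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Summarize the following paragraph in one sentence."
tokens = tokenizer.tokenize(text)
print(tokens)       # the individual chunks of text (tokens)
print(len(tokens))  # the token count for this input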

What we'll do in this article is explore how we can generate token-based metrics so we can understand how your LLM is performing from a serving/deployment perspective. After this article you'll have an idea of how you can set up a load-testing tool specifically to benchmark different LLMs, whether you are evaluating many models, different deployment configurations, or a combination of both.
Let's get hands on! If you are more of a video-based learner, feel free to follow my corresponding YouTube video down below:
NOTE: This article assumes a basic understanding of Python, LLMs, and Amazon Bedrock/SageMaker. If you are new to Amazon Bedrock, please refer to my starter guide here. If you want to learn more about SageMaker JumpStart LLM deployments, refer to the video here.
DISCLAIMER: I am a Machine Learning Architect at AWS and my opinions are my own.
Table of Contents
- LLM-Specific Metrics
- LLMPerf Intro
- Applying LLMPerf to Amazon Bedrock
- Additional Resources & Conclusion
LLM-Specific Metrics
As we briefly discussed in the introduction with regard to LLM hosting, token-based metrics generally provide a much better representation of how your LLM responds to different payload sizes or types of queries (summarization vs QnA).
Traditionally we have always tracked RPS and latency, which we will still see here, but more so at a token level. Here are some of the metrics to be aware of before we get started with load testing:
- Time to First Token: This is the duration it takes for the first token to be generated. It is especially useful when streaming. For instance, when using ChatGPT we start processing information as soon as the first piece of text (token) appears.
- Total Output Tokens Per Second: This is the total number of tokens generated per second; you can think of it as a more granular alternative to the requests per second we traditionally track.
These are the major metrics that we'll focus on, and there are a few others, such as inter-token latency, that will also be displayed as part of the load tests. Keep in mind that these metrics are also influenced by the expected input and output token sizes. We specifically play with these parameters to get an accurate understanding of how our LLM performs in response to different generation tasks.
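As a rough illustration of how these two numbers are computed, the sketch below times a hypothetical streaming generator. The stream_tokens callable is a stand-in for whatever streaming client you use; it is not an LLMPerf or Bedrock API.

import time

def measure_token_metrics(stream_tokens, prompt):
    # stream_tokens is assumed to yield output tokens one at a time
    start = time.perf_counter()
    first_token_time = None
    num_tokens = 0
    for _ in stream_tokens(prompt):
        if first_token_time is None:
            # time to first token: delay until the first streamed token arrives
            first_token_time = time.perf_counter() - start
        num_tokens += 1
    total_time = time.perf_counter() - start
    return {
        "ttft_s": first_token_time,
        "output_tokens_per_s": num_tokens / total_time if total_time > 0 else 0.0,
    }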
Now let's take a look at a tool that enables us to toggle these parameters and display the relevant metrics we need.
LLMPerf Intro
LLMPerf is built on top of Ray, a popular distributed computing Python framework. LLMPerf specifically leverages Ray to create distributed load tests where we can simulate real-time production level traffic.
Note that any load-testing tool is only going to be able to generate your expected amount of traffic if the client machine it runs on has enough compute power to match that load. For instance, as you scale the concurrency or throughput expected for your model, you would also want to scale the client machine(s) where you are running your load test.
Now specifically within LLMPerf there are a few exposed parameters that are tailored for LLM load testing, as we've discussed:
- Model: This is the model provider and the hosted model that you're working with. For our use case it will be Amazon Bedrock and Claude 3 Sonnet specifically.
- LLM API: This is the API format in which the payload should be structured. We use LiteLLM, which provides a standardized payload structure across different model providers, thus simplifying the setup process for us, especially if we want to test different models hosted on different platforms.
- Input Tokens: The mean input token length; you can also specify a standard deviation for this number.
- Output Tokens: The mean output token length; you can also specify a standard deviation for this number.
- Concurrent Requests: The number of concurrent requests for the load test to simulate.
- Test Duration: You can control the duration of the test; this parameter is specified in seconds.
LLMPerf exposes all of these parameters through its token_benchmark_ray.py script, which we configure with our specific values. Let's take a look now at how we can configure this specifically for Amazon Bedrock.
Applying LLMPerf to Amazon Bedrock
Setup
For this example we'll be working in a SageMaker Classic Notebook Instance with a conda_python3 kernel on an ml.g5.12xlarge instance. Note that you want to select an instance that has enough compute to generate the traffic load that you want to simulate. Ensure that you also have your AWS credentials in place for LLMPerf to access the hosted model, be it on Bedrock or SageMaker.
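Before kicking off any tests, it can help to confirm that those credentials actually resolve from the notebook. A quick sanity check with boto3 (my own addition, not part of LLMPerf) could look like the following:

import boto3

# Confirm the notebook's AWS credentials are valid and see which identity they map to
sts = boto3.client("sts")
print(sts.get_caller_identity()["Arn"])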
LiteLLM Configuration
We first configure our LLM API structure of choice, which is LiteLLM in this case. LiteLLM supports a variety of model providers; here we configure the completion API to work with Amazon Bedrock:
import os
from litellm import completion

# Set AWS credentials so LiteLLM can reach Amazon Bedrock
os.environ["AWS_ACCESS_KEY_ID"] = "Enter your access key ID"
os.environ["AWS_SECRET_ACCESS_KEY"] = "Enter your secret access key"
os.environ["AWS_REGION_NAME"] = "us-east-1"

# Invoke Claude 3 Sonnet on Bedrock through the LiteLLM completion API
response = completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"content": "Who is Roger Federer?", "role": "user"}]
)
output = response.choices[0].message.content
print(output)
To work with Bedrock we configure the Model ID to point towards Claude 3 Sonnet and pass in our prompt. The neat part with LiteLLM is that the messages key has a consistent format across model providers.
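For instance, if we later wanted to hit a model behind a SageMaker endpoint instead, only the model string would need to change while the messages payload stays the same. The endpoint name below is purely a hypothetical placeholder:

from litellm import completion

# Hypothetical SageMaker endpoint name; note the messages payload is unchanged
response = completion(
    model="sagemaker/my-llama-jumpstart-endpoint",
    messages=[{"content": "Who is Roger Federer?", "role": "user"}]
)
print(response.choices[0].message.content)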
Post-execution here we can focus on configuring LLMPerf for Bedrock specifically.
LLMPerf Bedrock Integration
To execute a load test with LLMPerf we can simply use the provided token_benchmark_ray.py script and pass in the following parameters that we talked about earlier:
- Input Tokens Mean & Standard Deviation
- Output Tokens Mean & Standard Deviation
- Max number of completed requests for the test
- Duration of the test
- Concurrent requests
In this case we also specify our API format to be LiteLLM, and we can execute the load test with a simple shell script like the following:
%%sh
python llmperf/token_benchmark_ray.py \
--model bedrock/anthropic.claude-3-sonnet-20240229-v1:0 \
--mean-input-tokens 1024 \
--stddev-input-tokens 200 \
--mean-output-tokens 1024 \
--stddev-output-tokens 200 \
--max-num-completed-requests 30 \
--num-concurrent-requests 1 \
--timeout 300 \
--llm-api litellm \
--results-dir bedrock-outputs
In this case we keep the concurrency low, but feel free to toggle this number depending on what you're expecting in production. Our test will run for 300 seconds, and post duration you should see an output directory with two files: one with statistics for each individual inference and one with the mean metrics across all requests in the duration of the test.
We can make this look a little neater by parsing the summary file with pandas:
import json
from pathlib import Path
import pandas as pd

# Load the per-request and summary JSON files produced by the load test
individual_path = Path("bedrock-outputs/bedrock-anthropic-claude-3-sonnet-20240229-v1-0_1024_1024_individual_responses.json")
summary_path = Path("bedrock-outputs/bedrock-anthropic-claude-3-sonnet-20240229-v1-0_1024_1024_summary.json")

with open(individual_path, "r") as f:
    individual_data = json.load(f)
with open(summary_path, "r") as f:
    summary_data = json.load(f)

# Print summary metrics
df = pd.DataFrame(individual_data)
summary_metrics = {
    "Model": summary_data.get("model"),
    "Mean Input Tokens": summary_data.get("mean_input_tokens"),
    "Stddev Input Tokens": summary_data.get("stddev_input_tokens"),
    "Mean Output Tokens": summary_data.get("mean_output_tokens"),
    "Stddev Output Tokens": summary_data.get("stddev_output_tokens"),
    "Mean TTFT (s)": summary_data.get("results_ttft_s_mean"),
    "Mean Inter-token Latency (s)": summary_data.get("results_inter_token_latency_s_mean"),
    "Mean Output Throughput (tokens/s)": summary_data.get("results_mean_output_throughput_token_per_s"),
    "Completed Requests": summary_data.get("results_num_completed_requests"),
    "Error Rate": summary_data.get("results_error_rate")
}

print("Claude 3 Sonnet - Performance Summary:\n")
for k, v in summary_metrics.items():
    print(f"{k}: {v}")
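Since the individual responses are already loaded into a DataFrame, you can also pull per-request latency percentiles. The column names below are assumptions about the per-request fields, so check df.columns against your own output before relying on them:

# Assumed per-request columns; verify with print(df.columns) on your own results
if "ttft_s" in df.columns:
    print(df["ttft_s"].describe(percentiles=[0.5, 0.95, 0.99]))
if "end_to_end_latency_s" in df.columns:
    print(df["end_to_end_latency_s"].describe(percentiles=[0.5, 0.95, 0.99]))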
The final load test results will look something like the following:

As we can see, the output displays the input parameters that we configured along with the corresponding results: time to first token (seconds) and throughput in terms of mean output tokens per second.
In a real-world use case you might use LLMPerf across many different model providers and run tests across those platforms. Used this way, the tool lets you holistically identify the right model and deployment stack for your use case at scale.
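As a sketch of what that could look like, you might loop over several Bedrock model IDs and invoke the same script once per model; the Claude 3 Haiku entry below is just an example of a second candidate.

import subprocess

# Swap in whichever model IDs you want to compare; the second entry is only an example
model_ids = [
    "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
]
for model_id in model_ids:
    subprocess.run([
        "python", "llmperf/token_benchmark_ray.py",
        "--model", model_id,
        "--mean-input-tokens", "1024", "--stddev-input-tokens", "200",
        "--mean-output-tokens", "1024", "--stddev-output-tokens", "200",
        "--max-num-completed-requests", "30",
        "--num-concurrent-requests", "1",
        "--timeout", "300",
        "--llm-api", "litellm",
        "--results-dir", f"outputs-{model_id.split('/')[-1]}",
    ], check=True)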
Additional Resources & Conclusion
The entire code for this sample can be found at the associated GitHub repository. If you also want to work with SageMaker endpoints, you can find a Llama JumpStart deployment load testing sample here.
All in all, load testing and evaluation are both crucial to ensuring that your LLM is performant against your expected traffic before pushing to production. In future articles we'll cover not just the evaluation portion, but how we can create a holistic test with both components.
As always, thank you for reading, and feel free to leave any feedback and connect with me on LinkedIn and X.