Llama-2-7b performance on AWS Inferentia2 (Latency & Throughput)

How fast is Llama-2-7b on Inferentia2? Let’s find out!

For this benchmark we will use the following configurations:

| Model type     | batch_size | sequence_length |
|----------------|------------|-----------------|
| Llama2 7B BS1  | 1          | 4096            |
| Llama2 7B BS4  | 4          | 4096            |
| Llama2 7B BS8  | 8          | 4096            |
| Llama2 7B BS16 | 16         | 4096            |

Note: all models are compiled to use all the cores available on the inf2.48xlarge instance.

Note: please refer to the Inferentia2 product page for details on the available instances.
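
As an illustration, the snippet below shows how such a configuration could be compiled. It is a minimal sketch assuming the Hugging Face optimum-neuron library, the meta-llama/Llama-2-7b-hf checkpoint and fp16 auto-casting, none of which are mandated above; the inf2.48xlarge instance exposes 24 NeuronCores, hence num_cores=24.

```python
from optimum.neuron import NeuronModelForCausalLM

# Static shapes required by the Neuron compiler (here: the "Llama2 7B BS1" configuration)
input_shapes = {"batch_size": 1, "sequence_length": 4096}
# inf2.48xlarge exposes 24 NeuronCores: compile the model to use all of them
compiler_args = {"num_cores": 24, "auto_cast_type": "fp16"}  # fp16 is an assumption

model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # hypothetical checkpoint choice, not specified above
    export=True,                 # compile the model for Inferentia2
    **input_shapes,
    **compiler_args,
)
model.save_pretrained("llama-2-7b-neuron-bs1")
```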

To evaluate the models, we generate tokens up to a total sequence length of 1024, starting from 256 input tokens (i.e. we generate 256, 512 and 768 tokens).

Encoding time (time to first token)

The encoding time, or time to first token, is the time required to process the input tokens and generate the first output token. It is a very important metric, as it corresponds to the latency directly perceived by the user when streaming generated tokens.

We test the encoding time for increasing context sizes: 256 input tokens corresponds roughly to a typical Q/A usage, while 768 input tokens is more typical of a Retrieval Augmented Generation (RAG) use case.

Encoding time is expressed in seconds.
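
A simple way to approximate this metric is to time a generation call limited to a single new token. The sketch below reuses the model compiled in the previous snippet and a hypothetical 256-token prompt; it is an illustration, not the exact harness used to produce the numbers shown here.

```python
import time

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# A prompt of roughly 256 tokens matches the Q/A scenario described above
prompt = "Deep learning accelerators such as AWS Inferentia2 " * 32  # placeholder text
inputs = tokenizer(prompt, return_tensors="pt")

start = time.perf_counter()
model.generate(**inputs, max_new_tokens=1, do_sample=False)  # stop after the first token
encoding_time = time.perf_counter() - start
print(f"Time to first token: {encoding_time:.2f} s")
```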

[Figure: Llama2 7b inferentia2 encoding time]

End-to-end Latency

The end-to-end latency corresponds to the total time to reach a sequence length of 1024 tokens.

It therefore includes the encoding and generation time.

Latency is expressed in seconds.
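
Measuring it amounts to timing a full generation up to the 1024-token budget described earlier. Again, this is a sketch building on the previous snippets, not the actual benchmark code:

```python
import time

input_length = inputs["input_ids"].shape[-1]
max_new_tokens = 1024 - input_length  # generate up to a total sequence length of 1024

start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
end_to_end_latency = time.perf_counter() - start
print(f"End-to-end latency: {end_to_end_latency:.2f} s for {outputs.shape[-1]} total tokens")
```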

[Figure: Llama2 7b inferentia2 end-to-end latency]

Throughput

We adopt the same convention as other benchmarks to evaluate the throughput: we divide the total number of tokens (both input and output) by the end-to-end latency. In other words, the throughput is batch_size * sequence_length divided by the end-to-end latency, which gives the number of tokens processed per second.

Throughput is expressed in tokens/second.
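
Continuing the sketch from the previous sections, this boils down to:

```python
# Total tokens (input + output) divided by the end-to-end latency
batch_size, total_length = outputs.shape
throughput = (batch_size * total_length) / end_to_end_latency
print(f"Throughput: {throughput:.1f} tokens/s")
```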

[Figure: Llama2 7b inferentia2 throughput]