---
datasets:
  - fixie-ai/librispeech_asr
  - fixie-ai/common_voice_17_0
  - fixie-ai/peoples_speech
  - fixie-ai/gigaspeech
  - fixie-ai/multilingual_librispeech
  - fixie-ai/wenetspeech
  - fixie-ai/covost2
language:
  - ar
  - de
  - en
  - es
  - fr
  - hi
  - it
  - ja
  - nl
  - pt
  - ru
  - sv
  - tr
  - uk
  - zh
library_name: transformers
license: mit
metrics:
  - bleu
---

# Model Card for Ultravox

Ultravox is a multimodal Speech LLM built around pretrained Llama3.1-8B-Instruct and whisper-large-v3-turbo backbones.

See https://ultravox.ai for the GitHub repo and more information.

## Model Details

### Model Description

Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message). The input to the model is given as a text prompt with a special <|audio|> pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model will then generate output text as usual.
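For illustration, here is a minimal sketch of what that substitution amounts to: the placeholder token is swapped for a block of audio-derived embeddings before the language model runs. The function name, argument shapes, and token-by-token loop below are simplified assumptions, not the actual processor implementation.

```python
import torch

def merge_audio_embeddings(input_ids: list[int],
                           text_embeds: torch.Tensor,          # [seq_len, hidden]
                           audio_embeds: list[torch.Tensor],   # one [frames, hidden] block per <|audio|>
                           audio_token_id: int) -> torch.Tensor:
    """Splice audio-derived embeddings into the text embedding sequence
    wherever the <|audio|> placeholder token appears (illustrative only)."""
    merged, audio_pos = [], 0
    for i, tok in enumerate(input_ids):
        if tok == audio_token_id:
            merged.append(audio_embeds[audio_pos])      # expand the placeholder into audio frames
            audio_pos += 1
        else:
            merged.append(text_embeds[i].unsqueeze(0))  # keep ordinary text embeddings as-is
    return torch.cat(merged, dim=0)                     # merged sequence fed to the LLM
```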

In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model.

- Developed by: Fixie.ai
- License: MIT

### Model Sources

## Usage

Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, and also to do speech-to-speech translation, analysis of spoken audio, etc.

To use the model, try the following:

```python
# pip install transformers peft librosa

import transformers
import numpy as np
import librosa

pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_4_1-llama-3_1-8b', trust_remote_code=True)

path = "<path-to-input-audio>"  # TODO: pass the audio here
audio, sr = librosa.load(path, sr=16000)

turns = [
  {
    "role": "system",
    "content": "You are a friendly and helpful character. You love to answer questions for people."
  },
]
pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```

## Training Details

The model uses a pre-trained Llama3.1-8B-Instruct backbone as well as the encoder part of whisper-large-v3-turbo.

Only the multimodal adapter is trained, while the Whisper encoder and the Llama backbone are kept frozen.

We use a knowledge-distillation loss in which Ultravox tries to match the logits of the text-based Llama backbone.
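As a rough illustration of this setup (not the actual training code; the function names and the temperature parameter below are assumptions), the backbones are frozen and the speech-conditioned student distribution is pushed toward the distribution of the frozen, text-conditioned Llama teacher:

```python
import torch
import torch.nn.functional as F

def freeze(module: torch.nn.Module) -> None:
    """Keep a backbone (Whisper encoder or Llama) frozen so that only the
    multimodal adapter receives gradients."""
    for p in module.parameters():
        p.requires_grad = False

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence pulling the student's next-token distribution (speech input)
    toward the teacher's distribution (equivalent text input)."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```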

### Training Data

The training dataset is a mix of ASR datasets, extended with continuations generated by Llama 3.1 8B, and speech translation datasets, which yield a modest improvement in translation evaluations.

### Training Procedure

Supervised speech instruction finetuning via knowledge distillation. For more information, see the training code in the Ultravox repo.

#### Training Hyperparameters

- Training regime: BF16 mixed precision training (see the sketch below)
- Hardware used: 8x H100 GPUs
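For reference, a minimal sketch of enabling BF16 mixed precision with the Hugging Face `TrainingArguments` API; every value other than `bf16=True` is an illustrative placeholder, not a hyperparameter taken from the actual run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ultravox-adapter",    # placeholder output path
    bf16=True,                        # BF16 mixed precision training
    per_device_train_batch_size=8,    # placeholder
    learning_rate=2e-3,               # placeholder
)
```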

#### Speeds, Sizes, Times

When invoked with audio content, the current version of Ultravox has a time-to-first-token (TTFT) of approximately 150 ms and generates roughly 50-100 tokens per second on an A100-40GB GPU, using the Llama 3.1 8B backbone.

Check out the audio tab on TheFastest.ai for daily benchmarks and a comparison with other existing models.

## Evaluation

BLEU scores (higher is better) for selected speech translation pairs:

| Language pair | Ultravox 0.4 8B | Ultravox 0.4.1 8B |
| --- | --- | --- |
| en_ar | 11.17 | 12.28 |
| en_de | 25.47 | 27.13 |
| es_en | 37.11 | 39.16 |
| ru_en | 38.96 | 39.65 |
| en_ca | 27.46 | 29.94 |
| zh_en | 10.08 | 14.55 |