
This repository contains the safetensors weights for the MADLAD-400 8B-parameter language model.

The HF transformers code to run inference is not ready yet. The original implementation is in JAX/Flaxformer.
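Until then, the raw weights can be downloaded and inspected directly with the `safetensors` library. A minimal sketch, assuming only that the checkpoint is stored as one or more `*.safetensors` files in the repo (listing the files first covers both the single-file and sharded layouts):

```python
from huggingface_hub import hf_hub_download, list_repo_files
from safetensors import safe_open

repo = "jbochi/madlad400-8b-lm"
shards = [f for f in list_repo_files(repo) if f.endswith(".safetensors")]

for shard in shards:
    path = hf_hub_download(repo, shard)  # downloads (or reuses) the shard locally
    with safe_open(path, framework="pt") as f:
        for name in f.keys():
            # get_slice reads shape metadata without materializing the tensor
            print(name, f.get_slice(name).get_shape())
```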

The model architecture is the same as PaLM 8B.

It is a decoder-only T5 with 32 layers, 16 query heads, 1 KV head, and an embedding size of 4096.
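For quick reference, here are those stated hyperparameters as a config sketch (the class name is hypothetical, and anything not stated on this card, such as vocabulary size or FFN width, is omitted rather than guessed):

```python
from dataclasses import dataclass

@dataclass
class Madlad8BLMConfig:
    # Values as stated on this card.
    num_layers: int = 32
    num_query_heads: int = 16
    num_kv_heads: int = 1   # multi-query attention: one shared KV head
    d_model: int = 4096     # embedding size
```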

These are the main differences relative to the original T5 architecture (a minimal sketch combining them follows the list):

  • SwiGLU Activation
  • Parallel Layers
  • Multi-Query Attention
  • RoPE Embeddings
  • Shared Input-Output Embeddings
  • No biases
  • Bidirectional attention
  • Layer Norm with center_scale_at_zero and final layer with use_scale=False
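To make these concrete, here is a minimal, untrained PyTorch sketch of one decoder block combining parallel layers, multi-query attention, RoPE, SwiGLU, and bias-free projections. It illustrates the techniques named above and is not the reference JAX/Flaxformer implementation: the `d_ff` width is an assumption, and the `center_scale_at_zero` layer-norm variant is replaced by a plain `LayerNorm`.

```python
import torch
import torch.nn.functional as F
from torch import nn

def rope(x: torch.Tensor) -> torch.Tensor:
    # Rotary position embeddings over (batch, heads, seq, head_dim).
    *_, s, d = x.shape
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    angles = torch.arange(s, dtype=torch.float32)[:, None] * inv_freq  # (seq, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

class ParallelBlock(nn.Module):
    # PaLM-style "parallel layers": attention and MLP both read the same
    # normed input, and their outputs are summed with the residual.
    def __init__(self, d_model=4096, n_heads=16, d_ff=16384):  # d_ff is assumed
        super().__init__()
        self.n_heads, self.head_dim = n_heads, d_model // n_heads
        # Plain LayerNorm (torch>=2.1 for bias=False); not center_scale_at_zero.
        self.norm = nn.LayerNorm(d_model, bias=False)
        self.q = nn.Linear(d_model, d_model, bias=False)             # 16 query heads
        self.kv = nn.Linear(d_model, 2 * self.head_dim, bias=False)  # 1 shared KV head
        self.attn_out = nn.Linear(d_model, d_model, bias=False)
        self.gate = nn.Linear(d_model, d_ff, bias=False)  # SwiGLU gate projection
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x):
        b, s, d = x.shape
        h = self.norm(x)
        q = rope(self.q(h).view(b, s, self.n_heads, self.head_dim).transpose(1, 2))
        k, v = self.kv(h).split(self.head_dim, dim=-1)
        # Multi-query attention: broadcast the single KV head to all query heads.
        k = rope(k.view(b, s, 1, self.head_dim).transpose(1, 2)).expand_as(q)
        v = v.view(b, s, 1, self.head_dim).transpose(1, 2).expand_as(q)
        # No causal mask, per the bidirectional-attention note above; a standard
        # causal LM would pass is_causal=True here instead.
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = self.attn_out(attn.transpose(1, 2).reshape(b, s, d))
        mlp = self.down(F.silu(self.gate(h)) * self.up(h))  # SwiGLU
        return x + attn + mlp

if __name__ == "__main__":
    blk = ParallelBlock(d_model=256, n_heads=16, d_ff=1024)  # tiny dims for a smoke test
    print(blk(torch.randn(2, 8, 256)).shape)                 # torch.Size([2, 8, 256])
```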

If you are looking for the other available versions of the MADLAD-400 models, see the collection that includes jbochi/madlad400-8b-lm on the Hugging Face Hub.

Article: MADLAD-400: A Multilingual And Document-Level Large Audited Dataset

Abstract:

We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.

Safetensors model size: 8.63B params (F32 tensors).
