
This repo shows how to convert a fairseq NLLB-MoE model to transformers and run a forward pass.

As the fairseq repository is not really optimised for out-of-the-box inference, make sure you have a very large amount of CPU/GPU RAM. Around 600 GB is required to run inference with the fairseq model: you first load the checkpoints (~300 GB), then build the model (~300 GB again), and only then can you load the checkpoint weights into the model.
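Schematically, the loading sequence looks like the following (a minimal Python sketch; build_nllb_moe_model, config and the checkpoint path are placeholders, not the actual fairseq API):

import torch

# Step 1: load the sharded checkpoints into CPU RAM (~300 GB).
state_dict = torch.load("model/checkpoint.pt", map_location="cpu")

# Step 2: build the randomly initialised model, allocating another ~300 GB.
model = build_nllb_moe_model(config)  # placeholder for fairseq's model construction

# Step 3: only now can the weights be copied into the model.
model.load_state_dict(state_dict)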

0. Download the original checkpoints:

The checkpoints in this repository were obtained using the following commands (based on the instructions given in the fairseq repository):

wget --trust-server-names path_to_nllb
tar -xf model.tar.gz

The NLLB checkpoints should now be available locally.

1. Install PyTorch

Use the following command:

pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
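To check that the CUDA build was picked up, you can run this optional sanity check in Python:

import torch

print(torch.__version__)          # should print 1.10.1+cu113
print(torch.cuda.is_available())  # should print True on a CUDA machine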

2. Install fairseq

git clone https://github.com/facebookresearch/fairseq.git
cd fairseq
git checkout prefetch_fsdp_params_simple
pip3 install -e .
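You can verify the editable install the same way (optional):

import fairseq

print(fairseq.__version__)  # confirms the editable install is importable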

3. Clone this repo (click "How to clone" at the top right of the repo page)

4. Convert the checkpoints:

Convert the checkpoints on the fly using the conversion script; transformers is required for this step:

cd <path/to/cloned/repo>
python3 convert_nllb_moe_sharded_original_checkpoint_to_pytorch.py --pytorch_dump_folder_path <dump_folder> --nllb_moe_checkpoint_path <nllb_checkpoint_path>
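Once the script finishes, <dump_folder> should contain a transformers-format checkpoint (a config.json plus sharded weight files). A quick way to inspect it (the path is the one you passed above):

import os

# List the converted files; expect config.json plus sharded weight files
# and their index (exact file names depend on the transformers version).
print(sorted(os.listdir("<dump_folder>")))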

5. Run the inference script:

cd <path/to/cloned/repo>
bash run.sh
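run.sh essentially loads the converted checkpoint and runs a forward pass. A minimal Python sketch of the equivalent steps (the tokenizer source repo and example sentence are assumptions, and <dump_folder> is the folder produced by the conversion step):

from transformers import AutoTokenizer, NllbMoeForConditionalGeneration

# The NLLB tokenizer is shared across NLLB checkpoints; this source repo is an assumption.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = NllbMoeForConditionalGeneration.from_pretrained("<dump_folder>")

inputs = tokenizer("Hello, world!", return_tensors="pt")
# Forward pass; reusing the input ids as labels gives the decoder something to consume.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.logits.shape)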