
Llama-3.1-8B Instruct African Ultrachat

  • Developed by: vutuka
  • License: apache-2.0
  • Finetuned from model: meta-llama/Meta-Llama-3.1-8B-Instruct
  • Max Context Length: 8192
  • Max Steps: 800
  • Training Time: 02h 22min 08s
  • Setup:
    • 1 x RTX A6000
    • 16 vCPU
    • 58 GB RAM
    • 150 GB Storage
  • Fine-Tuned Languages:
    • Amharic
    • Hausa
    • Igbo
    • Kinyarwanda
    • Southern Sotho
    • Shona
    • Somali
    • Swahili
    • Xhosa
    • Yoruba
    • Zulu
    • English
    • French

Introducing Llama 3.1-8B Instruct Fine-Tuned on the Masakhane African UltraChat Dataset

We are excited to announce the fine-tuned version of the Llama 3.1-8B Instruct model, which has been trained on the Masakhane African UltraChat dataset. This fine-tuning leverages the robust architecture of the Llama 3.1 model, designed for high-performance multilingual tasks and long context processing, to enhance its capabilities in understanding and generating responses in African languages.

Model Overview

Llama 3.1-8B Instruct is part of the Llama 3 family, developed by Meta. It features an optimized transformer architecture and supports multiple languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. This model variant is particularly suited for instruction-tuned tasks, making it ideal for dialogue and assistant-like applications.

Training and Fine-Tuning

The model was fine-tuned on the Masakhane African UltraChat dataset, a diverse and extensive collection of conversational data aimed at promoting and enhancing NLP capabilities for African languages. Fine-tuning used supervised fine-tuning (SFT) with TRL's SFTTrainer on top of Unsloth; the underlying Llama 3.1 Instruct base was itself aligned by Meta with SFT and reinforcement learning from human feedback (RLHF) for helpfulness and safety. The training configuration was:

from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = shuffled_dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False, # Packing can make training ~5x faster for short sequences.
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,            # Takes precedence over warmup_ratio when > 0
        max_steps = 800,
        do_eval = True,
        learning_rate = 3e-4,
        log_level = "debug",
        # fp16 = not is_bfloat16_supported(), # Fallback for pre-Ampere GPUs
        bf16 = True,
        logging_steps = 10,
        optim = "adamw_8bit",        # 8-bit AdamW (bitsandbytes) to save VRAM
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "wandb",
        warmup_ratio = 0.3,
    ),
)

Performance and Capabilities

The fine-tuned Llama 3.1-8B model demonstrates improved performance in understanding and generating text in African languages, providing accurate and contextually appropriate responses. It is designed to handle various conversational tasks, from casual dialogue to more complex inquiries, making it a valuable tool for applications targeting African language users.

Key Features

  • Multilingual Support: Enhanced capabilities in multiple languages, including African languages.
  • Long Context Handling: The base model supports contexts up to 128K tokens; this fine-tune was trained with an 8,192-token sequence length (see the loading sketch below).
  • Instruction-Tuned: Optimized for generating accurate and helpful responses based on user instructions.
  • High Performance: Utilizes advanced techniques like Grouped-Query Attention (GQA) for improved scalability and efficiency.
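
For reference, here is a minimal loading sketch using Unsloth's FastLanguageModel (the same API used in the inference code further below); the dtype and 4-bit settings are assumptions chosen to fit a single consumer GPU:

from unsloth import FastLanguageModel

# Minimal loading sketch; assumes a CUDA GPU and the unsloth package installed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "vutuka/Llama-3.1-8B-Instruct-African-Ultrachat",
    max_seq_length = 8192,  # Matches the fine-tuning context length
    dtype = None,           # Auto-detect (bf16 on Ampere and newer)
    load_in_4bit = True,    # 4-bit quantization to reduce VRAM usage
)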

Tokenizer & Chat Format

from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping={
        "role": "role",
        "content": "content",
        "user": "",
        "assistant": "",
    }
)

def formatting_prompts_func(examples):
    convos = examples["messages"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos]
    return { "text" : texts, }
pass
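
Applied to a chat dataset, this produces the "text" column consumed by SFTTrainer above. A hypothetical usage sketch follows; the dataset id is an assumption, so substitute the actual Masakhane African UltraChat repository:

from datasets import load_dataset

# Hypothetical dataset id; replace with the actual African UltraChat repo.
dataset = load_dataset("masakhane/african-ultrachat", split = "train")
shuffled_dataset = dataset.shuffle(seed = 3407).map(formatting_prompts_func, batched = True)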

Inference with Unsloth

def chat_llama3_african_ultrachat(message: str, context: str):
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

    messages = [
        {"role": "system", "content": context},
        {"role": "user", "content": message},
    ]
    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize = True,
        add_generation_prompt = True, # Must add for generation
        return_tensors = "pt",
    ).to("cuda")

    # To stream tokens to stdout while generating, pass a streamer instead:
    # from transformers import TextStreamer
    # _ = model.generate(input_ids = inputs, streamer = TextStreamer(tokenizer), max_new_tokens = 1024, use_cache = True)
    output = model.generate(input_ids = inputs, max_new_tokens = 1024, use_cache = True)

    # Decode only the newly generated tokens so the prompt is not echoed back;
    # this avoids brittle string searches for "user"/"assistant" markers.
    response = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens = True)
    return response.strip()
chat_llama3_african_ultrachat(
    message="Habari !",
    context="Wewe ni wakala wa mtandaoni anayesaidia ambaye hujibu maswali kwa upole na heshima."
)
The raw, token-level exchange rendered by the Llama-3 chat template looks like this:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

Wewe ni wakala wa mtandaoni anayesaidia ambaye hujibu maswali kwa upole na heshima.<|eot_id|><|start_header_id|>user<|end_header_id|>

Habari!<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Habari yako? Je, unatafuta ushauri au maelezo kuhusu jambo maalum? Ni furaha yangu kusaidia.<|eot_id|>

Inference with Unsloth Chat (new)

  • Run the cell below on a T4 GPU to try the model.
#@title ↙️ Press ▶ to start 🦥 Unsloth Studio Chat for Llama-3.1-8B Instruct African Ultrachat

# Unsloth Studio
# Copyright (C) 2024-present the Unsloth AI team. All rights reserved.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.
!git clone https://github.com/unslothai/studio > /dev/null 2>&1
with open("studio/unsloth_studio/chat.py", "r") as chat_module:
    # Swap the studio's default model for this fine-tune before running it
    code = chat_module.read().replace(
        'MODEL_NAME = "unsloth/gemma-2-2b-it-bnb-4bit"',
        'MODEL_NAME = "vutuka/Llama-3.1-8B-Instruct-African-Ultrachat"',
    )
exec(code)
  • Alternatively, edit chat.py directly so it points at this model:
# Unsloth Studio
# Copyright (C) 2024-present the Unsloth AI team. All rights reserved.

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

from IPython.display import clear_output
import subprocess
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"
MODEL_NAME = "vutuka/Llama-3.1-8B-Instruct-African-Ultrachat"

print("Installing packages for 🦥 Unsloth Studio ... Please wait 1 minute ...")

install_first = [
    "pip", "install",
    "huggingface_hub[hf_transfer]",
]
install_first = subprocess.Popen(install_first)
install_first.wait()

install_second = [
    "pip", "install",
    "gradio",
    "unsloth[colab-new]@git+https://github.com/unslothai/unsloth.git",
]
install_second = subprocess.Popen(install_second)

from huggingface_hub import snapshot_download
import warnings
warnings.filterwarnings(action = "ignore", category = UserWarning,    module = "torch")
warnings.filterwarnings(action = "ignore", category = UserWarning,    module = "huggingface_hub")
warnings.filterwarnings(action = "ignore", category = FutureWarning,  module = "huggingface_hub")
warnings.filterwarnings(action = "ignore", category = RuntimeWarning, module = "subprocess")
warnings.filterwarnings(action = "ignore", category = UserWarning,    module = "transformers")
warnings.filterwarnings(action = "ignore", category = FutureWarning,  module = "accelerate")
warnings.filterwarnings(action = "ignore", category = RuntimeWarning, module = "multiprocessing")
warnings.filterwarnings(action = "ignore", category = RuntimeWarning, module = "multiprocess")

from huggingface_hub.utils import disable_progress_bars
disable_progress_bars()
snapshot_download(repo_id = MODEL_NAME, repo_type = "model")

install_second.wait()

install_dependencies = [
    "pip", "install", "--no-deps",
    "xformers<0.0.27", "trl<0.9.0", "peft", "accelerate", "bitsandbytes",
]
install_dependencies = subprocess.Popen(install_dependencies)
install_dependencies.wait()
clear_output()


from contextlib import redirect_stdout
import io
import logging
logging.getLogger("transformers.utils.hub").setLevel(logging.CRITICAL+1)

print("Loading model ... Please wait 1 more minute! ...")

with redirect_stdout(io.StringIO()):
    from unsloth import FastLanguageModel
    import torch
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = MODEL_NAME,
        max_seq_length = None,
        dtype = None,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)
pass
clear_output()

import gradio
gradio.strings.en["SHARE_LINK_DISPLAY"] = ""
from transformers import TextIteratorStreamer, StoppingCriteria, StoppingCriteriaList
from threading import Thread

class StopOnTokens(StoppingCriteria):
    def __init__(self, stop_token_ids):
        self.stop_token_ids = tuple(set(stop_token_ids))
    pass

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return input_ids[0][-1].item() in self.stop_token_ids
    pass
pass

def async_process_chatbot(message, history):
    eos_token = tokenizer.eos_token
    # StopOnTokens compares token ids, so pass the EOS token id, not the string
    stop_on_tokens = StopOnTokens([tokenizer.eos_token_id,])
    text_streamer  = TextIteratorStreamer(tokenizer, skip_prompt = True)

    # From https://www.gradio.app/guides/creating-a-chatbot-fast
    history_transformer_format = history + [[message, ""]]
    messages = []
    for item in history_transformer_format:
        messages.append({"role": "user",      "content": item[0]})
        messages.append({"role": "assistant", "content": item[1]})
    pass
    # Remove last assistant and instead use add_generation_prompt
    messages.pop(-1)

    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt = True,
        return_tensors = "pt",
    ).to("cuda", non_blocking = True)

    # Add stopping criteria - will not output EOS / EOT
    generation_kwargs = dict(
        input_ids = input_ids,
        streamer = text_streamer,
        max_new_tokens = 1024,
        stopping_criteria = StoppingCriteriaList([stop_on_tokens,]),
        temperature = 0.7,
        do_sample = True,
    )
    thread = Thread(target = model.generate, kwargs = generation_kwargs)
    thread.start()

    # Yield will save the output to history!
    generated_text = ""
    for new_text in text_streamer:
        if new_text.endswith(eos_token):
            new_text = new_text[:len(new_text) - len(eos_token)]
        generated_text += new_text
        yield generated_text
    pass
pass

studio_theme = gradio.themes.Soft(
    primary_hue = "teal",
)

scene = gradio.ChatInterface(
    async_process_chatbot,
    chatbot = gradio.Chatbot(
        height = 325,
        label = "Unsloth Studio Chat",
    ),
    textbox = gradio.Textbox(
        placeholder = "Message Unsloth Chat",
        container = False,
    ),
    title = None,
    theme = studio_theme,
    examples = None,
    cache_examples = False,
    retry_btn = None,
    undo_btn = "Remove Previous Message",
    clear_btn = "Restart Entire Chat",
)

scene.launch(quiet = True)

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

