SambaLingo-Slovenian-Chat

SambaLingo-Slovenian-Chat is a human-aligned chat model trained in Slovenian and English. It is trained using direct preference optimization on top of the base model SambaLingo-Slovenian-Base. The base model adapts Llama-2-7b to Slovenian by training on 17 billion tokens from the Slovenian split of the CulturaX dataset. Try this model at SambaLingo-chat-space.

Model Description

Getting Started

Loading Model With Hugging Face

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Slovenian-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Slovenian-Chat", device_map="auto", torch_dtype="auto")
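
A minimal generation sketch using the model and tokenizer loaded above; temperature and top-p follow the Suggested Inference Parameters below, while max_new_tokens is an illustrative choice rather than a value from this card:

# Build the chat-formatted prompt and sample a reply.
messages = [{"role": "user", "content": "Kdaj je bil ustvarjen google? Zakaj je prevladoval nad drugimi iskalniki?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
# Strip the prompt tokens before decoding so only the assistant's reply is printed.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))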

Interacting With Model Pipeline

Please make sure to set use_fast=False when loading the tokenizer.

from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Slovenian-Chat", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace YOUR_QUESTION with your prompt
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]

Suggested Inference Parameters

  • Temperature: 0.8
  • Repetition penalty: 1.0
  • Top-p: 0.9
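
A minimal sketch of passing these values to the pipeline call from the section above (max_new_tokens is an illustrative addition, not one of the suggested parameters):

# Apply the suggested sampling parameters when calling the pipeline.
outputs = pipe(
    prompt,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,  # illustrative generation budget
)
print(outputs[0]["generated_text"])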

Prompting Guidelines

To prompt this model, please use the following chat template:

<|user|>\n{question}</s>\n<|assistant|>\n
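
If you load the tokenizer as shown above, this string can be produced with apply_chat_template rather than assembled by hand; a minimal sketch, assuming the tokenizer's built-in template matches the one documented here:

messages = [{"role": "user", "content": "{question}"}]  # "{question}" is a placeholder
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Expected result: "<|user|>\n{question}</s>\n<|assistant|>\n"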

Example Prompts and Generations

<|user|>
Kdaj je bil ustvarjen google? Zakaj je prevladoval nad drugimi iskalniki?</s>
<|assistant|>
Google je bil ustanovljen septembra 1998 v garaži Larryja Pagea in Sergeya Brina v Mountain Viewu v Kaliforniji.

Google je prevladoval nad drugimi iskalniki zaradi več dejavnikov:

1. Učinkovitost: Google je bil veliko hitrejši in učinkovitejši od drugih iskalnikov, saj je zagotavljal rezultate iskanja v delčku sekunde.

2. Uporabniku prijazen vmesnik: Googlov vmesnik je bil uporabniku prijazen in enostaven za krmarjenje, kar je uporabnikom olajšalo iskanje, kar iščejo.

3. Personalizacija: Google je uporabnikom omogočil prilagajanje rezultatov iskanja z uporabo njihovih iskalnih navad, zgodovine brskanja in drugih podatkov.

4. Kakovost rezultatov iskanja: Google je zagotavljal visokokakovostne rezultate iskanja, ki so bili točni, ustrezni in brez neželene vsebine.

5. Oglaševalski model: Google je uvedel uspešen oglaševalski model, ki je podjetjem omogočil, da ciljajo na uporabnike na podlagi njihovih iskalnih poizvedb, s čimer je povečal učinkovitost njihovih oglaševalskih kampanj.

6. Strateška partnerstva: Google je sklenil strateška partnerstva z drugimi podjetji, kot so Apple, Android in YouTube, kar mu je pomagalo razširiti doseg in povečati bazo uporabnikov.

7. Inovacije: Google je bil vedno v ospredju inovacij, saj je predstavil nove funkcije in tehnologije, kot so Google Maps, Google Now in Google Assistant, ki so izboljšale uporabniško izkušnjo in povečale njegovo priljubljenost.

Training Details

The alignment phase follows the Zephyr-7B recipe and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).

The SFT phase was done on the ultrachat_200k dataset mixed with its Google-translated version. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linear decay learning rate of 2e-5 with 10% warmup.
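
A minimal sketch of a linear-decay schedule with 10% warmup in transformers; the optimizer choice and the step count are illustrative assumptions, not details from this card:

import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 1000  # illustrative; in practice, the number of optimizer steps in one epoch
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # 10% warmup
    num_training_steps=total_steps,           # linear decay to zero afterwards
)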

The DPO phase was done on the ultrafeedback dataset and the cai-conversation-harmless dataset, with 10% of the data Google translated. It was trained with a global batch size of 32 for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup, and β = 0.1 as the regularization factor for DPO.
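
For reference, the objective optimized in the DPO stage, in standard DPO notation (π_θ is the policy, π_ref the frozen reference model, y_w and y_l the preferred and rejected responses), with β = 0.1 as stated above:

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]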

Tokenizer Details

We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
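
A minimal sketch of this style of vocabulary extension with transformers; the token list below is a placeholder (the actual tokens were learned from Slovenian data, and embedding-initialization details are not described in this card):

# Add new language-specific tokens and grow the embedding matrix to match.
new_tokens = ["<new_token_1>", "<new_token_2>"]  # placeholder; not the real Slovenian token list
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")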

Evaluation

For evaluation results see our paper: SambaLingo: Teaching Large Language Models New Languages

Uses

Direct Use

Use of this model is governed by Meta's Llama 2 Community License Agreement. Please review and accept the license before downloading the model weights.

Out-of-Scope Use

SambaLingo should NOT be used for:

  • Mission-critical applications
  • Applications that involve the safety of others
  • Making highly important decisions

Bias, Risks, and Limitations

Like all LLMs, SambaLingo has certain limitations:

  • Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
  • Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
  • Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
  • Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
  • Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

Acknowledgments

We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:

  • Meta for open sourcing Llama 2 and the FLORES-200 dataset
  • Nguyen et al. for open sourcing the CulturaX dataset
  • CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
  • EleutherAI for their open-source evaluation framework
  • The Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo

Cite SambaLingo

@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages}, 
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}