
Hebrew-Gemma-11B-Instruct

Base Models:

  • Hebrew-Gemma-11B

Instruct Models:

  • Hebrew-Gemma-11B-Instruct

The Hebrew-Gemma-11B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the Hebrew-Gemma-11B generative text model, trained on a variety of conversation datasets.

The base model is a continued pretraining of gemma-7b, extended to a larger scale and trained on an additional 3B tokens of English and Hebrew text data.

Instruction format

This format must be strictly followed; otherwise, the model will generate sub-optimal outputs.

<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
Here is a simple hello world program<end_of_turn><eos>
  • The conversation starts with <bos>.
  • Each turn is preceded by a <start_of_turn> delimiter and then the role of the entity (user or model).
  • Turns finish with the <end_of_turn> token.
  • The conversation finishes with the <eos> token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.
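For example, a single-turn prompt can be assembled as a plain string. The following is a minimal sketch (the user message is an illustrative placeholder); note that if you later tokenize the string with special tokens enabled, the tokenizer will add <bos> itself, in which case it should be omitted here:

user_message = "Write a hello world program"

# Mirror the instruction format described above: <bos>, a user turn,
# then an opening model turn so generation continues as the model's reply.
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)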

A simple example using the tokenizer's chat template:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yam-peleg/Hebrew-Gemma-11B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

chat = [
    { "role": "user", "content": "כתוב קוד פשוט בפייתון שמדפיס למסך את התאריך של היום" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
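From here the prompt can be passed to the model for generation. The following is a minimal sketch (max_new_tokens and greedy decoding are illustrative choices, not values from this card); add_special_tokens=False avoids adding a second <bos>, since apply_chat_template already includes one:

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))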

Terms of Use

As an extension of Gemma-7B, this model is subject to the original license and terms of use by Google.

Benchmark Results

  • Coming Soon!

Notice

Hebrew-Gemma-11B is a pretrained base model and therefore does not have any moderation mechanisms.

Authors

  • Trained by Yam Peleg.
  • In collaboration with Jonathan Rouach and Arjeo, inc.