Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities
The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, specialized for Hebrew text.
For full details of this model, please read our release blog post or the technical report.
This is the base model, designed for text completion (not for chat!), in the GGUF format for use with llama.cpp.
There are two versions available: float16 precision (*.F16.gguf) and 4-bit quantized precision (*.Q4_K_M.gguf).
You can view and access the full collection of base/instruct, unquantized/quantized versions of DictaLM-2.0 here.
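As a quick-start illustration, here is a minimal sketch of loading the 4-bit quantized GGUF file with llama-cpp-python. The file name and parameter values are assumptions based on the naming pattern above, not part of this release; adjust them to the files you actually downloaded.

```python
# Minimal sketch: run the GGUF base model with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="dictalm2.0.Q4_K_M.gguf",  # assumed file name; use the *.F16.gguf file for full precision
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if available; set to 0 for CPU-only
)

# This is a base completion model (not a chat model), so prompt it with plain text to continue.
out = llm("המשפט הראשון של הספר הוא", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```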
Model Architecture
DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:
- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, improving the compression rate from 5.78 tokens/word to 2.76 tokens/word (see the sketch after this list).
- Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
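As a rough illustration of the compression-rate figures above, the sketch below counts tokens per word for a short Hebrew sentence using the extended tokenizer. The Hugging Face repo id and the example sentence are assumptions for illustration only; use whichever repository you downloaded the model from.

```python
# Rough sketch: measure tokens per word with the extended Hebrew tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0")  # assumed repo id

sentence = "מודלי שפה גדולים משנים את עולם עיבוד השפה הטבעית"
words = sentence.split()
tokens = tokenizer.tokenize(sentence)

# With the injected Hebrew tokens, this ratio should sit much closer to the
# ~2.76 tokens/word reported above than to Mistral-7B-v0.1's ~5.78 tokens/word.
print(f"{len(tokens)} tokens / {len(words)} words = {len(tokens) / len(words):.2f} tokens per word")
```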
Notice
DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.
Citation
If you use this model, please cite:
@misc{shmidman2024adaptingllmshebrewunveiling,
title={Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities},
author={Shaltiel Shmidman and Avi Shmidman and Amir DN Cohen and Moshe Koppel},
year={2024},
eprint={2407.07080},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.07080},
}