---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- he
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
---
# Model Card for DictaLM-2.0-AWQ
The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, trained to specialize in Hebrew text.
For full details of this model, please read our release blog post.
This model contains the AWQ 4-bit quantized version of the base model DictaLM-2.0.
You can view and access the full collection of base/instruct, unquantized/quantized versions of DictaLM-2.0 here.
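For a quick check that this checkpoint is indeed the quantized variant, here is a minimal sketch that reads the quantization settings from the model config; the exact fields shown depend on how the checkpoint was exported, so treat the output as informational:

```python
from transformers import AutoConfig

# Inspect the checkpoint's quantization settings; for this model it is expected
# to report a 4-bit AWQ configuration stored in config.json
config = AutoConfig.from_pretrained('dicta-il/dictalm2.0-AWQ')
print(getattr(config, 'quantization_config', 'no quantization_config found'))
```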
## Example Code
Running this code requires less than 5GB of GPU VRAM.
```python
from transformers import pipeline

# This loads the 4-bit AWQ-quantized model onto the GPU
model = pipeline('text-generation', 'dicta-il/dictalm2.0-AWQ', device_map='cuda')

# Few-shot prompt: Hebrew past-tense verbs ("עבר") paired with their future-tense
# forms ("עתיד"); the model should complete the future form of the last verb
prompt = """
עבר: הלכתי
עתיד: אלך

עבר: שמרתי
עתיד: אשמור

עבר: שמעתי
עתיד: אשמע

עבר: הבנתי
עתיד:
"""

print(model(prompt.strip(), do_sample=False, max_new_tokens=4, stop_sequence='\n'))
# [{'generated_text': 'עבר: הלכתי\nעתיד: אלך\n\nעבר: שמרתי\nעתיד: אשמור\n\nעבר: שמעתי\nעתיד: אשמע\n\nעבר: הבנתי\nעתיד: אבין\n\n'}]
```
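As an alternative to the pipeline API, the same checkpoint can also be loaded explicitly. A minimal sketch, assuming a transformers version with AWQ support and the `autoawq` package installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the autoawq package is installed so transformers can load the AWQ weights
tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictalm2.0-AWQ')
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm2.0-AWQ', device_map='cuda')

# Same few-shot conjugation task as above, shortened to a single in-context example
prompt = 'עבר: הלכתי\nעתיד: אלך\n\nעבר: הבנתי\nעתיד:'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```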
## Model Architecture
DictaLM-2.0 is based on the Mistral-7B-v0.1 model with the following changes:
- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, improving the compression rate from 5.78 tokens/word to 2.76 tokens/word (a rough measurement sketch follows this list).
- Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
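As a rough illustration of the tokens-per-word figure above, the following sketch tokenizes a Hebrew sentence with the extended tokenizer and computes the ratio; the sentence is illustrative (not from the release), and the exact ratio will vary with the text:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictalm2.0-AWQ')

# Illustrative Hebrew sentence; whitespace split is only a crude word count
text = 'הלכתי אתמול לשוק וקניתי פירות וירקות'
tokens = tokenizer.tokenize(text)
words = text.split()
print(f'{len(tokens)} tokens / {len(words)} words = {len(tokens) / len(words):.2f} tokens per word')
```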
## Notice
DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.
## Citation
If you use this model, please cite:
[Will be added soon]