---
license: cc-by-4.0
language:
- he
inference: false
---
# **DictaLM**: A Large Generative Language Model for Modern Hebrew
A large generative pretrained transformer (GPT) language model for Hebrew, released [link to be added].
This model was fine-tuned to follow instructions:
- General questions:
```
מה זה בית ספר?
```
*("What is a school?")*
```
קיבלתי חתך קל באצבע. מהי הדרך הנכונה לטפל בו?
```
*("I got a small cut on my finger. What is the right way to treat it?")*
- Simple tasks:
```
תציע כמה רעיונות לפעילות עם ילדים בני 5:
```
*("Suggest a few ideas for an activity with 5-year-old children:")*
- Information retrieval from a paragraph context:
```
המסיק הידני הוא הדרך המסורתית והעתיקה לקטוף זיתים. שיטה זו דורשת כוח אדם רב באופן יחסי ועדיין מקובלת בישראל ובמקומות רבים בעולם. שיטות מסיק ידני מאפשרות חיסכון, עבודות מדויקות בין אילן לאילן, בלי בעיות בשיטות הממוכנות הללו. בזיתים המיועדים למאכל (לכבישה, או אלה זיתים לשמן) מתאים יותר מסיק ידני מכיוון שהפרי פחות נפגע בהליך המסיק בהשוואה למיכון (פגיעות בקליפת הפרי בזיתים לשמן פחות משמעותיות). כמו כן מועדף מסיק ידני באזורים בהם הטופוגרפיה ומקומות בהן צפיפות העצים לא מאפשרים גישה נוחה לכלים מכניים. השיטה הידנית מאפשרת גם למסוק עצים שונים במועדים שונים, בהתאם לקצב הבשלת הפרי הטבעי בכל עץ.
על בסיס הפסקה הזאת, מה הם היתרונות של מסיק ידני מבחינת קצב הבשלת הפרי?
```
*(A Hebrew paragraph about traditional manual olive harvesting, followed by: "Based on this paragraph, what are the advantages of manual harvesting with respect to the fruit's ripening pace?")*
## Sample usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictalm-7b-instruct')
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm-7b-instruct', trust_remote_code=True).cuda()
model.eval()

with torch.inference_mode():
    prompt = 'תציע כמה רעיונות לפעילות עם ילדים בני 5:\n'
    # Sampling-based decoding; see below for other generation strategies.
    kwargs = dict(
        inputs=tokenizer(prompt, return_tensors='pt').input_ids.to(model.device),
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=0.75,
        max_length=100,
        min_new_tokens=5
    )
    print(tokenizer.batch_decode(model.generate(**kwargs), skip_special_tokens=True))
```
There are many different parameters you can pass in `kwargs` for different results (greedy decoding, beam search, different sampling configurations, longer/shorter responses, etc.).
You can view the full list of parameters you can pass to the `generate` function [here](https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/text_generation#transformers.GenerationMixin.generate).
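As an illustration, two common non-sampling configurations might look like the sketch below. This is only an example: the parameter values are arbitrary, and `model`, `tokenizer`, and `prompt` are the objects defined in the snippet above.
```python
with torch.inference_mode():
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)

    # Greedy decoding: deterministic, always takes the highest-probability token.
    greedy_kwargs = dict(
        inputs=input_ids,
        do_sample=False,
        max_new_tokens=100,
    )

    # Beam search: keeps several candidate continuations and returns the best-scoring one.
    beam_kwargs = dict(
        inputs=input_ids,
        do_sample=False,
        num_beams=5,
        early_stopping=True,
        max_new_tokens=100,
    )

    for kwargs in (greedy_kwargs, beam_kwargs):
        print(tokenizer.batch_decode(model.generate(**kwargs), skip_special_tokens=True))
```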
## Citation
If you use DictaLM in your research, please cite ```ADD CITATION HERE```
**BibTeX:**
```ADD BIBTEXT HERE```
## License
Shield: [![CC BY 4.0][cc-by-shield]][cc-by]
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg |