---
license: afl-3.0
datasets:
- kz-transformers/multidomain-kazakh-dataset
language:
- kk
library_name: transformers
pipeline_tag: fill-mask
---

# RoBERTa-kaz-large

## Model Description
`roberta-kaz-large` is a RoBERTa-based language model for the Kazakh language, trained from scratch using the RobertaForMaskedLM architecture. It was trained on the kz-transformers/multidomain-kazakh-dataset from Hugging Face, which spans multiple domains to support broad generalization.

## Usage
The model can be used with the Hugging Face Transformers library:
```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM

tokenizer = RobertaTokenizerFast.from_pretrained('nur-dev/roberta-kaz-large')
model = RobertaForMaskedLM.from_pretrained('nur-dev/roberta-kaz-large')
```
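Once loaded, the model and tokenizer can also be used to score mask predictions manually. The snippet below is a minimal sketch, assuming PyTorch is installed; the example sentence is illustrative and not taken from this card:
```python
import torch

# Illustrative Kazakh sentence with a single <mask> token (not from the model card).
text = "Астана <mask> астанасы."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the <mask> position and take the highest-scoring token for it.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```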
Or directly with a pipeline for MLM:
```python
from transformers import pipeline

pipe = pipeline('fill-mask', model='nur-dev/roberta-kaz-large')

# With several <mask> tokens, the pipeline returns one list of candidates per mask.
predicted = pipe("Қазіргі <mask> әлемдік деңгейдегі <mask> университеттері сапалы білім, зияткерлік және мәдени <mask> беретін <mask> <mask> <mask> ғана емес, сонымен қатар мемлекет үшін <mask> қабілетті адами капиталды құратын <mask>, ғылым және өндірісті интеграциялаудың <mask> <mask> болып табылады.")

# Print the top-scoring prediction for each masked position.
for t in predicted:
    print(t[0]['score'], t[0]['token_str'])
```
## Training procedure
The model was trained on two NVIDIA A100 GPUs using over 5.3 million examples from the kz-transformers/multidomain-kazakh-dataset. Training ran for 10 epochs (208,100 optimization steps in total), using gradient accumulation to process large effective batches and learning-rate warmup to keep early training stable, with the goal of improving the model's ability to understand and generate Kazakh.
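For reference, a comparable setup can be expressed with the Hugging Face Trainer API. The sketch below is illustrative only: the batch size, gradient-accumulation steps, warmup ratio, learning rate, and the assumption that the dataset exposes a `text` column are placeholders, not the configuration actually used for this model.
```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained('nur-dev/roberta-kaz-large')
# For pretraining from scratch, the model would instead be initialized from a RobertaConfig.
model = RobertaForMaskedLM.from_pretrained('nur-dev/roberta-kaz-large')

# Tokenize the multidomain corpus (assumes the text column is named "text").
dataset = load_dataset("kz-transformers/multidomain-kazakh-dataset", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

# Dynamic masking for the MLM objective, as in the original RoBERTa recipe.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

# Placeholder hyperparameters; the card only states 10 epochs, 208,100 steps,
# gradient accumulation, and learning-rate warmup.
args = TrainingArguments(
    output_dir="roberta-kaz-large",
    num_train_epochs=10,
    per_device_train_batch_size=32,   # placeholder
    gradient_accumulation_steps=8,    # placeholder
    warmup_ratio=0.06,                # placeholder
    learning_rate=1e-4,               # placeholder
    save_steps=10_000,
    logging_steps=500,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```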

## Limitations and Bias
As with any language model, `roberta-kaz-large` may reflect biases present in its training data. Users should evaluate the model across diverse contexts before relying on it, especially in sensitive applications.

## Model Authors

**Name:** Kadyrbek Nurgali
- **Email:** [email protected]
- **LinkedIn:** [Kadyrbek Nurgali](https://www.linkedin.com/in/nurgali-kadyrbek-504260231/)