---
license: mit
language:
- ru
tags:
- text
- datasets
size_categories:
- 10K<n<100K
configs:
  - config_name: default
    data_files: open_questions_data.json
features:
  - name: instruction
    dtype: string
  - name: inputs
    struct:
      - name: task
        dtype: string
      - name: text
        dtype: string
      - name: options
        struct:
          - name: option_1
            dtype: string
          - name: option_2
            dtype: string
          - name: option_3
            dtype: string
          - name: option_4
            dtype: string
          - name: option_5
            dtype: string
          - name: option_6
            dtype: string
          - name: option_7
            dtype: string
          - name: option_8
            dtype: string
          - name: option_9
            dtype: string
  - name: outputs
    dtype: string
  - name: meta
    struct:
      - name: subject
        dtype: string
      - name: type
        dtype: string
      - name: source
        dtype: string
      - name: comment
        dtype: string
      - name: provac_score
        dtype: int32
---
# SLAVA: A benchmark of the **S**ocio-political **L**andscape **A**nd **V**alue **A**nalysis

<div align="center">
    <a href="https://huggingface.co/spaces/RANEPA-ai/SLAVA">
        <img src="https://raw.githubusercontent.com/Y1OV/project_lab/refs/heads/main/data/logo.jpg" alt="Foo" style="width: 50%; height: auto;">
    </a>
</div>

## Dataset Description

SLAVA is a benchmark, in development since 2024, of about 14,000 questions focused on the Russian domain, covering areas such as history, political science, sociology, political geography, and the basics of national security. It evaluates the ability of large language models (LLMs) to handle sensitive topics that are important to the Russian information space.

#### Main tasks:
- Testing the factual knowledge of LLMs in Russian domains.
- Assessing the sensitivity (provocativeness) of the questions.
- Creating a comprehensive evaluation system based on answer accuracy.

#### Structure:
The questions are divided into the following types:
- Multiple choice with one or several correct answers.
- Sequences and matching.
- Open-ended responses.

#### Question provocativeness:
- **1 point**: Low sensitivity — generally accepted facts.
- **2 points**: Medium sensitivity — controversial topics.
- **3 points**: High sensitivity — political and cultural issues that provoke conflicts.

#### Results:
24 LLMs supporting the Russian language were tested. Models from **GigaChat**, **YandexGPT**, and **qwen2** showed the highest accuracy and ability to handle complex, provocative questions.

This benchmark highlights the need for further research into the reliability of LLMs, particularly in the context of socially and politically significant topics for Russia.


## Dataset Composition

### Data Instances

```json
{
    "instruction": "Прочитайте приведённые далее задачу, текст и выполните по ним задание.\n Задача: {task}\n Текст: {text}\n Кратко сформулируйте и запишите ответ на задание (слово, словосочетание, одно или несколько предложений).",
    "inputs": {
        "task": "В приведенном ряду найдите понятие, которое является обобщающим для всех остальных представленных понятий. Запишите в ответ это слово (словосочетание).",
        "text": "Выборы, референдум, форма демократии, сход граждан, прямая демократия.",
        "options": {
            "option_1": null,
            "option_2": null,
            "option_3": null,
            "option_4": null,
            "option_5": null,
            "option_6": null,
            "option_7": null,
            "option_8": null,
            "option_9": null
        }
    },
    "outputs": "форма демократии",
    "meta": {
        "subject": "Обществознание",
        "type": "открытый ответ",
        "source": "https://socege.sdamgia.ru/problem?id=11291",
        "comment": "Д2",
        "provac_score": 2
    }
}
```


### Data Fields 


- instruction: A string containing the instructions that explain what needs to be done in the task.
- inputs:
    - task: A string containing the formulation of the task.
    - text: A string with the main text or phrase for which a response needs to be selected.
    - options: An object containing the possible answer choices:
        - option_1 - option_9: Answer choices represented as strings. If a task has fewer options, the unused fields contain null.
- outputs: A string containing the correct answer; for multiple-choice tasks this is the answer option number.
- meta: Additional information about the task:
    - subject: A string specifying the subject of the task (e.g., History).
    - type: A string describing the type of task (e.g., multiple choice).
    - source: A string containing the source of the task.
    - comment: A field for comments (null if no comment is present).
    - provac_score: An integer from 1 to 3 indicating the provocativeness (sensitivity) of the question.

   
## How to Download

```python
from huggingface_hub import hf_hub_download

# Downloads the file (or reuses the local cache) and returns its local path.
dataset_path = hf_hub_download(
    repo_id="RANEPA-ai/SLAVA-OpenData-2800-v1",
    filename="open_questions_data.json",
    repo_type="dataset",
)
```
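The downloaded file is plain JSON, so it can be read with the standard library. A minimal sketch: `sample.json` below is a stand-in for the path that `hf_hub_download` returns, written locally so the example is self-contained.

```python
import json

# Stand-in for the downloaded file: one record in the dataset's shape.
sample = [{"outputs": "форма демократии", "meta": {"provac_score": 2}}]
with open("sample.json", "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False)

# Reading it back works the same way for the real downloaded path.
with open("sample.json", encoding="utf-8") as f:
    data = json.load(f)
print(len(data), data[0]["outputs"])
```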

## Licensing Information

#### ⚖ MIT license

## Citation Information



```
@misc{slava2024,
  author = {Chetvergov, A. S. and Sharafetdinov, R. S. and Polukoshko, M. M. and Akhmetov, V. A. and Oruzheynikova, N. A. and Anichkov, E. S. and Bolovtsov, S. V.},
  title = {SLAVA: Benchmark of Sociopolitical Landscape and Value Analysis},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = "\url{https://huggingface.co/spaces/RANEPA-ai/SLAVA}"
}
```