---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- https://huggingface.co/datasets
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
---
|
# Dataset Card for SNLI_zh

## Dataset Description

- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [zh NLI](https://huggingface.co/datasets/shibing624/nli-zh-all)
- **Size of downloaded dataset files:** 4.7 GB
- **Total amount of disk used:** 4.7 GB
|
### Dataset Summary

A collection of Chinese natural language inference (NLI) datasets (nli-zh-all). A loading sketch follows below.
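A minimal sketch of loading the collection with the 🤗 `datasets` library (assuming a single `train` split, which matches the files listed under Data Splits below):

```python
from datasets import load_dataset

# Stream the collection to avoid downloading all ~4.7 GB up front.
dataset = load_dataset("shibing624/nli-zh-all", split="train", streaming=True)

# Peek at the first example: {"text1": ..., "text2": ..., "label": 0 or 1}
first = next(iter(dataset))
print(first["text1"], first["text2"], first["label"])
```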
|
### Supported Tasks and Leaderboards

Supported tasks: Chinese text matching, text similarity scoring, and related tasks.

Results for Chinese matching tasks rarely appear in top-conference papers, so I list results from a model I trained myself:

**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
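To illustrate the text-matching task, here is a short sketch scoring one of the sentence pairs from this dataset with `sentence-transformers`. The specific model is an assumption for illustration; any Chinese sentence-embedding model would do:

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model choice; substitute any Chinese sentence-embedding model.
model = SentenceTransformer("shibing624/text2vec-base-chinese")

sentences = ["借款后多长时间给打电话", "借款后多久打电话啊"]
embeddings = model.encode(sentences)

# Cosine similarity between the two sentence embeddings.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```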
|
### Languages

All data is Simplified Chinese text.
|
## Dataset Structure

### Data Instances

Examples from the `train` split look as follows:

```
{"text1":"借款后多长时间给打电话","text2":"借款后多久打电话啊","label":1}
{"text1":"没看到微粒贷","text2":"我借那么久也没有提升啊","label":0}
```
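Since each file is plain JSON Lines, the raw data can also be read without the `datasets` library (a sketch; the file path is illustrative and taken from the listing under Data Splits):

```python
import json

# Illustrative path to one of the files listed under Data Splits.
with open("nli-zh-all/snli_zh-train.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        print(example["text1"], example["text2"], example["label"])
        break  # only show the first record
```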
|
### Data Fields

The data fields are the same among all splits.

- `text1`: a `string` feature.
- `text2`: a `string` feature.
- `label`: a classification label; possible values are entailment (1) and contradiction (0).
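If named classes are more convenient than raw integers, the `label` column can be cast to a `ClassLabel` (a sketch; the name order follows the field description above, 0 = contradiction, 1 = entailment):

```python
from datasets import ClassLabel, load_dataset

dataset = load_dataset("shibing624/nli-zh-all", split="train")

# Map the integer labels to names: 0 -> contradiction, 1 -> entailment.
dataset = dataset.cast_column("label", ClassLabel(names=["contradiction", "entailment"]))

print(dataset.features["label"].int2str(dataset[0]["label"]))
```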
|
### Data Splits

Line counts after removing `None` values and records with `len(text) < 1`:

```shell
$ wc -l nli-zh-all/*
   48818 nli-zh-all/alpaca_gpt4-train.jsonl
    5000 nli-zh-all/amazon_reviews-train.jsonl
  519255 nli-zh-all/belle-train.jsonl
   16000 nli-zh-all/cblue_chip_sts-train.jsonl
  549326 nli-zh-all/chatmed_consult-train.jsonl
   10142 nli-zh-all/cmrc2018-train.jsonl
  395927 nli-zh-all/csl-train.jsonl
   50000 nli-zh-all/dureader_robust-train.jsonl
  709761 nli-zh-all/firefly-train.jsonl
    9568 nli-zh-all/mlqa-train.jsonl
  455875 nli-zh-all/nli_zh-train.jsonl
   50486 nli-zh-all/ocnli-train.jsonl
 2678694 nli-zh-all/simclue-train.jsonl
  419402 nli-zh-all/snli_zh-train.jsonl
    3024 nli-zh-all/webqa-train.jsonl
 1213780 nli-zh-all/wiki_atomic_edits-train.jsonl
   93404 nli-zh-all/xlsum-train.jsonl
 1006218 nli-zh-all/zhihu_kol-train.jsonl
 8234680 total
```
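The cleaning rule above can be reproduced in a few lines. The sketch below drops records with missing or empty fields and counts the survivors per file (directory layout assumed to match the listing above):

```python
import json
from pathlib import Path

total = 0
for path in sorted(Path("nli-zh-all").glob("*.jsonl")):
    kept = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            ex = json.loads(line)
            # Same filter as above: drop None values and empty texts.
            if ex.get("text1") and ex.get("text2") and ex.get("label") is not None:
                kept += 1
    total += kept
    print(f"{kept:>8} {path.name}")
print(f"{total:>8} total")
```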
|
### Data Length

![len](https://huggingface.co/datasets/shibing624/nli-zh-all/resolve/main/nli-zh-all-len.png)
|
## Dataset Creation

### Curation Rationale

Inspired by [m3e-base](https://huggingface.co/moka-ai/m3e-base#M3E%E6%95%B0%E6%8D%AE%E9%9B%86), this collection merges high-quality Chinese NLI (natural language inference) datasets and uploads them to Hugging Face Datasets for easy use.
|
### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

Copyright of each source dataset remains with its original authors; please respect the original datasets' licenses when using them.

- SNLI:

```bibtex
@inproceedings{snli:emnlp2015,
    Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
    Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    Publisher = {Association for Computational Linguistics},
    Title = {A large annotated corpus for learning natural language inference},
    Year = {2015}
}
```

#### Who are the annotators?

The original dataset authors.
|
### Social Impact of Dataset

This dataset was developed as a benchmark for evaluating representational systems for text, especially those induced by representation learning methods, on the task of predicting truth conditions in a given context.

Systems that succeed at this task may also be better at modeling semantic representations.
|
### Licensing Information

For academic research use.
|
### Contributions

[shibing624](https://github.com/shibing624) added this dataset.