repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Richard0113/distilbert-base-uncased-finetuned-cola | Richard0113 | distilbert | 24 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8311
- Matthews Correlation: 0.5199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
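An illustrative sketch of how these settings map onto 🤗 Transformers `TrainingArguments` (model and dataset loading are omitted; the output directory and the commented-out `Trainer` wiring are assumptions, not the exact script that produced this checkpoint):
```python
from transformers import TrainingArguments
# Illustrative mapping of the hyperparameters listed above; Adam with
# betas=(0.9, 0.999) and epsilon=1e-08 and the linear scheduler are the
# Trainer defaults, so no extra arguments are needed for them.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
# trainer = Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...)
# trainer.train()
```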
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5263 | 1.0 | 535 | 0.5272 | 0.4152 |
| 0.3504 | 2.0 | 1070 | 0.4835 | 0.5021 |
| 0.2372 | 3.0 | 1605 | 0.6059 | 0.5056 |
| 0.182 | 4.0 | 2140 | 0.7617 | 0.5179 |
| 0.1319 | 5.0 | 2675 | 0.8311 | 0.5199 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 22473d96a6b3119cb7082ebf68fcef88 |
DOOGLAK/Article_100v9_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['article100v9_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_100v9_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v9_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3011
- Precision: 0.4913
- Recall: 0.5293
- F1: 0.5096
- Accuracy: 0.8977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 44 | 0.3780 | 0.3029 | 0.2939 | 0.2984 | 0.8623 |
| No log | 2.0 | 88 | 0.3133 | 0.4705 | 0.4818 | 0.4761 | 0.8922 |
| No log | 3.0 | 132 | 0.3011 | 0.4913 | 0.5293 | 0.5096 | 0.8977 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 478d523c37021d33d9c0d3ad80d9e1af |
caffsean/bert-base-cased-deep-ritmo-sampa | caffsean | bert | 11 | 6 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,255 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-deep-ritmo-sampa
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4042 | 1.0 | 1875 | 3.0610 |
| 2.8648 | 2.0 | 3750 | 2.6298 |
| 2.6572 | 3.0 | 5625 | 2.5550 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 6361d26ed33844c366ba8cfee0bb32f9 |
facebook/mask2former-swin-large-cityscapes-semantic | facebook | mask2former | 5 | 153 | transformers | 0 | image-segmentation | true | false | false | other | null | ['coco'] | null | 1 | 0 | 1 | 0 | 1 | 0 | 1 | ['vision', 'image-segmentation'] | false | true | true | 2,931 | false |
# Mask2Former
Mask2Former model trained on Cityscapes semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | d7e81a703415c161af740493ce7e12d8 |
yam1ke/distilbert-base-uncased-finetuned-ner | yam1ke | distilbert | 10 | 22 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,549 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9285
- Recall: 0.9362
- F1: 0.9324
- Accuracy: 0.9839
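For reference, a minimal inference sketch for a checkpoint like this with the 🤗 `pipeline` API (the example sentence is illustrative, not from the evaluation data):
```python
from transformers import pipeline
# Load the fine-tuned NER checkpoint and merge sub-word tokens into whole entities.
ner = pipeline(
    "token-classification",
    model="yam1ke/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```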
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2452 | 1.0 | 878 | 0.0709 | 0.9184 | 0.9206 | 0.9195 | 0.9803 |
| 0.0501 | 2.0 | 1756 | 0.0621 | 0.9212 | 0.9328 | 0.9270 | 0.9830 |
| 0.0299 | 3.0 | 2634 | 0.0607 | 0.9285 | 0.9362 | 0.9324 | 0.9839 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 56d2b58d7e41e0a908072f2b7180b68e |
rajat99/Fine_Tuning_XLSR_300M_testing_model | rajat99 | wav2vec2 | 9 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,347 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_XLSR_300M_testing_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2861
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5178 | 23.53 | 400 | 3.2861 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 3ee93b8bb85d51bba1ed800c4fdc573a |
Wizounovziki/t5-small-devices-sum-ver3 | Wizounovziki | t5 | 11 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,350 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-devices-sum-ver3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1325
- Rouge1: 95.6631
- Rouge2: 83.6149
- Rougel: 95.6622
- Rougelsum: 95.6632
- Gen Len: 4.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 467 | 0.3307 | 90.9817 | 74.3762 | 90.9596 | 90.9781 | 4.7527 |
| 1.0254 | 2.0 | 934 | 0.2365 | 92.6761 | 78.1252 | 92.6664 | 92.6682 | 4.8004 |
| 0.3526 | 3.0 | 1401 | 0.1904 | 93.8503 | 80.4523 | 93.8286 | 93.8338 | 4.8221 |
| 0.2643 | 4.0 | 1868 | 0.1638 | 94.8079 | 82.1779 | 94.7815 | 94.7853 | 4.917 |
| 0.2075 | 5.0 | 2335 | 0.1503 | 95.1619 | 82.6284 | 95.1533 | 95.1578 | 4.9263 |
| 0.1831 | 6.0 | 2802 | 0.1408 | 95.2357 | 82.8152 | 95.2261 | 95.2263 | 4.9287 |
| 0.161 | 7.0 | 3269 | 0.1386 | 95.4993 | 83.2609 | 95.4935 | 95.4933 | 4.9269 |
| 0.1589 | 8.0 | 3736 | 0.1344 | 95.6363 | 83.4727 | 95.6304 | 95.632 | 4.9309 |
| 0.1517 | 9.0 | 4203 | 0.1330 | 95.6702 | 83.6329 | 95.6669 | 95.6736 | 4.9301 |
| 0.1436 | 10.0 | 4670 | 0.1325 | 95.6631 | 83.6149 | 95.6622 | 95.6632 | 4.9279 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| b46294928425f3e120964e95d3d1d26d |
cartesinus/multilingual_minilm-amazon_massive-intent_eu6_noen | cartesinus | bert | 12 | 36 | transformers | 0 | text-classification | true | false | false | mit | ['de', 'fr', 'it', 'pt', 'es', 'pl'] | ['AmazonScience/massive'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'nlu', 'text-classification', 'intent-classification'] | true | true | true | 2,022 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual_minilm-amazon_massive-intent_eu_noen
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [MASSIVE1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7794
- Accuracy: 0.8551
- F1: 0.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.7624 | 1.0 | 4318 | 1.5462 | 0.6331 | 0.6331 |
| 0.9535 | 2.0 | 8636 | 0.9628 | 0.7698 | 0.7698 |
| 0.6849 | 3.0 | 12954 | 0.8034 | 0.8097 | 0.8097 |
| 0.5163 | 4.0 | 17272 | 0.7444 | 0.8290 | 0.8290 |
| 0.3973 | 5.0 | 21590 | 0.7346 | 0.8383 | 0.8383 |
| 0.331 | 6.0 | 25908 | 0.7369 | 0.8453 | 0.8453 |
| 0.2876 | 7.0 | 30226 | 0.7325 | 0.8510 | 0.8510 |
| 0.2319 | 8.0 | 34544 | 0.7726 | 0.8496 | 0.8496 |
| 0.2098 | 9.0 | 38862 | 0.7803 | 0.8543 | 0.8543 |
| 0.1863 | 10.0 | 43180 | 0.7794 | 0.8551 | 0.8551 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2 | 040e4028975297f8c00ba9c03272a79c |
Jaiti/distilbert-base-uncased-finetuned-ner | Jaiti | distilbert | 12 | 19 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 927 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| e24f7e4a705581d5c80a0448a518a8cb |
eicu/avatar-jsjessy-low-facetuned-650 | eicu | null | 33 | 18 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,995 | false | ### avatar-jsjessy-low-facetuned-650 Dreambooth model trained by eicu with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
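If you prefer to run locally, here is a minimal `diffusers` sketch (it assumes this repository hosts a full Stable Diffusion pipeline, as DreamBooth Training Space exports usually do; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionPipeline
# Assumed: the repo contains a complete pipeline (UNet, VAE, text encoder, ...).
pipe = StableDiffusionPipeline.from_pretrained(
    "eicu/avatar-jsjessy-low-facetuned-650", torch_dtype=torch.float16
).to("cuda")
# Include the concept token "jsjessy" in the prompt, as noted below.
image = pipe("a portrait of jsjessy, digital art").images[0]
image.save("jsjessy.png")
```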
Sample pictures of:
jsjessy (use that in your prompt)
![jsjessy 0](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%281%29.jpg)![jsjessy 1](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%282%29.jpg)![jsjessy 2](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%283%29.jpg)![jsjessy 3](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%284%29.jpg)![jsjessy 4](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%285%29.jpg)![jsjessy 5](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%286%29.jpg)![jsjessy 6](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%287%29.jpg)![jsjessy 7](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%288%29.jpg)![jsjessy 8](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%289%29.jpg)![jsjessy 9](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%2810%29.jpg)![jsjessy 10](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%2811%29.jpg)![jsjessy 11](https://huggingface.co/eicu/avatar-jsjessy-low-facetuned-650/resolve/main/concept_images/jsjessy_%2812%29.jpg)
| 052ff355c1bceeba32fddbc7a15bf973 |
asalics/distilbert-base-uncased-finetuned-emotion | asalics | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 |
| 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| a23a588d84a015e8c53ff8e6a39dc013 |
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps | sd-concepts-library | null | 9 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,309 | false | ### Isabell Schulte - PVIII - 4tiles - 6000steps on Stable Diffusion
This is the `<isabell-schulte-p8-style-4tiles-6000s>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
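As an alternative to the notebooks, recent versions of `diffusers` can load the learned embedding directly; a minimal sketch, assuming a Stable Diffusion 1.x base model such as `runwayml/stable-diffusion-v1-5` (the prompt is illustrative):
```python
from diffusers import StableDiffusionPipeline
# Load a base pipeline and attach the textual-inversion embedding from this repo.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps")
# The placeholder token below activates the learned style.
image = pipe("a painting in the style of <isabell-schulte-p8-style-4tiles-6000s>").images[0]
image.save("style_sample.png")
```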
Here is the new concept you will be able to use as a `style`:
![<isabell-schulte-p8-style-4tiles-6000s> 0](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps/resolve/main/concept_images/0.jpeg)
![<isabell-schulte-p8-style-4tiles-6000s> 1](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps/resolve/main/concept_images/1.jpeg)
![<isabell-schulte-p8-style-4tiles-6000s> 2](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps/resolve/main/concept_images/3.jpeg)
![<isabell-schulte-p8-style-4tiles-6000s> 3](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps/resolve/main/concept_images/2.jpeg)
| b913c9630d6c55a621dce163b11e7c3d |
gagan3012/k2t-new | gagan3012 | t5 | 9 | 88 | transformers | 0 | text2text-generation | true | false | true | mit | ['en'] | ['common_gen'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | true | true | 2,195 | false |
# keytotext
![keytotext (1)](https://user-images.githubusercontent.com/49101362/116334480-f5e57a00-a7dd-11eb-987c-186477f94b6e.png)
The idea is to build a model that takes keywords as input and generates sentences as output.
### Keytotext is powered by Huggingface 🤗
[![pypi Version](https://img.shields.io/pypi/v/keytotext.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/keytotext/)
[![Downloads](https://static.pepy.tech/personalized-badge/keytotext?period=total&units=none&left_color=grey&right_color=orange&left_text=Pip%20Downloads)](https://pepy.tech/project/keytotext)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```
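A minimal usage sketch of the `keytotext` helper (it assumes the library's `pipeline` function accepts this model name; the keywords are illustrative):
```python
from keytotext import pipeline
# Assumed: "k2t-new" resolves to this checkpoint through the keytotext helper.
nlp = pipeline("k2t-new")
# Turn a list of keywords into a sentence.
print(nlp(["India", "wedding", "food"]))
```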
![carbon (3)](https://user-images.githubusercontent.com/49101362/116220679-90e64180-a755-11eb-9246-82d93d924a6c.png)
## UI:
UI: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)
![image](https://user-images.githubusercontent.com/49101362/116162205-fc042980-a6fd-11eb-892e-8f6902f193f4.png)
| 1d38f438a110eb61e4d3e51e15116dd1 |
milyiyo/distilbert-base-uncased-finetuned-amazon-review | milyiyo | distilbert | 12 | 27 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['amazon_reviews_multi'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,219 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon-review
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3494
- Accuracy: 0.693
- F1: 0.7003
- Precision: 0.7095
- Recall: 0.693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 |
| 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 |
| 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 |
| 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 |
| 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 |
| 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 |
| 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 |
| 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 |
| 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 |
| 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 7e10ea7475a81b1dd2b09d96c55daf4e |
gokuls/bert-base-uncased-wnli | gokuls | bert | 17 | 61 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,675 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6968
- Accuracy: 0.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7192 | 1.0 | 5 | 0.6968 | 0.4789 |
| 0.6928 | 2.0 | 10 | 0.7003 | 0.2676 |
| 0.6921 | 3.0 | 15 | 0.7057 | 0.5211 |
| 0.6931 | 4.0 | 20 | 0.7282 | 0.3944 |
| 0.6922 | 5.0 | 25 | 0.7579 | 0.2535 |
| 0.68 | 6.0 | 30 | 0.8314 | 0.2254 |
| 0.6652 | 7.0 | 35 | 0.8990 | 0.1831 |
| 0.627 | 8.0 | 40 | 1.0187 | 0.2254 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1b7c43519b30566b6153333a05ecd9fa |
orkg/orkgnlp-bioassays-semantification | orkg | null | 5 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 431 | false |
This repository includes the files required to run the `BioAssays Semantification` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
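Since the exported artifacts are ONNX models (see the note on `skl2onnx` below), they can be executed without scikit-learn; a minimal `onnxruntime` sketch, where the file name and input layout are assumptions rather than the actual service artifacts:
```python
import numpy as np
import onnxruntime as ort
# Assumed file name and input shape; the real artifacts may differ.
session = ort.InferenceSession("bioassays_semantification.onnx")
input_name = session.get_inputs()[0].name
features = np.zeros((1, 10), dtype=np.float32)  # placeholder feature vector
outputs = session.run(None, {input_name: features})
print(outputs[0])
```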
The [Scikit-Learn](https://scikit-learn.org/stable/) models are converted using [skl2onnx](https://github.com/onnx/sklearn-onnx) and may not include all original scikit-learn functionalities. | a5357898dc76891563a69c0b3ab9ff9a |
Omerdor/wet | Omerdor | null | 13 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,168 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# wet
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/wet/tensorboard?#scalars)
| 0d55ed075a795719b18f372c66c7447a |
aipicasso/cool-japan-diffusion-2-1-0-beta | aipicasso | null | 18 | 988 | diffusers | 17 | text-to-image | false | false | false | other | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 6,427 | false |
# Cool Japan Diffusion 2.1.0 Beta Model Card
![Eyecatch image](eyecatch.jpg)
[Notice: from January 10, 2023, China will impose legal restrictions on AI image generation.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (A warning for people located in China.)
English version is [here](README_en.md).
# Introduction
Cool Japan Diffusion (for training) is a model fine-tuned from Stable Diffusion and specialized in expressing "Cool Japan" content such as anime, manga, and games. It has no particular connection to the Cabinet Office's Cool Japan Strategy.
# About the License
The license is simply the original CreativeML Open RAIL++-M License with a prohibition on commercial use, barring exceptions, added on top.
The reason for adding this prohibition is the concern that the model could have a negative impact on the creative industry.
If this concern is dispelled, the next version will return to the original license and allow commercial use.
Incidentally, a Japanese translation of the original license is available [here](https://qiita.com/robitan/items/887d9f3153963114823d).
If you are at a for-profit company, please consult your legal department.
If you are using the model as a hobby, you should be fine as long as you follow common sense.
As stated in the license, if you modify this model, the modified version must inherit this license.
# Legal and Ethical Matters
This model was created in Japan, so Japanese law applies.
We maintain that training this model is legal under Article 30-4 of the Japanese Copyright Act.
We also maintain that distributing this model constitutes neither a principal offense nor aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, please see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as the license states, please handle the outputs of this model in accordance with the relevant laws and regulations.
That said, the author does believe that distributing this model is ethically questionable,
because permission was not obtained from the authors of the copyrighted works used for training.
However, the authors' permission is not legally required for training, and, as with search engines, there is no legal problem.
Therefore, please understand that this release also serves the purpose of examining the ethical, rather than the legal, aspects.
# How to Use
If you just want to try the model casually: on a PC, enter a prompt into the text form at the upper right and generate an image.
On a smartphone, scroll back to the top and generate from there.
Detailed instructions for this model are given in [this manual](https://alfredplpl.hatenablog.com/entry/2022/12/30/102636).
The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-0-beta/resolve/main/v2-1-0-beta.ckpt).
What follows is the standard model card content (originally provided as a Japanese translation).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
- **Model type:** Diffusion-model-based text-to-image generation model
- **Language:** Japanese
- **License:** CreativeML Open RAIL++-M-NC License
- **Model description:** This model can generate appropriate images from prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Example Usage
Usage is the same as for Stable Diffusion v2.
There are many ways to use the model; we provide two patterns:
- Web UI
- Diffusers
### Using the Web UI
Please follow [this manual](https://alfredplpl.hatenablog.com/entry/2022/12/30/102636) to set it up.
### Using Diffusers
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the library:
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate an image:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch
model_id = "aipicasso/cool-japan-diffusion-2-1-0-beta"
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "anime, a portrait of a girl with black short hair and red eyes, kimono, full color illustration, official art, 4k, detailed"
negative_prompt="low quality, bad face, bad anatomy, bad hand, lowres, jpeg artifacts, 2d, 3d, cg, text"
image = pipe(prompt,negative_prompt=negative_prompt).images[0]
image.save("girl.png")
```
**Notes**:
- Using [xformers](https://github.com/facebookresearch/xformers) reportedly speeds up generation.
- If your GPU has little memory, use `pipe.enable_attention_slicing()`.
#### Intended Uses
- Contests
  - Submissions to the [AI Art Grand Prix](https://www.aiartgrandprix.com/)
  - All data used for fine-tuning will be disclosed so that it can be judged whether entries meet the screening criteria; we will also apply in advance and obtain confirmation.
  - If you have requests regarding contests, please let me know, for example via the Hugging Face Community.
- News reporting on image-generation AI
  - Possible not only for public broadcasters but also for commercial companies
  - This is because we judged that the "right to know" about image-synthesis AI does not harm the creative industry, and out of respect for freedom of the press.
- Introducing Cool Japan
  - Explaining to people from other countries what Cool Japan is.
  - International students are often drawn to Japan by Cool Japan, and Alfred Increment feels they are very often disappointed to find that Cool Japan is regarded as "uncool" within Japan. Please take more pride in the parts of your own culture that people abroad admire.
- Research and development
  - Using the model on Discord
  - Prompt engineering
  - Fine-tuning (also known as additional training)
  - DreamBooth, etc.
  - Merging with other models
  - Studying how well the Latent Diffusion Model fits Cool Japan content
  - Evaluating the performance of this model with metrics such as FID
  - Verifying with checksums, hash functions, and the like that this model is independent of models other than Stable Diffusion
- Education
  - Graduation projects by art college and vocational school students
  - Graduation theses and class assignments by university students
  - Teachers explaining the current state of image-generation AI
- Self-expression
  - Expressing your own feelings and thoughts on social media
- Uses listed in the Hugging Face Community
  - Please ask in Japanese or English
#### Out-of-Scope Uses
- Presenting things as if they were facts
- Use in monetized content such as monetized YouTube videos
- Offering the model directly as a commercial service
- Doing things that would trouble teachers
- Anything else that would harm the creative industry
# Prohibited and Malicious Uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate the Copyright Act
  - In particular, do not publish images of existing characters (possible violation of the Copyright Act)
  - Note that the model can reportedly [generate characters it was not trained on](https://twitter.com/ThePioneerJPnew/status/1609074173892235264?s=20&t=-rY1ufzNeIDT3Fm5YdME6g). (That tweet itself is permitted as research.)
- Do not apply Image-to-Image to other people's works without permission (possible violation of the Copyright Act)
- Do not distribute obscene material (possible violation of Article 175 of the Penal Code)
- Do not ignore the generally accepted etiquette of the creative industry
- Do not present things that are not based on fact as if they were facts (the crime of forcible obstruction of business could apply)
  - Fake news
## Limitations and Bias
### Limitations
- Not yet well understood
### Bias
This model carries the same biases as Stable Diffusion.
Please be careful.
## Training
**Training data**
Stable Diffusion was fine-tuned mainly on the following data:
- For the VAE:
  - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 600,000 items (an effectively unlimited number of images produced via data augmentation)
- For the U-Net:
  - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 400,000 pairs
**Training process**
The VAE and U-Net of Stable Diffusion were fine-tuned.
- **Hardware:** RTX 3090
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch size:** 1
## Evaluation Results
## Environmental Impact
Almost none.
- **Hardware type:** RTX 3090
- **Hours used:** 300
- **Cloud provider:** None
- **Training location:** Japan
- **Carbon emitted:** Not much
## References
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by Alfred Increment based on [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
| 91468e31578e85464518be2462953747 |
Hazam/distilbert-base-uncased-finetuned-imdb | Hazam | distilbert | 9 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 771b8561a3d9f3e73585c8afeb56bcce |
Psunrise/finetuning-customer-sentiment-model-300-samples | Psunrise | roberta | 21 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-customer-sentiment-model-300-samples
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5949
- Accuracy: 0.7558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 8efe64a74690ffbad3caa3342cf33fd9 |
MultiBertGunjanPatrick/multiberts-seed-1-400k | MultiBertGunjanPatrick | bert | 7 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | true | true | 6,483 | false | # MultiBERTs Seed 1 Checkpoint 400k (uncased)
This is the MultiBERTs (pretrained BERT) model for seed 1 at the intermediate 400k-step checkpoint, trained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-400k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a schematic sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
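A schematic sketch of this 80/10/10 rule in Python (illustrative only, not the original preprocessing code; `mask_id` and `vocab_size` are assumed to come from the tokenizer):
```python
import random
def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15):
    """Select ~15% of positions; replace 80% with [MASK], 10% with a random
    token, and leave 10% unchanged. Unselected positions get label -100."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok
            roll = random.random()
            if roll < 0.8:
                corrupted[i] = mask_id
            elif roll < 0.9:
                corrupted[i] = random.randrange(vocab_size)
            # otherwise keep the original token
    return corrupted, labels
```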
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 91183398ccc9c4b5e3a9794ac73a3d49 |
musika/nes-acoustic-more-energy-vocals | musika | null | 13 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'music', 'generation', 'tensorflow'] | false | true | true | 1,081 | false |
# Musika Model: Nes_Acoustic_More_Energy_Vocals
## Model provided by: nakas
Pretrained Nes_Acoustic_More_Energy_Vocals model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained Nes_Acoustic_More_Energy_Vocals model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
| 872729c11313a074070ac07f59bcf4a5 |
google/bigbird-roberta-large | google | big_bird | 8 | 1,016 | transformers | 8 | fill-mask | true | false | true | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia', 'cc_news'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,769 | false |
# BigBird large model
BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
It is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long document summarization and question answering with long contexts.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BigBirdModel, BigBirdTokenizer
# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large")
# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", block_size=16, num_random_blocks=2)
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training Data
This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It used same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2).
## Training Procedure
Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model is trained to predict the masked tokens.
The model is warm-started from RoBERTa’s checkpoint.
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
| a5048e278d01469b41a96db2697645c8 |
ErykWdowiak/GPTalian | ErykWdowiak | gpt2 | 15 | 8 | transformers | 0 | text-generation | true | false | true | apache-2.0 | ['en', 'it', 'scn', 'nap'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'gpt2'] | false | true | true | 610 | false |
# GPTalian
This is a GPT2 model of Italian regional languages trained on [collections of Italian "dialect poetry"](http://dialectpoetry.com) by Luigi Bonaffini.
This is a multilingual model. Italians use the word "dialect" to describe their regional languages, but they are separate languages. And there's a lot of English in this dataset too.
The challenge of this project is to train a model to write the languages of Italy.
For those who do not know Italian, here's some (lowercase) text that you can type into the API box (or into the generation sketch below):
- oggi si parla il dialetto
- la sua poesia viene di
- ma non sempre trova
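A minimal local generation sketch with 🤗 Transformers (the sampling settings are illustrative):
```python
from transformers import pipeline
# Local equivalent of typing a prompt into the hosted API box.
generator = pipeline("text-generation", model="ErykWdowiak/GPTalian")
result = generator("oggi si parla il dialetto", max_length=50, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```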
| eb8b0916a49be434d607c700c1cfa0bc |
Helsinki-NLP/opus-mt-niu-sv | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-niu-sv
* source languages: niu
* target languages: sv
* OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt)
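For reference, a minimal translation sketch with 🤗 Transformers (assuming the standard MarianMT loading path; the input below is a placeholder for a Niuean sentence):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-niu-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Replace the placeholder with Niuean source text.
batch = tokenizer(["Replace me with a Niuean sentence."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```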
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.sv | 29.2 | 0.478 |
| 78f64c167cc4ad2e2fa593985db7c2c5 |
patrickvonplaten/wav2vec2-large-xls-r-300m-turkish-colab | patrickvonplaten | wav2vec2 | 15 | 11 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3864
- Wer: 0.3570
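For reference, a minimal transcription sketch with the 🤗 `pipeline` API (the audio file path is a placeholder):
```python
from transformers import pipeline
# Transcribe a Turkish audio file with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("path/to/turkish_audio.wav")["text"])
```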
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8302 | 3.67 | 400 | 0.6702 | 0.6903 |
| 0.4098 | 7.34 | 800 | 0.4574 | 0.4939 |
| 0.1908 | 11.01 | 1200 | 0.4350 | 0.4557 |
| 0.1279 | 14.68 | 1600 | 0.4204 | 0.4213 |
| 0.0966 | 18.35 | 2000 | 0.4238 | 0.3991 |
| 0.0782 | 22.02 | 2400 | 0.3822 | 0.3906 |
| 0.0613 | 25.69 | 2800 | 0.3982 | 0.3714 |
| 0.0477 | 29.36 | 3200 | 0.3864 | 0.3570 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 7ef74c984e640c499b9d903eb98cf0a8 |
tomekkorbak/kind_torvalds | tomekkorbak | null | 2 | 0 | null | 0 | null | false | false | false | mit | ['en'] | ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,874 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kind_torvalds
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.000286,
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kind_torvalds',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3663m9qy | 7cd1f4141409f5af192379cc0ebe8032 |
deval/distilbert-base-uncased-finetuned-ner | deval | distilbert | 13 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9277
- Recall: 0.9385
- F1: 0.9330
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
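In the absence of further details, a minimal usage sketch (illustrative only) with the token-classification pipeline on the CoNLL-2003 entity types (PER, ORG, LOC, MISC):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="deval/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is a company based in New York City."))
```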
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2454 | 1.0 | 878 | 0.0692 | 0.9106 | 0.9212 | 0.9159 | 0.9809 |
| 0.0517 | 2.0 | 1756 | 0.0616 | 0.9203 | 0.9352 | 0.9277 | 0.9834 |
| 0.0314 | 3.0 | 2634 | 0.0606 | 0.9277 | 0.9385 | 0.9330 | 0.9844 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
| fb8b5760f5c401871cb80cfc6c706a5d |
muhtasham/small-mlm-rotten_tomatoes-custom-tokenizer | muhtasham | bert | 10 | 0 | transformers | 1 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,468 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-rotten_tomatoes-custom-tokenizer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unspecified dataset (the model name suggests rotten_tomatoes, with a custom tokenizer).
It achieves the following results on the evaluation set:
- Loss: 7.0377
## Model description
More information needed
## Intended uses & limitations
More information needed
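As a rough illustration (not from the original author), the checkpoint can be tried with the fill-mask pipeline; since this repo ships a custom tokenizer, the mask token is read from the loaded tokenizer rather than hard-coded, and given the relatively high validation loss the predictions are mainly useful as a smoke test.
```python
from transformers import pipeline
fill = pipeline("fill-mask", model="muhtasham/small-mlm-rotten_tomatoes-custom-tokenizer")
# Use the tokenizer's own mask token, since this repo ships a custom tokenizer.
text = f"The movie was absolutely {fill.tokenizer.mask_token}."
for prediction in fill(text):
    print(prediction["token_str"], prediction["score"])
```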
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6287 | 0.47 | 500 | 7.2726 |
| 7.0283 | 0.94 | 1000 | 7.0982 |
| 6.7115 | 1.41 | 1500 | 6.9665 |
| 6.695 | 1.87 | 2000 | 7.2285 |
| 6.55 | 2.34 | 2500 | 6.9906 |
| 6.4289 | 2.81 | 3000 | 7.0377 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 540f8ab1fc0ad9f19fc486d86e1b3adc |
arvalinno/distilbert-base-uncased-finetuned-indosquad-v2 | arvalinno | distilbert | 12 | 7 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-indosquad-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6650
## Model description
More information needed
## Intended uses & limitations
More information needed
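Absent further documentation, a minimal usage sketch (illustrative; the example texts are made up) with the question-answering pipeline. The model name suggests an Indonesian SQuAD-v2-style dataset, so some questions may be unanswerable.
```python
from transformers import pipeline
qa = pipeline(
    "question-answering",
    model="arvalinno/distilbert-base-uncased-finetuned-indosquad-v2",
)
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Indonesia dan menjabat dari tahun 1945 hingga 1967.",
    handle_impossible_answer=True,  # SQuAD-v2-style data may contain unanswerable questions
)
print(result)
```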
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9015 | 1.0 | 9676 | 1.5706 |
| 1.6438 | 2.0 | 19352 | 1.5926 |
| 1.4714 | 3.0 | 29028 | 1.5253 |
| 1.3486 | 4.0 | 38704 | 1.6650 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 82466442732bd1064c56e07417dc1e99 |
alk/mt5-small-finetuned-cnn_dailymail-en-es | alk | mt5 | 8 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,647 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alk/mt5-small-finetuned-cnn_dailymail-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9490
- Validation Loss: 1.6920
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
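Since no usage details are given, here is a minimal sketch based on an assumption drawn from the model name (CNN/DailyMail-style summarization, possibly into Spanish), using the TensorFlow classes this card was trained with:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
name = "alk/mt5-small-finetuned-cnn_dailymail-en-es"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)
article = "Replace this string with any English news article to be summarized."
inputs = tokenizer(article, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```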
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 287112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9445 | 1.9068 | 0 |
| 2.2439 | 1.8106 | 1 |
| 2.1301 | 1.7582 | 2 |
| 2.0643 | 1.7378 | 3 |
| 2.0191 | 1.7181 | 4 |
| 1.9870 | 1.7033 | 5 |
| 1.9646 | 1.7015 | 6 |
| 1.9490 | 1.6920 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| 247770dbe9161cbdd92021a0fc081a35 |
gokuls/distilbert_add_GLUE_Experiment_stsb | gokuls | distilbert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,883 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2770
- Pearson: 0.0450
- Spearmanr: 0.0447
- Combined Score: 0.0448
## Model description
More information needed
## Intended uses & limitations
More information needed
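For reference, a minimal sketch (not part of the original card) of how an STS-B regression head is typically queried; this assumes the checkpoint loads with the standard DistilBERT sequence-classification head and outputs a single similarity logit on the 0-5 STS-B scale.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
name = "gokuls/distilbert_add_GLUE_Experiment_stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()  # regression output
print(similarity)
```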
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 4.11 | 1.0 | 23 | 2.2770 | 0.0450 | 0.0447 | 0.0448 |
| 2.2155 | 2.0 | 46 | 2.4336 | 0.0499 | 0.0451 | 0.0475 |
| 2.1634 | 3.0 | 69 | 2.3207 | 0.0729 | 0.0634 | 0.0681 |
| 2.0618 | 4.0 | 92 | 2.6080 | 0.0787 | 0.0783 | 0.0785 |
| 1.8586 | 5.0 | 115 | 2.4988 | 0.1020 | 0.1017 | 0.1018 |
| 1.6977 | 6.0 | 138 | 2.6166 | 0.1187 | 0.1137 | 0.1162 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| 9119dc822e4b692f7fe4848325a4d301 |
OpenMatch/ance-tele_triviaqa_qry-encoder | OpenMatch | bert | 7 | 2 | transformers | 0 | feature-extraction | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 904 | false |
This model is the **query** encoder of ANCE-Tele trained on TriviaQA, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE-Tele.
ANCE-Tele only trains with self-mined negatives (teleportation negatives) without using additional negatives (e.g., BM25, other DR systems) and eliminates the dependency on filtering strategies and distillation modules.
|NQ (Test)|R@5|R@20|R@100|
|:---|:---|:---|:---|
|ANCE-Tele|76.9|83.4|87.3|
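For a quick look at the encoder outside the OpenMatch toolkit, a minimal sketch with plain `transformers`; taking the [CLS] vector as the query embedding is an assumption here and may differ from the pooling used in the official ANCE-Tele code.
```python
import torch
from transformers import AutoTokenizer, AutoModel
name = "OpenMatch/ance-tele_triviaqa_qry-encoder"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
queries = ["who wrote the declaration of independence?"]
batch = tokenizer(queries, padding=True, truncation=True, max_length=32, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
query_embeddings = outputs.last_hidden_state[:, 0]  # [CLS] pooling (assumption)
print(query_embeddings.shape)  # (num_queries, hidden_size)
```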
```
@inproceedings{sun2022ancetele,
title={Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives},
author={Si Sun, Chenyan Xiong, Yue Yu, Arnold Overwijk, Zhiyuan Liu and Jie Bao},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
``` | c69de658d979e50bf8324d62ca760835 |
grinman/AIsee | grinman | null | 6 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,896 | false | Made using highly curated best quality masterful artwork from an ancient indonesian stone carving website, with some help from their independent doodling connoisseur brothers in arms, 3000 pieces of their best work.
Prompt used: aiseeic
aisee_10000.ckpt was made with Anything v.3.
aiseeic_15000.ckpt was made with SD 1.5.
AIsee (Anything) examples
![00043-170289776-Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, facial featur.png](https://s3.amazonaws.com/moonup/production/uploads/1669095647555-637c5f1b9495870ef76f1472.png)
![00042-3508825484-Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, facial featur.png](https://s3.amazonaws.com/moonup/production/uploads/1669095647512-637c5f1b9495870ef76f1472.png)
![00045-2382850738-Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, facial featur.png](https://s3.amazonaws.com/moonup/production/uploads/1669095647489-637c5f1b9495870ef76f1472.png)
AIsee SD examples
![00053-758175738-aiseeic, Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, faci.png](https://s3.amazonaws.com/moonup/production/uploads/1669095834760-637c5f1b9495870ef76f1472.png)
![00039-477119149-Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, facial featur.png](https://s3.amazonaws.com/moonup/production/uploads/1669095834642-637c5f1b9495870ef76f1472.png)
![00057-2487031390-aiseeic, Emma Watson full shot modeling as Jessica Rabbit, (EOS 5DS R, ISO100, f_8, 1_125, 84mm, postprocessed, crisp face, faci.png](https://s3.amazonaws.com/moonup/production/uploads/1669095834700-637c5f1b9495870ef76f1472.png)
I own nothing and I will be happy.
| a8bc2c02a0125c1ffe6d07f78dfe5c9c |
versae/whisper-large-nob-ncc-s | versae | whisper | 27 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['no', 'nb'] | ['NbAiLab/NCC_S'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,576 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Norwegian
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the NbAiLab/NCC_S dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2776
- Wer: 12.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
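A minimal transcription sketch (not from the original card); the audio path is a placeholder for any local Norwegian recording.
```python
from transformers import pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="versae/whisper-large-nob-ncc-s",
    chunk_length_s=30,  # split long recordings into 30-second windows
)
print(asr("norwegian_sample.mp3")["text"])
```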
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6892 | 0.2 | 1000 | 0.3177 | 15.1035 |
| 0.6782 | 0.4 | 2000 | 0.3033 | 13.4592 |
| 0.6317 | 0.6 | 3000 | 0.2909 | 13.7637 |
| 0.5609 | 0.8 | 4000 | 0.2803 | 12.6675 |
| 0.5726 | 1.0 | 5000 | 0.2776 | 12.5152 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.11.0
| d2855e58ec813c9d00624b5e7a6db4c3 |
rhizomuser/ddpm-butterflies-128 | rhizomuser | null | 11 | 1 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,232 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal, untested sketch (the original card left this as a TODO);
# assumes the standard diffusers DDPMPipeline API.
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("rhizomuser/ddpm-butterflies-128")
image = pipeline().images[0]  # one unconditional 128x128 sample
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rhizomuser/ddpm-butterflies-128/tensorboard?#scalars)
| 196e0393073407cad34ea685f9380926 |
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 | DrishtiSharma | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sat'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'sat', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard'] | true | true | true | 1,923 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-sat-a3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Wer: 0.3976
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs
```
2. To evaluate on speech-recognition-community-v2/dev_data:
Note: the Santali (Ol Chiki) language was not found in speech-recognition-community-v2/dev_data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.1266 | 33.29 | 100 | 2.8577 | 1.0 |
| 2.1549 | 66.57 | 200 | 1.0799 | 0.5542 |
| 0.5628 | 99.86 | 300 | 0.7973 | 0.4016 |
| 0.0779 | 133.29 | 400 | 0.8424 | 0.4177 |
| 0.0404 | 166.57 | 500 | 0.9048 | 0.4137 |
| 0.0212 | 199.86 | 600 | 0.8961 | 0.3976 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 84277fe6533a06beb0f5a730f7ecde7b |
davidaponte/mikovelliaponte-dog | davidaponte | null | 17 | 6 | diffusers | 0 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | true | true | 785 | false |
# DreamBooth model for the mikovelliaponte concept trained by davidaponte on the davidaponte/dreambooth-hackathon-images-miko dataset.
This is a Stable Diffusion model fine-tuned on the mikovelliaponte concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of mikovelliaponte dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('davidaponte/mikovelliaponte-dog')
image = pipeline().images[0]
image
```
| df0186e5ce46add2996b1bebb2d21377 |
tomekkorbak/quirky_ritchie | tomekkorbak | null | 2 | 0 | null | 0 | null | false | false | false | mit | ['en'] | ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,760 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# quirky_ritchie
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.00078},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'quirky_ritchie',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/xb3x3sd4 | 5321432c5a5126d5767edd0737ab9b4c |
cduncanja/emotion_model | cduncanja | bert | 24 | 2 | transformers | 0 | text-classification | true | false | false | mit | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,447 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_model
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7815
- F1: 0.1455
## Model description
More information needed
## Intended uses & limitations
More information needed
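Given the very small training set and modest F1, the checkpoint is mostly useful for smoke tests; a minimal sketch (illustrative only) with the text-classification pipeline, where the label names come from whatever mapping was saved in the config:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="cduncanja/emotion_model")
print(classifier("I am so happy to see you again!"))
print(classifier("This is the worst day of my life."))
```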
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7968 | 1.0 | 2 | 1.7804 | 0.2286 |
| 1.7918 | 2.0 | 4 | 1.7812 | 0.2286 |
| 1.7867 | 3.0 | 6 | 1.7822 | 0.08 |
| 1.7884 | 4.0 | 8 | 1.7816 | 0.08 |
| 1.7833 | 5.0 | 10 | 1.7815 | 0.1455 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.11.0
| df75601d6e8ebd532fe3e7fa8b11861d |
Geotrend/bert-base-en-fr-zh-ja-vi-cased | Geotrend | bert | 8 | 2 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,319 | false |
# bert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | 084234962dad937d3aea84d70c96b829 |
Geotrend/distilbert-base-en-es-it-cased | Geotrend | distilbert | 6 | 3 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,233 | false |
# distilbert-base-en-es-it-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-it-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | bec0355dc2e3bd0dfd40e755dc3ea9f1 |
fusing/glide-base | fusing | null | 16 | 0 | null | 1 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,891 | false |
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
**Paper**: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741)
**Abstract**:
*Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing.*
## Usage
```python
# !pip install diffusers
import torch
from diffusers import DiffusionPipeline
import PIL.Image
model_id = "fusing/glide-base"
# load model and scheduler
pipeline = DiffusionPipeline.from_pretrained(model_id)
# run inference (text-conditioned denoising + upscaling)
img = pipeline("a crayon drawing of a corgi")
# process image to PIL
img = img.squeeze(0)
img = ((img + 1)*127.5).round().clamp(0, 255).to(torch.uint8).cpu().numpy()
image_pil = PIL.Image.fromarray(img)
# save image
image_pil.save("test.png")
```
## Samples
1. ![sample_1](https://huggingface.co/datasets/anton-l/images/resolve/main/glide1.png)
2. ![sample_2](https://huggingface.co/datasets/anton-l/images/resolve/main/glide2.png)
3. ![sample_3](https://huggingface.co/datasets/anton-l/images/resolve/main/glide3.png)
| b4e6a980656da4307c2ef54292ca986f |
JulienDespres/whisper-small-fr | JulienDespres | whisper | 7 | 3 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,006 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Fr - Despres Julien
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- training_steps: 6000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.5.2
- Tokenizers 0.12.1
| 5692d83fdc2dfed71e10667ab610ca63 |
ufal/byt5-small-multilexnorm2021-trde | ufal | t5 | 6 | 4 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | ['tr', 'de', 'multilingual'] | ['mc4', 'wikipedia', 'multilexnorm'] | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | ['lexical normalization'] | false | true | true | 2,766 | false |
# Fine-tuned ByT5-small for MultiLexNorm (Turkish-German version)
![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png)
This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
| 4621735d4626bc5220d8c0abb695c485 |
lmqg/mt5-base-frquad-qag | lmqg | mt5 | 13 | 72 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['fr'] | ['lmqg/qag_frquad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['questions and answers generation'] | true | true | true | 4,117 | false |
# Model Card of `lmqg/mt5-base-frquad-qag`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question & answer pair generation task on the [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** fr
- **Training data:** [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="lmqg/mt5-base-frquad-qag")
# model prediction
question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-frquad-qag")
output = pipe("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_frquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 78.28 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedF1Score (MoverScore) | 51.66 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedPrecision (BERTScore) | 78.36 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedPrecision (MoverScore) | 51.73 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedRecall (BERTScore) | 78.21 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
| QAAlignedRecall (MoverScore) | 51.59 | default | [lmqg/qag_frquad](https://huggingface.co/datasets/lmqg/qag_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_frquad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 256
- epoch: 11
- batch: 8
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-frquad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 378fe61d00994caabefa7319ae9a2914 |
Helsinki-NLP/opus-mt-sv-ho | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-sv-ho
* source languages: sv
* target languages: ho
* OPUS readme: [sv-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ho/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ho/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ho/opus-2020-01-16.eval.txt)
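A brief usage sketch (not part of the original card) with the Marian classes in `transformers`; the Swedish input sentence is just an example.
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-sv-ho"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Hur mår du idag?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```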
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ho | 26.7 | 0.503 |
| da789960dbca84aab4c0336148cde753 |
ejin/bert-base-cased-finetuned-ner | ejin | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0919
- Precision: 0.8940
- Recall: 0.9009
- F1: 0.8974
- Accuracy: 0.9750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1147 | 1.0 | 1756 | 0.0919 | 0.8940 | 0.9009 | 0.8974 | 0.9750 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 77cb7333f27b92b522772a4a3ec4743d |
sentence-transformers/xlm-r-bert-base-nli-mean-tokens | sentence-transformers | xlm-roberta | 13 | 293 | sentence-transformers | 0 | sentence-similarity | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 3,835 | false |
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-bert-base-nli-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-bert-base-nli-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | e55f5d76b5093675ab25ebded54c8cef |
Geotrend/bert-base-pl-cased | Geotrend | bert | 8 | 7 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['pl'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,283 | false |
# bert-base-pl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-pl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. | 511ca290044fc7bf12e4c46b96dc6e52 |
aliprf/Ad-Corre | aliprf | null | 25 | 0 | null | 0 | null | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Ad-Corre', 'facial expression recognition', 'emotion recognition', 'expression recognition', 'computer vision', 'CNN', 'loss', 'IEEE Access', 'Tensor Flow'] | false | true | true | 5,134 | false |
# Ad-Corre
Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ad-corre-adaptive-correlation-based-loss-for/facial-expression-recognition-on-raf-db)](https://paperswithcode.com/sota/facial-expression-recognition-on-raf-db?p=ad-corre-adaptive-correlation-based-loss-for)
<!--
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ad-corre-adaptive-correlation-based-loss-for/facial-expression-recognition-on-affectnet)](https://paperswithcode.com/sota/facial-expression-recognition-on-affectnet?p=ad-corre-adaptive-correlation-based-loss-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ad-corre-adaptive-correlation-based-loss-for/facial-expression-recognition-on-fer2013)](https://paperswithcode.com/sota/facial-expression-recognition-on-fer2013?p=ad-corre-adaptive-correlation-based-loss-for)
-->
#### Link to the paper (open access):
https://ieeexplore.ieee.org/document/9727163
#### Link to the paperswithcode.com:
https://paperswithcode.com/paper/ad-corre-adaptive-correlation-based-loss-for
```
Please cite this work as:
@ARTICLE{9727163,
author={Fard, Ali Pourramezan and Mahoor, Mohammad H.},
journal={IEEE Access},
title={Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild},
year={2022},
volume={},
number={},
pages={1-1},
doi={10.1109/ACCESS.2022.3156598}}
```
## Introduction
Automated Facial Expression Recognition (FER) in the wild using deep neural networks is still challenging due to intra-class variations and inter-class similarities in facial images. Deep Metric Learning (DML) is among the widely used methods to deal with these issues by improving the discriminative power of the learned embedded features. This paper proposes an Adaptive Correlation (Ad-Corre) Loss to guide the network towards generating embedded feature vectors with high correlation for within-class samples and less correlation for between-class samples. Ad-Corre consists of 3 components called Feature Discriminator, Mean Discriminator, and Embedding Discriminator. We design the Feature Discriminator component to guide the network to create the embedded feature vectors to be highly correlated if they belong to a similar class, and less correlated if they belong to different classes. In addition, the Mean Discriminator component leads the network to make the mean embedded feature vectors of different classes to be less similar to each other. We use Xception network as the backbone of our model, and contrary to previous work, we propose an embedding feature space that contains k feature vectors. Then, the Embedding Discriminator component penalizes the network to generate the embedded feature vectors, which are dissimilar. We trained our model using the combination of our proposed loss functions called Ad-Corre Loss jointly with the cross-entropy loss. We achieved a very promising recognition accuracy on AffectNet, RAF-DB, and FER-2013. Our extensive experiments and ablation study indicate the power of our method to cope well with challenging FER tasks in the wild.
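The exact Ad-Corre formulation is given in the paper and implemented in this repository. As a rough, toy illustration of the general idea only — this is not the paper's loss, and the weighting below is invented for the sketch — a correlation-style penalty that pulls same-class embeddings together and pushes different-class embeddings apart could look like this:
```
import tensorflow as tf

def toy_correlation_penalty(embeddings, labels):
    # Toy sketch only, NOT the official Ad-Corre loss.
    # embeddings: float tensor of shape (batch, dim); labels: int tensor of shape (batch,)
    # Center and L2-normalize each embedding so dot products behave like Pearson correlations.
    x = embeddings - tf.reduce_mean(embeddings, axis=1, keepdims=True)
    x = tf.math.l2_normalize(x, axis=1)
    corr = tf.matmul(x, x, transpose_b=True)  # (batch, batch) pairwise correlations
    same_class = tf.cast(tf.equal(labels[:, None], labels[None, :]), corr.dtype)
    pull = (1.0 - corr) * same_class              # want high correlation within a class
    push = tf.nn.relu(corr) * (1.0 - same_class)  # want low correlation across classes
    return tf.reduce_mean(pull + push)
```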
## Evaluation and Samples
The following samples are taken from the paper:
![Samples](https://github.com/aliprf/Ad-Corre/blob/main/paper_graphical_items/samples.jpg?raw=true)
----------------------------------------------------------------------------------------------------------------------------------
## Installing the requirements
In order to run the code you need to install python >= 3.5.
The requirements and the libraries needed to run the code can be installed using the following command:
```
pip install -r requirements.txt
```
## Using the pre-trained models
The pretrained models for Affectnet, RafDB, and Fer2013 are provided in the [Trained_Models](https://github.com/aliprf/Ad-Corre/tree/main/Trained_Models) folder. You can use the following code to predict the facial emotion of a facial image:
```
tester = TestModels(h5_address='./trained_models/AffectNet_6336.h5')
tester.recognize_fer(img_path='./img.jpg')
```
Please see the following [main.py](https://github.com/aliprf/Ad-Corre/tree/main/main.py) file.
## Training Network from scratch
The information and the code to train the model are provided in train.py. Please see the following [main.py](https://github.com/aliprf/Ad-Corre/tree/main/main.py) file:
```
'''training part'''
trainer = TrainModel(dataset_name=DatasetName.affectnet, ds_type=DatasetType.train_7)
trainer.train(arch="xcp", weight_path="./")
```
### Preparing Data
Data needs to be normalized and saved in npy format.
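A minimal sketch of what that could look like is shown below (the file names, image size, and normalization are assumptions — check the repository's data loader for the exact expected layout):
```
import numpy as np
from PIL import Image

# Hypothetical example: resize to the network input size and scale pixels to [0, 1]
img = Image.open("face.jpg").convert("RGB").resize((224, 224))
arr = np.asarray(img, dtype=np.float32) / 255.0
np.save("face.npy", arr)  # saved in npy format
```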
---------------------------------------------------------------
```
Please cite this work as:
@ARTICLE{9727163,
author={Fard, Ali Pourramezan and Mahoor, Mohammad H.},
journal={IEEE Access},
title={Ad-Corre: Adaptive Correlation-Based Loss for Facial Expression Recognition in the Wild},
year={2022},
volume={},
number={},
pages={1-1},
doi={10.1109/ACCESS.2022.3156598}}
```
| 115c4bfed6d214c694295064d53b59e6 |
Filial/distilbert-base-uncased-finetuned-squad | Filial | distilbert | 12 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.218 | 1.0 | 5533 | 1.1630 |
| 0.9616 | 2.0 | 11066 | 1.1310 |
| 0.7547 | 3.0 | 16599 | 1.1581 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| aa0d84e088b733e8e51aa191c4425b21 |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e12 | theojolliffe | bart | 13 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,642 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e12
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8658
- Rouge1: 57.2678
- Rouge2: 43.347
- Rougel: 47.0854
- Rougelsum: 55.4167
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2548 | 1.0 | 795 | 0.9154 | 53.4249 | 34.0377 | 36.4396 | 50.9884 | 141.8889 |
| 0.6994 | 2.0 | 1590 | 0.8213 | 54.7613 | 35.9428 | 38.3899 | 51.9527 | 142.0 |
| 0.5272 | 3.0 | 2385 | 0.7703 | 53.8561 | 35.4871 | 38.0502 | 51.131 | 141.8889 |
| 0.3407 | 4.0 | 3180 | 0.7764 | 53.9514 | 35.8553 | 39.1935 | 51.7005 | 142.0 |
| 0.2612 | 5.0 | 3975 | 0.7529 | 54.4056 | 36.2605 | 40.8003 | 52.0424 | 142.0 |
| 0.1702 | 6.0 | 4770 | 0.8105 | 54.2251 | 37.1441 | 41.2472 | 52.2803 | 142.0 |
| 0.1276 | 7.0 | 5565 | 0.8004 | 56.49 | 40.4009 | 44.018 | 54.2404 | 141.5556 |
| 0.0978 | 8.0 | 6360 | 0.7890 | 56.6339 | 40.9867 | 43.9603 | 54.4468 | 142.0 |
| 0.0711 | 9.0 | 7155 | 0.8285 | 56.0469 | 40.7758 | 44.1395 | 53.9668 | 142.0 |
| 0.0649 | 10.0 | 7950 | 0.8498 | 56.9873 | 42.4721 | 46.705 | 55.2188 | 142.0 |
| 0.0471 | 11.0 | 8745 | 0.8547 | 57.7898 | 43.4238 | 46.5868 | 56.0858 | 142.0 |
| 0.0336 | 12.0 | 9540 | 0.8658 | 57.2678 | 43.347 | 47.0854 | 55.4167 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 5ff1480d945c8538c543ce5b6910f35a |
anton-l/wav2vec2-large-xlsr-53-romanian | anton-l | wav2vec2 | 9 | 391 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['ro'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,670 | false |
# Wav2Vec2-Large-XLSR-53-Romanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ro", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romanian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ro.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-romanian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ro/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ro/clips/"
def clean_sentence(sent):
sent = sent.lower()
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 24.84 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
| 6ecad321ef52a49a551921f924582909 |
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_rte_96 | gokuls | distilbert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,058 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_rte_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4234
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4577 | 1.0 | 10 | 0.4423 | 0.4729 |
| 0.4372 | 2.0 | 20 | 0.4341 | 0.4729 |
| 0.43 | 3.0 | 30 | 0.4300 | 0.4729 |
| 0.4261 | 4.0 | 40 | 0.4273 | 0.4729 |
| 0.4229 | 5.0 | 50 | 0.4253 | 0.4729 |
| 0.42 | 6.0 | 60 | 0.4241 | 0.4729 |
| 0.4188 | 7.0 | 70 | 0.4236 | 0.4729 |
| 0.4179 | 8.0 | 80 | 0.4234 | 0.4729 |
| 0.4176 | 9.0 | 90 | 0.4235 | 0.4729 |
| 0.4165 | 10.0 | 100 | 0.4235 | 0.4729 |
| 0.418 | 11.0 | 110 | 0.4238 | 0.4729 |
| 0.4174 | 12.0 | 120 | 0.4238 | 0.4729 |
| 0.4171 | 13.0 | 130 | 0.4237 | 0.4729 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1f2a071596e810d9a070673b07da56fb |
gokuls/distilbert_sa_GLUE_Experiment_data_aug_wnli_96 | gokuls | distilbert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,626 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_wnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6935
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.693 | 1.0 | 218 | 0.6935 | 0.5634 |
| 0.6226 | 2.0 | 436 | 1.4150 | 0.1549 |
| 0.5091 | 3.0 | 654 | 1.7966 | 0.1268 |
| 0.4594 | 4.0 | 872 | 2.1812 | 0.1127 |
| 0.4125 | 5.0 | 1090 | 2.6036 | 0.0845 |
| 0.3697 | 6.0 | 1308 | 3.0124 | 0.0704 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| fbd171bc876c4f84b04fab0101474a4c |
Helsinki-NLP/opus-mt-sv-tw | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-sv-tw
* source languages: sv
* target languages: tw
* OPUS readme: [sv-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.eval.txt)
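The checkpoint can be loaded with the standard Marian classes from `transformers`; a minimal sketch (the example sentence is arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-tw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hur mår du idag?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```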
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tw | 30.7 | 0.509 |
| 68f57a615f861ffa5515b4ddc57abf4b |
cat666/VToooo | cat666 | null | 20 | 44 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | true | true | 1,476 | false |
Trained with a learning rate of 2.5e-6 on a single A6000. I have been too busy recently to work on this actively, and funding is a bit tight, so progress may be slow.
The model is somewhat overtrained, so please treat it as an interesting experimental checkpoint. (Warning: resolutions of 768x832 or above are recommended; results below that seem less than ideal.)
I plan to upload more actively in the near future; a loading sketch follows below.
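For reference, a minimal loading sketch with `diffusers` (this assumes the weights follow the standard Stable Diffusion layout; the prompt and sampler settings are only examples, keeping the resolution recommendation above in mind):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("cat666/VToooo", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(
    "1girl, virtual youtuber, detailed background",
    width=768, height=832,          # stay at or above the recommended resolution
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```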
If you need my help or have better suggestions, come to [Discord server](https://discord.gg/BHb4HvTc6t)
[![Discord Server](https://media.discordapp.net/attachments/738013665286160445/1059013462925254676/image.png)](https://discord.gg/BHb4HvTc6t)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 276b40c8f98502947a7f42cac18d7028 |
Helsinki-NLP/opus-mt-taw-en | Helsinki-NLP | marian | 11 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | ['lo', 'th', 'taw', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,102 | false |
### taw-eng
* source group: Tai
* target group: English
* OPUS readme: [taw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md)
* model: transformer
* source language(s): lao tha
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lao-eng.lao.eng | 1.1 | 0.133 |
| Tatoeba-test.multi.eng | 38.9 | 0.572 |
| Tatoeba-test.tha-eng.tha.eng | 40.6 | 0.588 |
### System Info:
- hf_name: taw-eng
- source_languages: taw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lo', 'th', 'taw', 'en']
- src_constituents: {'lao', 'tha'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt
- src_alpha3: taw
- tgt_alpha3: eng
- short_pair: taw-en
- chrF2_score: 0.5720000000000001
- bleu: 38.9
- brevity_penalty: 1.0
- ref_len: 7630.0
- src_name: Tai
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: taw
- tgt_alpha2: en
- prefer_old: False
- long_pair: taw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 5b6b8edc19603c0f7486adf026f45ae9 |
Hoax0930/marian-finetuned-kde4-en-to-ja | Hoax0930 | marian | 16 | 1 | transformers | 0 | translation | true | false | false | apache-2.0 | null | ['kde4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'generated_from_trainer'] | true | true | true | 1,085 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ja
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Bleu: 37.1098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 54e634c96e859fdfd5aa3b482998b3cd |
paola-md/recipe-gauss-2 | paola-md | roberta | 6 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,984 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-gauss-2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4204
- Rmse: 0.6484
- Mse: 0.4204
- Mae: 0.4557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|
| 0.4002 | 1.0 | 3029 | 0.4228 | 0.6502 | 0.4228 | 0.4485 |
| 0.3986 | 2.0 | 6058 | 0.4200 | 0.6481 | 0.4200 | 0.4566 |
| 0.3985 | 3.0 | 9087 | 0.4217 | 0.6494 | 0.4217 | 0.4515 |
| 0.3977 | 4.0 | 12116 | 0.4212 | 0.6490 | 0.4212 | 0.4528 |
| 0.397 | 5.0 | 15145 | 0.4251 | 0.6520 | 0.4251 | 0.4461 |
| 0.397 | 6.0 | 18174 | 0.4203 | 0.6483 | 0.4203 | 0.4665 |
| 0.3968 | 7.0 | 21203 | 0.4211 | 0.6489 | 0.4211 | 0.4533 |
| 0.3964 | 8.0 | 24232 | 0.4208 | 0.6487 | 0.4208 | 0.4543 |
| 0.3963 | 9.0 | 27261 | 0.4199 | 0.6480 | 0.4199 | 0.4604 |
| 0.3961 | 10.0 | 30290 | 0.4204 | 0.6484 | 0.4204 | 0.4557 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
| ec7d9002cb62aa9522c162ff415798df |
sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned | sentence-transformers | distilbert | 13 | 230 | sentence-transformers | 2 | sentence-similarity | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 3,714 | false |
# sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | 9309359a30dc7181fbb5e4b5affa3f7b |
shila/distilbert-base-uncased-finetuned-squad | shila | distilbert | 14 | 2 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2_loading_script'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,297 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2_loading_script dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 5.4661 |
| No log | 2.0 | 30 | 5.0915 |
| No log | 3.0 | 45 | 4.9348 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 28b032adefa395d8e426ad66174916d4 |
gokuls/distilbert_sa_GLUE_Experiment_cola_256 | gokuls | distilbert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,945 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_cola_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6165
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6103 | 1.0 | 34 | 0.6217 | 0.0 |
| 0.6077 | 2.0 | 68 | 0.6179 | 0.0 |
| 0.606 | 3.0 | 102 | 0.6182 | 0.0 |
| 0.6062 | 4.0 | 136 | 0.6165 | 0.0 |
| 0.5906 | 5.0 | 170 | 0.6183 | 0.0961 |
| 0.5491 | 6.0 | 204 | 0.6250 | 0.0495 |
| 0.512 | 7.0 | 238 | 0.6579 | 0.1173 |
| 0.4877 | 8.0 | 272 | 0.6908 | 0.1043 |
| 0.464 | 9.0 | 306 | 0.6860 | 0.1197 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| f77b9ff2f305f57643374bd17c75d7eb |
PeterBanning71/t5-small-finetuned-eLife | PeterBanning71 | t5 | 14 | 0 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,576 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eLife
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8960
- Rouge1: 14.7239
- Rouge2: 2.8698
- Rougel: 11.0202
- Rougelsum: 13.3642
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3558 | 1.0 | 544 | 2.9587 | 13.7915 | 2.6556 | 10.3265 | 12.5097 | 19.0 |
| 3.1299 | 2.0 | 1088 | 2.9079 | 14.7136 | 2.7492 | 10.836 | 13.3664 | 19.0 |
| 3.0917 | 3.0 | 1632 | 2.8960 | 14.7239 | 2.8698 | 11.0202 | 13.3642 | 19.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 683eb141592b6db8ff0f9b5b0e28d71c |
Jingmiao/whisper-small-zh_tw | Jingmiao | whisper | 30 | 31 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,578 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Chinese (Taiwan)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 zh-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2283
- Wer: 41.9652
## Model description
More information needed
## Intended uses & limitations
More information needed
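For basic transcription, a minimal sketch using the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Jingmiao/whisper-small-zh_tw")
print(asr("audio_sample.wav")["text"])
```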
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0049 | 6.02 | 1000 | 0.2283 | 41.9652 |
| 0.0008 | 13.02 | 2000 | 0.2556 | 42.0266 |
| 0.0004 | 20.01 | 3000 | 0.2690 | 42.4156 |
| 0.0003 | 27.0 | 4000 | 0.2788 | 42.7840 |
| 0.0002 | 33.02 | 5000 | 0.2826 | 43.0297 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| a8a753097ef44ab09bf5c7f29e453051 |
d0r1h/testt5 | d0r1h | t5 | 13 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,064 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_assets
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8718
- Rouge1: 35.7712
- Rouge2: 15.2129
- Rougel: 25.9007
- Rougelsum: 33.3105
- Gen Len: 64.7175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 422e310037f1786b7e15bbd3cfc0379b |
KarelDO/gpt2.CEBaB_confounding.food_service_positive.absa.5-class.seed_44 | KarelDO | gpt2 | 15 | 2 | transformers | 0 | null | true | false | false | mit | ['en'] | ['OpenTable'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,099 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2.CEBaB_confounding.food_service_positive.absa.5-class.seed_44
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8418
- Accuracy: 0.7528
- Macro-f1: 0.7495
- Weighted-macro-f1: 0.7542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 44
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| fada9c0b9ff0be2541571bb391584faa |
omkarp/vit-base-patch16-224-finetuned-eurosat | omkarp | vit | 24 | 3 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,337 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3905
- Accuracy: 0.4865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6128 | 0.97 | 15 | 1.3905 | 0.4865 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 15243a426221ac1592e33ff56902ca6f |
ImageIN/convnext-tiny-224_finetuned | ImageIN | convnext | 7 | 9 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,230 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224_finetuned
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0895
- Precision: 0.9807
- Recall: 0.9608
- F1: 0.9702
- Accuracy: 0.9776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 46 | 0.3080 | 0.9096 | 0.6852 | 0.7206 | 0.8365 |
| No log | 2.0 | 92 | 0.1644 | 0.9660 | 0.9176 | 0.9386 | 0.9551 |
| No log | 3.0 | 138 | 0.0974 | 0.9742 | 0.9586 | 0.9661 | 0.9744 |
| No log | 4.0 | 184 | 0.0795 | 0.9829 | 0.9670 | 0.9746 | 0.9808 |
| No log | 5.0 | 230 | 0.0838 | 0.9807 | 0.9608 | 0.9702 | 0.9776 |
| No log | 6.0 | 276 | 0.0838 | 0.9807 | 0.9608 | 0.9702 | 0.9776 |
| No log | 7.0 | 322 | 0.0803 | 0.9829 | 0.9670 | 0.9746 | 0.9808 |
| No log | 8.0 | 368 | 0.0869 | 0.9807 | 0.9608 | 0.9702 | 0.9776 |
| No log | 9.0 | 414 | 0.0897 | 0.9807 | 0.9608 | 0.9702 | 0.9776 |
| No log | 10.0 | 460 | 0.0895 | 0.9807 | 0.9608 | 0.9702 | 0.9776 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 81aeca2a363ad57bf36ae8a6996d1142 |
google/multiberts-seed_3-step_40k | google | bert | 8 | 15 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_40k'] | false | true | true | 3,515 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 3, Step 40k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #3, captured at step 40k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_40k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_40k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| b557bca715a47963bed13bb11a16b3bf |
jmassot/distilbert-base-uncased-jm-distilled-clinc_hub | jmassot | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,800 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-jm-distilled-clinc_hub
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1291
- Accuracy: 0.9426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1473 | 1.0 | 318 | 0.7476 | 0.7529 |
| 0.5789 | 2.0 | 636 | 0.3733 | 0.8858 |
| 0.3175 | 3.0 | 954 | 0.2273 | 0.9194 |
| 0.2106 | 4.0 | 1272 | 0.1733 | 0.9335 |
| 0.1666 | 5.0 | 1590 | 0.1521 | 0.9365 |
| 0.1452 | 6.0 | 1908 | 0.1408 | 0.9416 |
| 0.133 | 7.0 | 2226 | 0.1349 | 0.9432 |
| 0.1257 | 8.0 | 2544 | 0.1316 | 0.9439 |
| 0.1218 | 9.0 | 2862 | 0.1298 | 0.9426 |
| 0.1197 | 10.0 | 3180 | 0.1291 | 0.9426 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.1
| 074535520e9875095ef4cf60a5efdb22 |
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-2 | anas-awadalla | roberta | 17 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 12e8da2198a0d6466841f1a6662737ce |
jonatasgrosman/exp_w2v2t_th_r-wav2vec2_s730 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 462 | false | # exp_w2v2t_th_r-wav2vec2_s730
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
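A minimal transcription sketch with that tool (the audio path is a placeholder, assuming the standard HuggingSound API):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_r-wav2vec2_s730")
transcriptions = model.transcribe(["path/to/audio.wav"])
print(transcriptions[0]["transcription"])
```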
| d0527a8a42764aadbb7412bf2cce6440 |
valhalla/t5-base-qg-hl | valhalla | t5 | 10 | 72,604 | transformers | 2 | text2text-generation | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-generation'] | false | true | true | 1,091 | false |
## T5 for question-generation
This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example
`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For more details see [this](https://github.com/patil-suraj/question_generation) repo.
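If you prefer to call the model directly through `transformers` rather than the helper pipeline below, a minimal sketch (generation settings are arbitrary; the input follows the highlight format described above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-qg-hl")

text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```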
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
``` | 8760017ecd5a9ac3c4964a553bd46c8b |
stockmark/bart-base-japanese-news | stockmark | bart | 9 | 1,401 | transformers | 6 | text2text-generation | true | true | false | mit | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['ja', 'japanese', 'bart', 'lm', 'nlp'] | false | true | true | 5,054 | false |
# bart-base-japanese-news (base-sized model)
This repository provides a Japanese BART model. The model was trained by [Stockmark Inc.](https://stockmark.co.jp)
An introductory article on the model can be found at the following URL.
[https://tech.stockmark.co.jp/blog/bart-japanese-base-news/](https://tech.stockmark.co.jp/blog/bart-japanese-base-news/)
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
# How to use the model
*NOTE:* Since we are using a custom tokenizer, please use `trust_remote_code=True` to initialize the tokenizer.
## Simple use
```python
from transformers import AutoTokenizer, BartModel
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartModel.from_pretrained(model_name)
inputs = tokenizer("今日は良い天気です。", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
## Sentence Permutation
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
# correct order text is "明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。"
text = "電車は止まる可能性があります。ですから、自宅から働きます。明日は大雨です。"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。
```
## Mask filling
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "今日の天気は<mask>のため、傘が必要でしょう。"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 今日の天気は、雨のため、傘が必要でしょう。
```
## Text generation
*NOTE:* You can use the raw model for text generation. However, the model is mostly meant to be fine-tuned on a supervised dataset.
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、人工知能と言語学の一分野である。「計算言語学」(computational linguistics)との類似もあるが、自然言語処理は工学的な視点からの言語処理をさすのに対して、計算言語学は言語学的視点を重視する手法をさす事が多い。"
inputs = tokenizer([text], max_length=512, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, min_length=0, max_length=40)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、言語学の一分野である。
```
# Training
The model was trained on Japanese News Articles.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script.
# Licenses
The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
*NOTE:* Only tokenization_bart_japanese_news.py is [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). Please see tokenization_bart_japanese_news.py for license details.
# Contact
If you have any questions, please contact us using [our contact form](https://stockmark.co.jp/contact).
# Acknowledgement
This comparison study was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
| 50a503423e8490be6d28e94b04a47095 |
naver-clova-ix/donut-base-finetuned-zhtrainticket | naver-clova-ix | vision-encoder-decoder | 11 | 202 | transformers | 0 | image-to-text | true | false | false | mit | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['donut', 'image-to-text', 'vision'] | false | true | true | 1,973 | false |
# Donut (base-sized model, fine-tuned on ZhTrainTicket)
Donut model fine-tuned on ZhTrainTicket. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg)
## Intended uses & limitations
This model is fine-tuned on ZhTrainTicket, a document parsing dataset.
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
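For convenience, a minimal inference sketch following the usage pattern from that documentation is shown below. The task prompt `<s_zhtrainticket>` is an assumption based on the fine-tuning dataset name (check the model's added special tokens if it differs), and `ticket.png` is a placeholder image path.
```python
import re

import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_name = "naver-clova-ix/donut-base-finetuned-zhtrainticket"
processor = DonutProcessor.from_pretrained(model_name)
model = VisionEncoderDecoderModel.from_pretrained(model_name)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Load a document image (placeholder path) and prepare pixel values.
image = Image.open("ticket.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Task prompt token assumed from the dataset name.
task_prompt = "<s_zhtrainticket>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# Strip special tokens and the leading task token, then convert to JSON.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```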
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-15664,
author = {Geewook Kim and
Teakgyu Hong and
Moonbin Yim and
Jinyoung Park and
Jinyeong Yim and
Wonseok Hwang and
Sangdoo Yun and
Dongyoon Han and
Seunghyun Park},
title = {Donut: Document Understanding Transformer without {OCR}},
journal = {CoRR},
volume = {abs/2111.15664},
year = {2021},
url = {https://arxiv.org/abs/2111.15664},
eprinttype = {arXiv},
eprint = {2111.15664},
timestamp = {Thu, 02 Dec 2021 10:50:44 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 26351ddf6bb66b12a5234066f6069247 |
nc33/my_awesome_wnut_model | nc33 | roberta | 16 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['wnut_17'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,455 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [facebook/muppet-roberta-base](https://huggingface.co/facebook/muppet-roberta-base) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.5607
- Recall: 0.5097
- F1: 0.5340
- Accuracy: 0.9501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2331 | 0.5333 | 0.4310 | 0.4767 | 0.9459 |
| No log | 2.0 | 426 | 0.2298 | 0.5607 | 0.5097 | 0.5340 | 0.9501 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 7d638da2433a6bf66cb023420e8c885a |
konrad-wesub/roberta-base-iphone-2 | konrad-wesub | xlm-roberta | 9 | 0 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,258 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-iphone-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1359
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.2765 | 0.8333 |
| No log | 2.0 | 54 | 0.1359 | 0.9833 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 5afdb207c8037d5fb8035b9a23084c55 |
hfl/chinese-electra-180g-base-generator | hfl | electra | 10 | 6 | transformers | 0 | fill-mask | true | true | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,918 | false |
# This model is trained on 180G of data; we recommend using it rather than the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
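As a rough usage sketch (our assumption, not part of the original release notes): the generator checkpoint can be loaded with the standard `fill-mask` pipeline in Transformers. Keep in mind that an ELECTRA generator is deliberately small, so its predictions are weaker than those of a full-size masked language model.
```python
from transformers import pipeline

# The generator is a small masked language model; [MASK] is the mask token.
fill_mask = pipeline("fill-mask", model="hfl/chinese-electra-180g-base-generator")
print(fill_mask("今天[MASK]气真好。"))
```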
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` | 687792b21a34743cd8d89ef7638deee0 |
theta/mbti-career | theta | xlm-roberta | 12 | 38 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,394 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbti-career
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6547 | 0.59 | 100 | 0.6169 |
| 0.5967 | 1.18 | 200 | 0.5943 |
| 0.5872 | 1.76 | 300 | 0.5696 |
| 0.554 | 2.35 | 400 | 0.5287 |
| 0.5041 | 2.94 | 500 | 0.4890 |
| 0.4773 | 3.53 | 600 | 0.4895 |
| 0.4691 | 4.12 | 700 | 0.4840 |
| 0.4253 | 4.71 | 800 | 0.4573 |
| 0.4002 | 5.29 | 900 | 0.4240 |
| 0.3813 | 5.88 | 1000 | 0.4031 |
| 0.3561 | 6.47 | 1100 | 0.3943 |
| 0.3359 | 7.06 | 1200 | 0.3864 |
| 0.3126 | 7.65 | 1300 | 0.3889 |
| 0.2948 | 8.24 | 1400 | 0.3869 |
| 0.2816 | 8.82 | 1500 | 0.3788 |
| 0.2522 | 9.41 | 1600 | 0.3891 |
| 0.2451 | 10.0 | 1700 | 0.3849 |
| 0.2148 | 10.59 | 1800 | 0.3784 |
| 0.2132 | 11.18 | 1900 | 0.3716 |
| 0.1882 | 11.76 | 2000 | 0.3659 |
| 0.1754 | 12.35 | 2100 | 0.3737 |
| 0.169 | 12.94 | 2200 | 0.3711 |
| 0.1559 | 13.53 | 2300 | 0.3672 |
| 0.1537 | 14.12 | 2400 | 0.3391 |
| 0.1427 | 14.71 | 2500 | 0.3516 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 3df8d2eb6fa41dc4cbf474c371a786d8 |
Mallik/distilbert-base-uncased-finetuned-emotion | Mallik | distilbert | 14 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,325 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.925
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8215 | 1.0 | 250 | 0.3033 | 0.9105 | 0.9078 |
| 0.2435 | 2.0 | 500 | 0.2128 | 0.925 | 0.9248 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
| 189f2f525dbfa58a040ed7ba74f55b44 |
mindwrapped/collaborative-filtering-movielens-copy | mindwrapped | null | 10 | 4 | keras | 1 | tabular-classification | false | false | false | ['cc0-1.0'] | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['collaborative-filtering', 'recommender', 'tabular-classification'] | false | true | true | 1,368 | false |
## Model description
This repo contains the model and the notebook on [how to build and train a Keras model for Collaborative Filtering for Movie Recommendations](https://keras.io/examples/structured_data/collaborative_filtering_movielens/).
Full credits to [Siddhartha Banerjee](https://twitter.com/sidd2006).
## Intended uses & limitations
Given a user and the movies they have rated highly in the past, this model outputs the predicted rating (between 0 and 1) that the user would give to a movie they haven't seen yet. This information can be used to find the top recommended movies for this user.
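Below is a minimal usage sketch, assuming the checkpoint follows the linked tutorial's `RecommenderNet`, which takes encoded `[user, movie]` index pairs as input; the index values here are placeholders and must come from the same encoding used at training time.
```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("mindwrapped/collaborative-filtering-movielens-copy")

# Placeholder encoded indices (must match the training-time encoding).
user_idx = 42
candidate_movie_idx = np.array([10, 25, 300])

# Build (user, movie) pairs and predict a rating in [0, 1] for each.
pairs = np.stack(
    [np.full_like(candidate_movie_idx, user_idx), candidate_movie_idx], axis=1
)
scores = model.predict(pairs).flatten()

# Rank the candidate movies by predicted rating.
ranking = candidate_movie_idx[np.argsort(scores)[::-1]]
print(ranking, np.sort(scores)[::-1])
```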
## Training and evaluation data
The dataset consists of users' ratings of specific movies, along with each movie's genres.
## Training procedure
The model was trained for 5 epochs with a batch size of 64.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
| Epochs | Train Loss | Validation Loss |
|--- |--- |--- |
| 1| 0.637| 0.619|
| 2| 0.614| 0.616|
| 3| 0.609| 0.611|
| 4| 0.608| 0.61|
| 5| 0.608| 0.609|
## Model Plot
<details>
<summary>View Model Plot</summary>
![Model Image](./model.png)
</details> | b630039a0ea8a2f8d7d9abd358be5d9d |
anas-awadalla/bart-base-finetuned-squad-infilling-lr-3e-5-decay-001 | anas-awadalla | bart | 18 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,066 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad-infilling-lr-3e-5-decay-001
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| 2ab0e4098d030a6bca2a1d57dbd65186 |
horizonial/dogcg | horizonial | null | 18 | 9 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 417 | false | ### dogcg Dreambooth model trained by horizonial with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
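A minimal generation sketch with diffusers is shown below; it assumes the instance token is `dogcg` (as in the repo name) and that a CUDA GPU is available.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "horizonial/dogcg", torch_dtype=torch.float16
).to("cuda")

# "dogcg" is assumed to be the trained instance token.
image = pipe("a photo of dogcg, studio lighting", num_inference_steps=30).images[0]
image.save("dogcg_sample.png")
```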
Sample pictures of this concept:
| e322121ea14f73e753392454a072ea84 |
Pavithra/codeparrot-ds-500sample-gpt-neo-10epoch | Pavithra | gpt_neo | 13 | 3 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,262 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-500sample-gpt-neo-10epoch
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5456
- eval_runtime: 87.6603
- eval_samples_per_second: 149.817
- eval_steps_per_second: 4.689
- epoch: 2.97
- step: 16000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| a24c3861f4466f415ec8e8925c69c0e2 |
lizaboiarchuk/results | lizaboiarchuk | bert | 9 | 11 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,743 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2872
- F1: 0.6095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3356 | 1.0 | 1033 | 0.2558 | 0.3761 |
| 0.2588 | 2.0 | 2066 | 0.2352 | 0.5246 |
| 0.2252 | 3.0 | 3099 | 0.2292 | 0.5996 |
| 0.2044 | 4.0 | 4132 | 0.2417 | 0.5950 |
| 0.189 | 5.0 | 5165 | 0.2433 | 0.6102 |
| 0.1718 | 6.0 | 6198 | 0.2671 | 0.5894 |
| 0.1627 | 7.0 | 7231 | 0.2686 | 0.6319 |
| 0.1513 | 8.0 | 8264 | 0.2779 | 0.6079 |
| 0.1451 | 9.0 | 9297 | 0.2848 | 0.6195 |
| 0.1429 | 10.0 | 10330 | 0.2872 | 0.6095 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1caf4a466cee428344f2407cfdc9190e |
charlemagne/distilbert-base-uncased-new3-cola | charlemagne | distilbert | 13 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,469 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-new3-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 164 | 0.4312 | 0.8747 |
| No log | 2.0 | 328 | 0.2722 | 0.9290 |
| No log | 3.0 | 492 | 0.2424 | 0.9404 |
| 0.4446 | 4.0 | 656 | 0.2189 | 0.9450 |
| 0.4446 | 5.0 | 820 | 0.2224 | 0.9465 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu111
- Datasets 2.1.0
- Tokenizers 0.11.6
| 21210d92a2e0dcbd44b7e611b6dd9544 |
vicl/canine-c-finetuned-cola | vicl | canine | 11 | 113 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,540 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-cola
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6246
- Matthews Correlation: 0.0990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6142 | 1.0 | 535 | 0.6268 | 0.0 |
| 0.607 | 2.0 | 1070 | 0.6234 | 0.0 |
| 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 |
| 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 |
| 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 82f51bd3147df6a80306ffc984bf5efe |
apatidar0/anil_bert-finetuned-ner | apatidar0 | bert | 12 | 12 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,523 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anil_bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9352
- Recall: 0.9517
- F1: 0.9434
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897 | 1.0 | 1756 | 0.0690 | 0.9246 | 0.9325 | 0.9285 | 0.9820 |
| 0.0329 | 2.0 | 3512 | 0.0629 | 0.9301 | 0.9492 | 0.9395 | 0.9862 |
| 0.0172 | 3.0 | 5268 | 0.0610 | 0.9352 | 0.9517 | 0.9434 | 0.9862 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 3a726a2d8192112ce94cfffc45c7ade3 |
mgoudarz/distilbert-base-uncased-finetunded-emotion | mgoudarz | distilbert | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,422 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetunded-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1584
- Accuracy: 0.9365
- F1: 0.9365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| No log | 1.0 | 250 | 0.2735 | 0.9155 | 0.9134 |
| No log | 2.0 | 500 | 0.1727 | 0.932 | 0.9321 |
| No log | 3.0 | 750 | 0.1584 | 0.9365 | 0.9365 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| c12459ab86168b7ab2ab4b63b5ffd970 |
superb/hubert-base-superb-sid | superb | hubert | 5 | 72 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | ['en'] | ['superb'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['speech', 'audio', 'hubert', 'audio-classification'] | false | true | true | 3,038 | false |
# Hubert-Base for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.8142` | `0.8071` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` | c243f8c4ed9bc892fb2c7fe2f042e87e |
KenP/mt5-small-finetuned-amazon-en-es | KenP | mt5 | 8 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,645 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KenP/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0378
- Validation Loss: 3.3712
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.9112 | 4.3131 | 0 |
| 5.8947 | 3.7701 | 1 |
| 5.1149 | 3.5826 | 2 |
| 4.6940 | 3.5080 | 3 |
| 4.4064 | 3.4388 | 4 |
| 4.2301 | 3.4012 | 5 |
| 4.1037 | 3.3755 | 6 |
| 4.0378 | 3.3712 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| ad5b59e2887f8d064fffeb9371eb0553 |
radev/xlm-roberta-base-finetuned-panx-de | radev | xlm-roberta | 18 | 14 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1807 | 0.8065 |
| 0.2218 | 2.0 | 526 | 0.1365 | 0.8485 |
| 0.2218 | 3.0 | 789 | 0.1345 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| 7c90f9eba006b620e6fade188352dae3 |
ethanyt/guwen-quote | ethanyt | roberta | 7 | 11 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch', 'quotation detection'] | false | true | true | 1,098 | false |
# Guwen Quote
A Classical Chinese Quotation Detector.
Note: There are some problems with decoding when using the default sequence classification model. Use the CRF model to achieve the best results. For the CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
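For a quick qualitative check only, the checkpoint can still be loaded with the plain token-classification pipeline (greedy decoding, so expect the issues noted above; use the CRF decoding from Guwen Models for best results):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ethanyt/guwen-quote",
    aggregation_strategy="simple",
)
print(tagger("子曰学而时习之不亦说乎"))
```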
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> | 3dee1dbfa6b9e1e16627e9cb69fe46a5 |
Hemanth045/wav2vec2-large-xls-r-300m-hindi-colab | Hemanth045 | wav2vec2 | 27 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,369 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3273
- Wer: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.6006 | 44.42 | 400 | 2.3273 | 0.9698 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| ade29f80e72152a9a4e8ce1ea7f43d28 |
gokuls/distilbert_sa_GLUE_Experiment_data_aug_cola_96 | gokuls | distilbert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,734 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_cola_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Matthews Correlation: 0.1072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5845 | 1.0 | 835 | 0.6274 | 0.1072 |
| 0.4862 | 2.0 | 1670 | 0.6843 | 0.1085 |
| 0.4221 | 3.0 | 2505 | 0.7307 | 0.0681 |
| 0.3829 | 4.0 | 3340 | 0.7969 | 0.1046 |
| 0.3557 | 5.0 | 4175 | 0.8648 | 0.0959 |
| 0.3328 | 6.0 | 5010 | 0.8932 | 0.0792 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| a060ee1a107bf184dc2c6762d2ceade3 |
yangwang825/ecapa-tdnn-vox2 | yangwang825 | null | 6 | 1 | speechbrain | 0 | null | true | false | false | apache-2.0 | ['en'] | ['voxceleb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speechbrain', 'embeddings', 'Speaker', 'Verification', 'Identification', 'pytorch', 'ECAPA-TDNN'] | false | true | true | 4,131 | false |
# Speaker Identification with ECAPA-TDNN embeddings on Voxceleb
This repository provides a pretrained ECAPA-TDNN model using SpeechBrain. The system can also be used to extract speaker embeddings. Since we could not find any SpeechBrain- or HuggingFace-compatible checkpoints trained only on the VoxCeleb2 development data, we decided to pre-train an ECAPA-TDNN system from scratch.
# Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss.
We use FBank features (16kHz, 25ms frame length, 10ms hop length, 80 filter-bank channels) as the input. The model was trained with an initial learning rate of 0.001 and a batch size of 512 under a cyclical learning rate (CLR) policy for 20 epochs on 4 A100 GPUs. We employ additive noise and reverberation from the [MUSAN](http://www.openslr.org/17/) and [RIR](http://www.openslr.org/28/) datasets to enrich the supervised information. Pre-training takes approximately ten days for the ECAPA-TDNN model.
# Performance
**VoxCeleb1-O** is the original verification test set from VoxCeleb1, consisting of 40 speakers; all speakers whose names start with "E" are reserved for testing. **VoxCeleb1-E** uses the entire VoxCeleb1 dataset, covering 1251 speakers. **VoxCeleb1-H** is a hard version of the evaluation set, consisting of 552536 pairs from 1190 speakers, where each pair shares the same nationality and gender. There are 18 nationality-gender combinations, each with at least 5 individuals.
| Splits | Backend | S-norm | EER(%) | minDCF(0.01) |
|:-------------:|:--------------:|:--------------:|:--------------:|:--------------:|
| VoxCeleb1-O | cosine | no | 1.29 | 0.13 |
| VoxCeleb1-O | cosine | yes | 1.19 | 0.11 |
| VoxCeleb1-E | cosine | no | 1.42 | 0.16 |
| VoxCeleb1-E | cosine | yes | 1.31 | 0.14 |
| VoxCeleb1-H | cosine | no | 2.66 | 0.26 |
| VoxCeleb1-H | cosine | yes | 2.48 | 0.23 |
- VoxCeleb1-O: includes 37611 test pairs with 40 speakers.
- VoxCeleb1-E: includes 579818 test pairs with 1251 speakers.
- VoxCeleb1-H: includes 550894 test pairs with 1190 speakers.
# Compute the speaker embeddings
The system is trained with recordings sampled at 16kHz (single channel).
```python
import torch
import torchaudio
from speechbrain.pretrained.interfaces import Pretrained
from speechbrain.pretrained import EncoderClassifier
class Encoder(Pretrained):
MODULES_NEEDED = [
"compute_features",
"mean_var_norm",
"embedding_model"
]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def encode_batch(self, wavs, wav_lens=None, normalize=False):
# Manage single waveforms in input
if len(wavs.shape) == 1:
wavs = wavs.unsqueeze(0)
# Assign full length if wav_lens is not assigned
if wav_lens is None:
wav_lens = torch.ones(wavs.shape[0], device=self.device)
# Storing waveform in the specified device
wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
wavs = wavs.float()
# Computing features and embeddings
feats = self.mods.compute_features(wavs)
feats = self.mods.mean_var_norm(feats, wav_lens)
embeddings = self.mods.embedding_model(feats, wav_lens)
if normalize:
embeddings = self.hparams.mean_var_norm_emb(
embeddings,
torch.ones(embeddings.shape[0], device=self.device)
)
return embeddings
classifier = Encoder.from_hparams(
source="yangwang825/ecapa-tdnn-vox2"
)
signal, fs = torchaudio.load('spk1_snt1.wav')
embeddings = classifier.encode_batch(signal)
print(embeddings.shape)  # torch.Size([1, 1, 192])
```
We will release our training results (models, logs, etc) shortly.
# References
1. Ravanelli et al., SpeechBrain: A General-Purpose Speech Toolkit, 2021
2. Desplanques et al., ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification, 2020 | 40078ea9724044e169dea9a1f055ee84 |
OWG/bert-base-uncased | OWG | bert | 4 | 4 | transformers | 0 | fill-mask | false | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 1,389 | false |
# BERT base model (uncased)
## Model description
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
## Original implementation
Follow [this link](https://huggingface.co/bert-base-uncased) to see the original implementation.
## How to use
Download the model by cloning the repository via `git clone https://huggingface.co/OWG/bert-base-uncased`.
Then you can use the model with the following code:
```python
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
options = SessionOptions()
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
session = InferenceSession("path/to/model.onnx", sess_options=options)
session.disable_fallback()
text = "Replace me by any text you want to encode."
input_ids = tokenizer(text, return_tensors="pt", return_attention_mask=True)
inputs = {k: v.cpu().detach().numpy() for k, v in input_ids.items()}
outputs_name = session.get_outputs()[0].name
outputs = session.run(output_names=[outputs_name], input_feed=inputs)
```
| d46c38fe5324fa0bc8b6bd94e00c31cd |
sw005320/aidatatang_200zh_conformer | sw005320 | null | 35 | 5 | espnet | 2 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['zh'] | ['aidatatang_200zh'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 22,467 | false |
## ESPnet2 ASR model
### `sw005320/aidatatang_200zh_conformer`
This model was trained by Shinji Watanabe using aidatatang_200zh recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 8ab3d9f2191f250cb62deff222d2e6addb3842dc
pip install -e .
cd egs2/aidatatang_200zh/asr1
./run.sh --skip_data_prep false --skip_train true --download_model sw005320/aidatatang_200zh_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 24 23:34:58 EST 2021`
- python version: `3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.7.1`
- Git hash: `a5bacd349a47889aef795f999563018cf201ae64`
- Commit date: `Wed Dec 22 14:08:29 2021 -0500`
## asr_train_asr_conformer_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|24216|81.5|18.5|0.0|0.0|18.5|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|48144|79.0|21.0|0.0|0.0|21.0|21.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|234524|96.6|3.0|0.5|0.1|3.6|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|468933|95.9|3.6|0.4|0.2|4.3|21.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 4000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 我
- 的
- 你
- 么
- 不
- 是
- 了
- 一
- 有
- 天
- 什
- 好
- 在
- 个
- 怎
- 吗
- 话
- 要
- 给
- 电
- 上
- 没
- 人
- 说
- 到
- 啊
- 就
- 这
- 时
- 来
- 下
- 想
- 打
- 点
- 去
- 还
- 看
- 道
- 多
- 明
- 那
- 知
- 以
- 今
- 能
- 会
- 哪
- 都
- 可
- 大
- 吧
- 机
- 样
- 里
- 十
- 现
- 们
- 过
- 吃
- 开
- 家
- 回
- 发
- 中
- 呢
- 听
- 候
- 为
- 也
- 日
- 爱
- 歌
- 三
- 起
- 小
- 二
- 心
- 子
- 手
- 生
- 最
- 儿
- 学
- 放
- 信
- 女
- 号
- 几
- 和
- 老
- 晚
- 少
- 车
- 叫
- 快
- 用
- 自
- 年
- 睡
- 问
- 事
- 后
- 五
- 乐
- 安
- 出
- 找
- 帮
- 意
- 觉
- 气
- 国
- 得
- 情
- 请
- 早
- 地
- 做
- 首
- 真
- 公
- 近
- 对
- 办
- 很
- 行
- 己
- 呀
- 八
- 友
- 如
- 六
- 节
- 喜
- 新
- 欢
- 西
- 间
- 月
- 班
- 他
- 网
- 方
- 分
- 播
- 笑
- 查
- 息
- 名
- 四
- 成
- 东
- 美
- 零
- 市
- 饭
- 世
- 朋
- 玩
- 州
- 果
- 才
- 七
- 别
- 把
- 谁
- 九
- 再
- 平
- 太
- 干
- 思
- 关
- 谢
- 高
- 语
- 理
- 些
- 界
- 着
- 长
- 钱
- 动
- 曲
- 感
- 聊
- 片
- 何
- 面
- 男
- 音
- 工
- 南
- 午
- 本
- 通
- 火
- 经
- 路
- 星
- 唱
- Q
- 业
- 讲
- 英
- 北
- 服
- 短
- 妈
- 海
- 文
- 跟
- 作
- 票
- 只
- 等
- 刚
- 码
- 字
- 影
- 附
- 婆
- 见
- 又
- 祝
- 无
- 该
- 提
- 末
- 让
- 法
- 定
- 买
- 告
- 照
- 体
- 考
- 床
- 醒
- 记
- 前
- 题
- 走
- 加
- 主
- 从
- 视
- 张
- 身
- 两
- 钟
- 京
- 于
- 收
- 阳
- 哈
- 店
- 山
- 院
- 站
- 百
- 宝
- 所
- 诉
- 期
- 之
- 嘛
- 夜
- 第
- 游
- 比
- 系
- 昨
- 费
- 交
- 水
- 应
- 次
- 周
- 亲
- 联
- 全
- 福
- 江
- 孩
- 区
- 广
- 头
- 接
- O
- 校
- 已
- 空
- 门
- 认
- 相
- 度
- 实
- 活
- 色
- 假
- 白
- 算
- 外
- 流
- 啦
- 花
- 然
- 结
- 每
- 休
- 边
- 部
- 位
- 场
- 半
- 王
- 声
- 件
- 力
- 金
- 重
- 识
- 正
- 华
- 光
- 衣
- 载
- 死
- 价
- 翻
- 图
- 城
- 脑
- 同
- 久
- 译
- 特
- 物
- 搜
- 务
- 报
- 线
- 哦
- 卡
- E
- 当
- A
- 爸
- 圣
- 完
- 幺
- 合
- P
- 雨
- 黄
- 种
- 司
- 直
- I
- 她
- 哥
- 书
- 银
- 试
- 解
- 穿
- 酒
- 准
- 换
- 望
- 被
- S
- 原
- 内
- 诞
- 带
- 介
- 口
- 清
- N
- 马
- 习
- 否
- 置
- 啥
- 索
- 戏
- 与
- 懂
- 飞
- 需
- 性
- 错
- 送
- 级
- 器
- 单
- 离
- 远
- 备
- 师
- 课
- 注
- 因
- 难
- 其
- 像
- 元
- 消
- 表
- 便
- 球
- 风
- 教
- 故
- 科
- 李
- 常
- 林
- 龙
- 呵
- 数
- 代
- 总
- 忘
- 商
- 变
- 婚
- 苹
- 红
- 格
- 坐
- 绍
- 答
- 量
- 冷
- 青
- 询
- 春
- 神
- 省
- 蛋
- 姐
- 陪
- 兴
- 利
- 台
- 句
- 万
- 计
- 保
- 刘
- 传
- 深
- 管
- 运
- 德
- 医
- 容
- 品
- 越
- 亮
- 词
- 河
- 化
- 宁
- 始
- 武
- 希
- 洗
- 复
- 设
- 处
- 技
- 房
- T
- 您
- 取
- 眼
- 县
- 笨
- 术
- 温
- 永
- 受
- 更
- 先
- 尔
- 程
- 彩
- 演
- 忙
- 专
- 愿
- 进
- 湖
- 建
- 况
- 伤
- 喝
- 底
- 卖
- 功
- 录
- 改
- H
- 剧
- 预
- 梦
- L
- 达
- 连
- 馆
- 包
- 写
- 客
- C
- 汉
- 条
- G
- 幸
- 民
- 读
- 职
- 目
- 但
- 贝
- 妹
- 资
- 较
- 雪
- 赛
- 除
- 招
- 园
- 住
- 超
- 汽
- 病
- B
- 软
- 反
- 而
- 证
- 员
- 黑
- 庆
- D
- 求
- 排
- 装
- 岁
- 顾
- 产
- 航
- 言
- 斯
- 拨
- 历
- 烦
- 及
- 药
- 入
- 式
- 军
- 餐
- 志
- 至
- 双
- 米
- 版
- 掉
- 千
- 者
- 充
- 微
- 失
- 转
- M
- 亚
- 克
- 座
- 丽
- 络
- 战
- 使
- 猪
- 具
- 闹
- 限
- 址
- 基
- 油
- 漂
- 陈
- Y
- 川
- 强
- 挺
- 奇
- 杰
- 政
- 向
- 速
- 康
- 差
- 贵
- 搞
- 义
- 奖
- 份
- 户
- 楼
- 苏
- 任
- 健
- 易
- 毛
- 型
- 石
- 礼
- 款
- 持
- 卫
- 怕
- 恋
- 邮
- 集
- R
- 铁
- 圳
- 拿
- 云
- 队
- 鱼
- 慢
- 顺
- 害
- 属
- 傻
- 营
- 菜
- 货
- 麻
- 咋
- 坏
- 冒
- 累
- 杨
- 闻
- 治
- 选
- 段
- K
- 香
- 闭
- 兰
- 牌
- 局
- 留
- 舍
- 非
- 推
- 室
- 简
- 拉
- 修
- 终
- 郑
- 切
- U
- 将
- 村
- 沙
- 存
- 帅
- 诗
- 率
- 密
- 巴
- 频
- 士
- 初
- 楚
- 股
- 热
- 古
- 制
- 支
- 肉
- 岛
- 统
- 适
- 肥
- 鸡
- 调
- 街
- 类
- 牛
- 导
- 农
- 值
- 食
- 镇
- 棍
- 移
- 韩
- W
- 嗯
- 订
- 呼
- 命
- V
- 必
- 宿
- 皮
- 升
- 确
- 随
- 步
- 育
- 标
- 唐
- 精
- 决
- 木
- 由
- 弟
- 往
- 肯
- 够
- 或
- 指
- 阿
- 象
- 料
- 念
- 助
- 许
- 共
- 母
- 约
- 罗
- 板
- 秋
- 配
- 魔
- 宜
- 般
- 荐
- 扰
- 舒
- 逼
- 狗
- 嘿
- 博
- 售
- 满
- 疼
- 脸
- 整
- 抱
- 季
- 减
- 养
- 怀
- 免
- 未
- 乘
- F
- 社
- 妇
- 列
- 爷
- 删
- 旦
- 弄
- 概
- 停
- 拜
- 维
- 领
- 示
- 套
- 汇
- 昌
- 晨
- 痛
- 购
- 奥
- 铃
- 案
- 济
- 鬼
- 背
- 港
- 待
- 浪
- 桥
- 血
- 冬
- 烧
- 优
- 拍
- 际
- 急
- 杭
- 称
- 遇
- 赶
- 旅
- 智
- 角
- 财
- 玉
- 团
- 形
- 论
- 静
- 景
- 退
- 普
- 呗
- 乡
- 参
- 胡
- 伦
- 讨
- 艺
- 辈
- 毒
- 此
- 轻
- 苦
- 咱
- 画
- 泰
- 宾
- 雄
- 销
- 奶
- 突
- 波
- 各
- 冰
- 块
- 夏
- 低
- 兵
- 厅
- 羊
- 杀
- 紧
- 泉
- 朝
- 谈
- 足
- 孕
- 夫
- 厂
- 聪
- 续
- 庄
- 诺
- 牙
- 质
- 立
- 依
- 仙
- 跑
- 盘
- 豆
- 它
- 怪
- 猜
- 漫
- 毕
- 兄
- 颜
- 险
- 厦
- 验
- 防
- 登
- 敢
- 乖
- 晓
- 护
- 迎
- 逗
- 摩
- 佳
- 观
- 骗
- 烟
- 细
- 临
- 惠
- 围
- 寞
- 效
- 源
- 寂
- 肚
- 暖
- 饺
- 斗
- 模
- 端
- 疗
- 付
- 绝
- 秘
- 展
- 乎
- 按
- 富
- 靠
- 范
- 规
- 刻
- 折
- 娘
- 厌
- 申
- 章
- 补
- 笔
- 锅
- 破
- 田
- 齐
- 滨
- 皇
- 族
- 典
- 史
- 左
- 蓝
- 灵
- 澡
- 秀
- 诚
- 土
- 测
- 凤
- 剑
- 响
- 倒
- 睛
- 惯
- 乌
- 币
- 扣
- 吴
- 输
- 徐
- 弃
- 纪
- 堂
- 环
- 甲
- 菲
- 缘
- 讯
- 根
- 落
- 启
- 泡
- 饿
- 积
- 府
- 递
- 绩
- 择
- 吉
- 布
- 显
- 童
- 租
- 洋
- 组
- 划
- 编
- 签
- 舞
- 困
- 贴
- 负
- 派
- 裤
- 担
- 桂
- 却
- 丝
- 丰
- 箱
- 赵
- 群
- 序
- 训
- 酸
- 惜
- 圆
- 评
- 压
- 俩
- 状
- 官
- 酷
- 鲁
- 孙
- 草
- 极
- 势
- 斤
- 腾
- 泽
- 素
- 尽
- 姓
- 屏
- 聚
- 莞
- 乱
- 雅
- 尼
- 趣
- 伟
- 肤
- 勇
- 右
- 徽
- 投
- 丹
- 尾
- 托
- 争
- 鸟
- 激
- 印
- 良
- 眠
- 松
- 跳
- 途
- 篮
- 粉
- 脚
- 屁
- 鞋
- 麦
- 则
- 估
- 津
- 努
- 距
- 胸
- 央
- 珍
- 盖
- 哭
- 洲
- 练
- 敏
- 雷
- 曾
- 恩
- 挂
- 据
- 览
- 耳
- 材
- 泪
- 吸
- 味
- 劳
- 父
- 孤
- 玛
- 旁
- 阴
- 态
- 创
- 树
- 脱
- 研
- 驾
- 拾
- 灯
- 虎
- 爆
- 嘉
- 湾
- 躺
- 猫
- 莫
- 昆
- 痘
- 阅
- 射
- 刷
- 卓
- 珠
- 峰
- 胖
- 坚
- 造
- 举
- 棒
- 梅
- 引
- 吵
- 蒙
- 详
- 借
- 瓜
- 池
- 束
- 芳
- 淘
- 寻
- 释
- 沈
- 虑
- 锦
- 胜
- 荣
- 委
- 默
- 另
- 浏
- 并
- 检
- 冠
- 独
- 厉
- 顶
- 钓
- 骂
- 且
- 欧
- 威
- 熟
- 获
- 兽
- 严
- 炎
- 含
- 厕
- 盛
- 翼
- 盟
- 余
- 姨
- 洛
- 映
- 狼
- 谅
- 众
- 宽
- 断
- 止
- 狂
- 凉
- 姑
- 辉
- 若
- 册
- 谷
- 幻
- 篇
- 瓶
- 席
- 恐
- 柔
- 迪
- 供
- 追
- 控
- 爽
- 互
- 嫁
- 宋
- 宫
- 瑞
- 滚
- 增
- 额
- 页
- 刀
- 娱
- 茶
- 钢
- 疯
- 梁
- 承
- 娜
- 须
- 陆
- 燕
- 迟
- 君
- 恶
- 遍
- 纸
- 项
- 丁
- 腿
- 误
- 殊
- 迅
- 锁
- 宇
- 媳
- 培
- 居
- 寄
- 纯
- 嘴
- 浙
- 境
- 搭
- 杯
- 插
- 朱
- 溪
- 甘
- 权
- 窝
- 警
- 糖
- 迷
- 圈
- 凯
- 帝
- 暴
- 逛
- 艳
- 击
- 颗
- 坦
- 杂
- 冲
- 谓
- 救
- 轮
- 晕
- 虽
- 塔
- 叔
- 凰
- 懒
- 议
- 肖
- 郎
- 辛
- 透
- 拥
- 鼠
- 顿
- 批
- 兔
- 尚
- 聘
- 藏
- 赚
- 继
- 享
- 欺
- 潮
- 即
- 甜
- 骨
- 悲
- 幕
- 滴
- 闲
- 液
- 缺
- 琴
- 蜜
- 善
- 暗
- 镜
- 蔡
- 吹
- 核
- 忆
- 键
- 辑
- 岗
- 例
- 涛
- 宏
- 刺
- 郭
- 降
- 秦
- 剩
- 绿
- 桌
- 咖
- 呐
- 叶
- 贸
- 架
- 账
- 亡
- 佛
- 哎
- 乳
- 归
- 忍
- 异
- 侠
- 龄
- 炒
- 洁
- 似
- 虚
- 贷
- 征
- 抽
- 败
- 枪
- 幼
- 丫
- 危
- 慰
- 究
- 婷
- 肃
- 箭
- 灰
- 届
- 律
- 秒
- 淡
- 偷
- 炫
- 鲜
- 浦
- 萨
- 旧
- 硬
- 操
- 混
- 施
- 散
- 咨
- 妻
- 吻
- 榜
- 呆
- 废
- 野
- 糕
- 骑
- 炼
- 震
- 恭
- 悔
- 跨
- 曼
- 啡
- 俊
- 晶
- 胃
- 汤
- 尊
- 貌
- 封
- 羽
- 赞
- 尸
- 隐
- 丢
- 霸
- 醉
- 盗
- 盐
- 浩
- 著
- 档
- 赢
- 幽
- 责
- 鼻
- 辣
- 恒
- 朵
- 慕
- 旗
- 娃
- 饰
- 仁
- 亦
- 竟
- 柳
- 郁
- 唯
- 夕
- 钻
- 均
- 劲
- 庭
- 巧
- 饮
- 涨
- 辆
- 傅
- 企
- 趟
- 避
- 党
- 染
- 扬
- 玲
- 筋
- 烤
- 桃
- 唉
- 慧
- 欲
- 寒
- 闷
- 某
- 恨
- 私
- 淮
- 惊
- 弱
- 弹
- 沉
- 兼
- 弯
- 残
- 偶
- 锋
- 贺
- 咯
- 纳
- 戴
- 抢
- 宗
- 浴
- 宵
- 莲
- 嗨
- 喊
- 奕
- 壁
- 症
- 冻
- 致
- 屋
- 喽
- 伊
- 绵
- 玫
- 固
- 籍
- 监
- 耐
- 井
- 寝
- 露
- 虫
- 盒
- 凡
- 摇
- 傲
- 烈
- 姿
- 陕
- 裸
- 袋
- 帐
- 凌
- 寿
- 茂
- 鹏
- 寓
- 柴
- 妞
- 森
- 既
- 紫
- 萝
- 层
- 苗
- 腊
- 邓
- 宣
- 锡
- 袜
- 陌
- 狮
- 碰
- 晴
- 塘
- 妃
- 祥
- 苍
- 针
- 敌
- 腰
- 犯
- 欠
- 垃
- 卸
- 迹
- 暑
- 祖
- 泳
- 阵
- 熊
- 励
- 澳
- 添
- 拳
- 岳
- 益
- 瘦
- 虹
- 圾
- 植
- 坡
- 攻
- 略
- 墙
- 描
- 遗
- 噢
- 窗
- 吐
- 肌
- 陵
- 逃
- 浮
- 摸
- 戒
- 哟
- 翰
- 勿
- 库
- 涯
- 妖
- 宠
- 脾
- 革
- 探
- 糊
- 采
- 惹
- 衡
- 赤
- 魏
- 羡
- 综
- 舟
- 疆
- 痴
- 催
- 朗
- 坛
- 悠
- 岭
- 驶
- 括
- 嘻
- 辽
- 粥
- 煮
- 灭
- 杜
- 域
- 令
- 替
- 翔
- 坤
- 潘
- 抓
- 铜
- 构
- 卷
- 茫
- 丑
- 涂
- 掌
- 饱
- 肝
- 疾
- 罩
- 谱
- 愚
- 抗
- 琳
- 夸
- 汪
- 墨
- 沟
- 翅
- 肠
- 患
- 柏
- 僵
- 稳
- 延
- 胆
- 伴
- 爬
- 滋
- 歉
- 轩
- 尿
- 铺
- 忠
- 黎
- 膀
- 邯
- 郸
- 愉
- 霉
- 翁
- 妙
- 隆
- 鸭
- 锻
- 涵
- 挣
- 副
- 罪
- 穷
- 恢
- 巨
- 吓
- 眉
- 棉
- 汗
- 溜
- 奏
- 滩
- 愁
- X
- 执
- 霞
- 魂
- 姆
- 摄
- 偏
- 纠
- 瑰
- 洪
- 协
- 牧
- 飘
- 炸
- 悦
- 艾
- 织
- 敬
- 驹
- 欣
- 董
- 邦
- 勒
- 守
- 伙
- 狐
- 税
- 湘
- 遥
- 储
- 脏
- 坊
- 腐
- 横
- 仔
- 仪
- 判
- 忽
- 哇
- 罚
- 爹
- 怖
- 竹
- 孔
- 捡
- 挑
- 肿
- 漠
- 尘
- 焦
- 塞
- 熬
- 谊
- 樱
- 返
- 莉
- 堵
- 捷
- 惑
- 绕
- 蛇
- 竞
- 耍
- 违
- 卧
- 蝶
- J
- 俗
- 滑
- 占
- 怜
- 舅
- 乔
- 泸
- 臭
- 策
- 骚
- 莱
- 岩
- 魅
- 兑
- 姥
- 兆
- 萍
- 烂
- 损
- 述
- 撒
- 烫
- 炮
- 忧
- 遵
- 桑
- 俺
- 彭
- 净
- 胶
- 柯
- 绑
- 碟
- 卜
- 饼
- 船
- 佩
- 妆
- 齿
- 厚
- 娟
- 醋
- 丘
- 恼
- 萧
- 析
- 润
- 潭
- 番
- 鹰
- 葡
- 萄
- 唤
- 胎
- 逊
- 峡
- 舰
- 障
- 伯
- 猴
- 膜
- 访
- 贤
- 耀
- 晒
- 狠
- 豪
- 剪
- 帖
- 幂
- 融
- 诱
- 韶
- 晋
- 拼
- 洞
- 氧
- 察
- 裁
- 寨
- 熙
- 喂
- 拖
- 污
- 乾
- 湿
- 嫌
- 拒
- 蕉
- 哲
- 薇
- 绒
- 婴
- 莎
- 稿
- 瞎
- 寺
- 徒
- 伞
- 碎
- 阜
- 填
- 琪
- 敦
- 柜
- 侣
- 搬
- 孟
- 蓉
- 筒
- 偿
- 献
- 径
- 畅
- 粤
- 悟
- 隔
- 赖
- 慈
- 哄
- 襄
- 扮
- 睁
- 彻
- 陶
- 瓷
- 荷
- 寸
- 牵
- 痒
- 芝
- 繁
- 倍
- 闪
- 梧
- 怒
- 蝴
- 嵩
- 赣
- 嘞
- 狱
- 猛
- 咳
- 媒
- 斌
- 斑
- 奋
- 叉
- 龟
- 贱
- 疑
- 暂
- 靓
- 叹
- 仓
- 撞
- 姜
- 疤
- 矿
- 芬
- 勤
- 纱
- 帆
- 迁
- 囧
- 佑
- 囊
- 侯
- 鼓
- 葛
- 沃
- 莹
- 诊
- 筑
- 酱
- 咬
- 糟
- 拯
- 鹤
- 驴
- 胞
- 枝
- 俄
- 呃
- 鹿
- 磨
- 姚
- 灾
- 扫
- 荡
- 吊
- 犬
- 菊
- 茹
- 链
- 嫉
- 妒
- 旺
- 夺
- 裙
- 湛
- 氏
- 鞍
- 抵
- 娇
- 耶
- 截
- 辞
- 硫
- 禁
- 怡
- 跌
- 刮
- 苑
- 媛
- 摆
- 盾
- 械
- 旋
- 卢
- 霆
- 驰
- 擦
- 符
- 肺
- 谜
- 霍
- 仅
- 迈
- 碗
- 邪
- 曹
- 咪
- 煌
- 疫
- 屠
- 握
- 奔
- Z
- 燃
- 沧
- 谦
- 馨
- 嫖
- 阻
- 冯
- 振
- 雕
- 闯
- 薄
- 宙
- 倾
- 嗽
- 椒
- 墓
- 尤
- 夹
- 潇
- 骤
- 壮
- 屈
- 颖
- 菠
- 吞
- 鸣
- 渴
- 堰
- 厨
- 督
- 驻
- 腹
- 岸
- 蛮
- 翠
- 肾
- 娼
- 券
- 尖
- 丸
- 鸿
- 厘
- 召
- 劝
- 牡
- 韦
- 拔
- 灏
- 弦
- 萌
- 惩
- 倩
- 诸
- 扎
- 庙
- 炉
- 潜
- 措
- 磊
- 脂
- 郊
- 虾
- 霜
- 猎
- 蝎
- 玄
- 钰
- 审
- 蜂
- 巷
- 敷
- 拟
- 钥
- 匙
- 婉
- 纽
- 芜
- 贾
- 串
- 靖
- 抛
- 彼
- 亏
- 挽
- 贼
- 穴
- 授
- 鼎
- 孝
- 玮
- 氓
- 劫
- 俞
- 谎
- 莆
- 隋
- 钠
- 赔
- 谐
- 纶
- 闰
- 昏
- 逆
- 璇
- 樊
- 禽
- 宅
- 碳
- 妮
- 亭
- 杆
- 蠢
- 鄙
- 蜀
- 阶
- 贫
- 辰
- 盼
- 呜
- 芦
- 株
- 腔
- 巾
- 羞
- 堡
- 亿
- 踩
- 憾
- 浓
- 阔
- 塑
- 趋
- 蓄
- 桶
- 葱
- 菇
- 咒
- 蟹
- 肩
- 柿
- 缓
- 漳
- 祸
- 挤
- 巢
- 抚
- 詹
- 豫
- 俱
- 悉
- 溶
- 粒
- 谭
- 诛
- 贡
- 沿
- 躲
- 慌
- 芙
- 蒋
- 乃
- 雀
- 姻
- 岂
- 悄
- 辕
- 斜
- 捕
- 扇
- 割
- 啤
- 纲
- 纤
- 祛
- 躁
- 殖
- 珊
- 氢
- 允
- 丈
- 蹈
- 邀
- 哼
- 坑
- 吾
- 淋
- 扩
- 愤
- 潍
- 尺
- 耗
- 鉴
- 闽
- 乙
- 渭
- 触
- 撑
- 咸
- 灿
- 缩
- 蔬
- 凑
- 渡
- 梭
- 粗
- 袁
- 菌
- 妓
- 稍
- 辐
- 哀
- 浆
- 厢
- 荆
- 踪
- 桐
- 邢
- 蜡
- 奉
- 淑
- 洒
- 扁
- 蕾
- 燥
- 硕
- 牢
- 蛙
- 仍
- 侵
- 稀
- 芒
- 吕
- 跪
- 绪
- 誓
- 旭
- 阁
- 屌
- 凭
- 裹
- 崇
- 纬
- 援
- 怨
- 茄
- 埋
- 棋
- 誉
- 瑜
- 蹲
- 扯
- 跃
- 昧
- 螺
- 毅
- 叮
- 喷
- 壶
- 喉
- 脆
- 瓦
- 碧
- 奴
- 煤
- 伍
- 娶
- 雁
- 骄
- 泣
- 眷
- 屯
- 赏
- 覆
- 揍
- 绯
- 逸
- 屎
- 彦
- 辨
- 攀
- 涉
- 泥
- 廊
- 菱
- 薛
- 衍
- 荒
- 铭
- 沂
- 麟
- 咏
- 扑
- 祈
- 喔
- 磁
- 歇
- 栋
- 沫
- 漏
- 玻
- 璃
- 逝
- 葵
- 溃
- 堆
- 锐
- 楠
- 毫
- 谋
- 勾
- 梯
- 氯
- 杏
- 赌
- 鑫
- 崔
- 颠
- 邱
- 肪
- 掘
- 昭
- 悬
- 奈
- 筷
- 轨
- 诵
- 葫
- 挡
- 梨
- 缠
- 僧
- 抬
- 邻
- 栏
- 饶
- 庚
- 灌
- 呦
- 摊
- 狄
- 汕
- 缴
- 罢
- 瞌
- 腺
- 辖
- 摔
- 棵
- 弗
- 琼
- 揭
- 淀
- 仑
- 粮
- 扔
- 剂
- 邵
- 辅
- 悍
- 袖
- 侨
- 巡
- 仗
- 逢
- 挥
- 翘
- 柱
- 狸
- 赫
- 耽
- 押
- 昂
- 瘤
- 枣
- 癌
- 伏
- 秤
- 脉
- 穹
- 敲
- 贪
- 促
- 拆
- 勉
- 祷
- 弊
- 膏
- 禾
- 契
- 挨
- 纵
- 疲
- 蜘
- 蛛
- 冈
- 雾
- 娄
- 甫
- 裂
- 侦
- 愈
- 臂
- 甩
- 戈
- 钙
- 簿
- 淄
- 蓬
- 夷
- 汁
- 凶
- 匹
- 皆
- 凝
- 仰
- 叛
- 蒲
- 谣
- 砖
- 呈
- 浅
- 瞬
- 丞
- 粘
- 痕
- 癫
- 禺
- 靴
- 尝
- 枫
- 鹅
- 衷
- 暮
- 媚
- 堪
- 臣
- 瑟
- 榕
- 蘑
- 遂
- 舌
- 藤
- 遭
- 芭
- 暧
- 犹
- 砸
- 浇
- 晰
- 矮
- 禹
- 隶
- 蚊
- 塌
- 峪
- 渊
- 摘
- 崩
- 瞧
- 炭
- 瑶
- 纷
- 毁
- 瞒
- 橙
- 渣
- 霹
- 雳
- 粽
- 侧
- 胀
- 捐
- 栈
- 颈
- 伪
- 役
- 予
- 钝
- 菏
- 铠
- 稻
- 赠
- 芽
- 龚
- 幅
- 莓
- 轿
- 炖
- 炬
- 溢
- 扭
- 垂
- 坎
- 嚏
- 枯
- 绣
- 蒸
- 旬
- 迫
- 浒
- 肇
- 庸
- 蒂
- 踏
- 雯
- 埃
- 础
- 狙
- 陷
- 伽
- 滔
- 沦
- 祭
- 唠
- 瀑
- 矛
- 乒
- 乓
- 窍
- 渠
- 泛
- 陇
- 蒜
- 捉
- 扶
- 诀
- 纹
- 踢
- 馋
- 薪
- 坪
- 廉
- 荔
- 骏
- 颁
- 伸
- 贞
- 沾
- 疮
- 兮
- 擎
- 驱
- 馒
- 挖
- 韵
- 姬
- 砍
- 矫
- 巫
- 疙
- 瘩
- 峨
- 抄
- 函
- 歪
- 倚
- 昔
- 涕
- 憨
- 淇
- 宴
- 埠
- 渐
- 胳
- 膊
- 趁
- 擅
- 刑
- 渝
- 噬
- 斋
- 妍
- 债
- 邹
- 嫂
- 娥
- 践
- 禅
- 牲
- 帽
- 吨
- 腻
- 掖
- 榴
- 啸
- 纺
- 鞭
- 豚
- 爵
- 蹄
- 咙
- 澈
- 疹
- 氛
- 抑
- 绸
- 抹
- 奎
- 酬
- 坟
- 诶
- 勋
- 卑
- 沪
- 蚁
- 揉
- 锄
- 泌
- 槽
- 镖
- 卿
- 甸
- 帕
- 镁
- 盲
- 汾
- 携
- 宰
- 虞
- 瓣
- 辩
- 豌
- 樟
- 璐
- 沁
- 钦
- 蔚
- 彬
- 卦
- 轰
- 锈
- 茎
- 蹦
- 拐
- 坝
- 饥
- 捏
- 碑
- 嗓
- 澄
- 惨
- 沽
- 鄂
- 逻
- 谍
- 屿
- 聋
- 憋
- 泼
- 枕
- 盆
- 衫
- 慎
- 黛
- 轶
- 咽
- 匠
- 蚂
- 捶
- 脊
- 蚌
- 剥
- 穆
- 喇
- 叭
- 凳
- 滥
- 撤
- 蓑
- 笠
- 黔
- 诡
- 颐
- 闵
- 稚
- 茨
- 捆
- 芯
- 涩
- 哑
- 盈
- 衰
- 奢
- 贩
- 循
- 韭
- 绘
- 鸳
- 唇
- 恳
- 妥
- 杠
- 刊
- 戚
- 巩
- 胁
- 蜗
- 筝
- 漆
- 劈
- 泄
- 噩
- 椎
- 渔
- 氨
- 橘
- 仲
- 洱
- 绥
- 仿
- 耿
- 蚕
- 倦
- 葬
- 捞
- 拓
- 冤
- 御
- 忌
- 慨
- 弥
- 寡
- 昵
- 撕
- 鲤
- 隧
- 倡
- 臀
- 毙
- 蕊
- 甚
- 睹
- 哒
- 仇
- 栓
- 抒
- 滁
- 讶
- 皱
- 剖
- 闸
- 耻
- 顽
- 茅
- 碱
- 霏
- 坠
- 邑
- 嗦
- 缝
- 枚
- 垫
- 畜
- 侄
- 悴
- 庞
- 鸯
- 俏
- 铅
- 衔
- 浑
- 抖
- 逮
- 犀
- 滕
- 遮
- 淹
- 挪
- 柠
- 檬
- 荨
- 沛
- 喻
- 尹
- 抉
- 爪
- 甄
- 冀
- 蝉
- 汰
- 丧
- 愧
- 畏
- 屑
- 屉
- 娩
- 艰
- 弓
- 炜
- 框
- 娅
- 酵
- 掩
- 宪
- 枉
- 淫
- 糗
- 奸
- 岚
- 诅
- 釜
- 萱
- 窦
- 喆
- 浣
- 庐
- 阑
- 劣
- 窄
- 赈
- 茉
- 帜
- 缸
- 嫩
- 迦
- 憔
- 鸽
- 朴
- 洽
- 榆
- 烹
- 箫
- 荚
- 箍
- 稣
- 肢
- 磷
- 袭
- 橡
- 鸦
- 瞅
- 匡
- 禧
- 痣
- 勃
- 翡
- 篱
- 烽
- 衢
- 讪
- 烛
- 宥
- 铝
- 镯
- 钉
- 披
- 昼
- 跆
- 笈
- 喘
- 惫
- 唧
- 螂
- 涌
- 揣
- 旨
- 袄
- 笼
- 蛔
- 毯
- 凸
- 倪
- 碌
- 懈
- 履
- 鱿
- 菩
- 汝
- 赴
- 焉
- 钛
- 畔
- 掰
- 骆
- 崖
- 髓
- 彪
- 啰
- 撸
- 拌
- 漯
- 犒
- 蔽
- 漱
- 赐
- 饪
- 玖
- 弘
- 卵
- 沭
- 梓
- 禄
- 晖
- 籁
- 熏
- 祠
- 荟
- 伐
- 柄
- 昕
- 琶
- 鞠
- 豹
- 萎
- 裕
- 曰
- 苇
- 沌
- 牺
- 轴
- 薯
- 潞
- 痫
- 曦
- 腋
- 坞
- 隙
- 妊
- 娠
- 蝙
- 蝠
- 赘
- 咧
- 翩
- 棚
- 冕
- 旱
- 棱
- 巍
- 偕
- 杉
- 梵
- 嫦
- 煎
- 泊
- 辟
- 丛
- 艘
- 懦
- 郫
- 搅
- 佬
- 阖
- 焰
- 澜
- 琢
- 挚
- 嫣
- 啧
- 兜
- 趴
- 皂
- 窃
- 嘟
- 崛
- 睿
- 刃
- 绳
- 哗
- 窟
- 嗑
- 吭
- 朔
- 喵
- 粹
- 酶
- 辜
- 诫
- 筹
- 亩
- 椅
- 佐
- 俑
- 狡
- 陛
- 曙
- 攒
- 诈
- 叙
- 杖
- 馅
- 锌
- 矜
- 绮
- 刁
- 阙
- 亢
- 讼
- 驼
- 晃
- 逍
- 仕
- 芋
- 拇
- 掏
- 瘾
- 腕
- 魁
- 鲍
- 殷
- 荤
- 亨
- 凄
- 硝
- 嬛
- 藻
- 诣
- 桔
- 疡
- 氰
- 佰
- 鸠
- 埔
- 皋
- 谚
- 麒
- 廖
- 鳄
- 蹉
- 阎
- 琦
- 丙
- 烯
- 涮
- 絮
- 潢
- 郴
- 遛
- 琵
- 殿
- 蹭
- 笛
- 钾
- 辙
- 炊
- 廷
- 拦
- 哆
- 逐
- 钞
- 赋
- 孽
- 沸
- 龈
- 雌
- 玟
- 麓
- 焊
- 谨
- 衬
- 灸
- 栖
- 卉
- 脐
- 栽
- 扒
- 酚
- 肱
- 闺
- 猥
- 钩
- 羁
- 吱
- 吼
- 蹊
- 跷
- 磕
- 坷
- 蝇
- 唔
- 褶
- 钮
- 鹭
- 咔
- 沐
- 棠
- 锷
- 滞
- 肛
- 糜
- 噜
- 涧
- 儒
- 琅
- 捎
- 泵
- 葩
- 芥
- 轲
- 猾
- 拱
- 墅
- 蕲
- 馁
- 佚
- 渤
- 崎
- 峻
- 赎
- 霄
- 羯
- 缅
- 韧
- 勘
- 皖
- 顷
- 喀
- 忏
- 圭
- 槟
- 榔
- 兹
- 坂
- 镒
- 堕
- 蟒
- 芹
- 浃
- 哉
- 晏
- 绐
- 陀
- 茵
- 倘
- 缆
- 浊
- 碍
- 惰
- 濮
- 杵
- 削
- 裘
- 嗅
- 呕
- 绊
- 哩
- 腩
- 撇
- 郝
- 铿
- 锵
- 赃
- 缪
- 卤
- 吝
- 涟
- 冶
- 匪
- 婿
- 蛳
- 搏
- 圩
- 旷
- 汞
- 鹦
- 茱
- 粪
- 崂
- 陋
- 掐
- 郡
- 哮
- 邸
- 帘
- 柚
- 鬓
- 剃
- 忻
- 羔
- 聆
- 刹
- 嗷
- 罕
- 沥
- 钗
- 尴
- 尬
- 莽
- 捧
- 拽
- 懵
- 噶
- 虐
- 囚
- 囡
- 颓
- 亥
- 傍
- 疏
- 乞
- 丐
- 皓
- 孜
- 愣
- 檐
- 橱
- 绅
- 噻
- 痊
- 鳞
- 瞳
- 衩
- 捂
- 吔
- 螳
- 暇
- 嘎
- 缤
- 镍
- 吟
- 斥
- 饲
- 鲢
- 猩
- 狒
- 腼
- 腆
- 轼
- 梗
- 熨
- 荫
- 糙
- 妾
- 粕
- 烘
- 壹
- 骥
- 秽
- 熔
- 歹
- 谬
- 侈
- 蜈
- 蚣
- 婵
- 渍
- 斩
- 棕
- 辱
- 醇
- 磅
- 礴
- 颊
- 彝
- 庾
- 叠
- 忒
- 稽
- 幢
- 嘱
- 醛
- 砂
- 炳
- 拂
- 殇
- 邬
- 冥
- 擒
- 汶
- 罐
- 镑
- 祁
- 氮
- 怆
- 羌
- 拧
- 芸
- 堀
- 婊
- 暄
- 挎
- 躬
- 噎
- 菅
- 奂
- 龌
- 龊
- 睬
- 燎
- 鲈
- 拢
- 啬
- 脖
- 尧
- 馗
- 皎
- 滤
- 镶
- 椭
- 狈
- 澎
- 阉
- 侃
- 婕
- 脓
- 桨
- 阪
- 湃
- 溏
- 箕
- 蚯
- 蚓
- 呛
- 矩
- 彤
- 惟
- 鹉
- 讽
- 募
- 惦
- 飓
- 抠
- 肮
- 溟
- 膝
- 芗
- 逞
- 娌
- 湮
- 舵
- 挫
- 椰
- 螃
- 绽
- 蟑
- 聂
- 拘
- 萸
- 洼
- 弛
- 澧
- 玺
- 芊
- 枢
- 鲨
- 毋
- 搂
- 跎
- 趾
- 琐
- 徘
- 徊
- 濡
- 咩
- 钏
- 舔
- 烷
- 胺
- 拙
- 溺
- 竖
- 蕴
- 巅
- 魄
- 吖
- 啵
- 庇
- 灼
- 遣
- 怠
- 枭
- 乏
- 缕
- 掂
- 秩
- 蜕
- 泾
- 汀
- 肆
- 倔
- 吒
- 矣
- 豁
- 仨
- 俯
- 嘲
- 瞪
- 唬
- 骋
- 辍
- 曝
- 泻
- 鼾
- 捣
- 妨
- 撵
- 撮
- 猕
- 浜
- 哺
- 睫
- 荧
- 噪
- 栗
- 垣
- 獒
- 冼
- 瞄
- 刍
- 硅
- 翊
- 泓
- 枥
- 凋
- 匣
- 孢
- 飙
- 俭
- 珑
- 嵊
- 佣
- 祟
- 枞
- 蓟
- 斧
- 镕
- 棺
- 痔
- 娴
- 苔
- 笙
- 蔻
- 芮
- 迭
- 暨
- 诏
- 癜
- 芷
- 臧
- 驿
- 珂
- 藕
- 笋
- 竭
- 歼
- 铉
- 恹
- 雇
- 诲
- 漓
- 扳
- 寰
- 颂
- 缈
- 砣
- 戳
- 疣
- 寮
- 甥
- 牦
- 衅
- 湄
- 汨
- 褐
- 腑
- 啼
- 惭
- 痰
- 梳
- 驮
- 阮
- 壳
- 慷
- 牟
- 捺
- 瘁
- 锂
- 狩
- 沱
- 烁
- 摞
- 楷
- 楞
- 瑾
- 饯
- 灶
- 薰
- 伎
- 忐
- 忑
- 煽
- 骁
- 娲
- 赁
- 锑
- 嵌
- 苞
- 咫
- 锴
- 岐
- 蓓
- 毽
- 黏
- 攸
- 恰
- 惶
- 矶
- 簸
- 坨
- 踝
- 掺
- 榨
- 阀
- 婢
- 纨
- 搓
- 闫
- 瘫
- 垢
- 蚀
- 貂
- 壑
- 婧
- 腥
- 兖
- 觅
- 壤
- 珉
- 胭
- 惧
- 僻
- 峥
- 炀
- 蔗
- 铂
- 宛
- 巳
- 氟
- 秸
- 菁
- 鹃
- 疱
- 矢
- 拭
- 缀
- 朦
- 胧
- 筏
- 贯
- 汐
- 蛤
- 蟆
- 迩
- 犁
- 馈
- 叽
- 喳
- 袈
- 裟
- 啃
- 敞
- 踊
- 雏
- 朽
- 撩
- 恙
- 亵
- 淤
- 垦
- 眺
- 熄
- 衲
- 伺
- 墟
- 孚
- 墩
- 猬
- 堤
- 鞘
- 署
- 陂
- 鬟
- 萤
- 悯
- 恃
- 峙
- 咄
- 奠
- 跺
- 笆
- 啄
- 殆
- 赅
- 锭
- 铛
- 枷
- 姗
- 驭
- 嘀
- 煲
- 腚
- 霖
- 孪
- 翟
- 濒
- 邂
- 逅
- 筱
- 霓
- 窈
- 窕
- 眨
- 耸
- 羚
- 尉
- 谀
- 竿
- 蛟
- 籽
- 铲
- 潼
- 匆
- 肽
- 戬
- 岔
- 奚
- 裴
- 嘏
- 玥
- 妯
- 昙
- 烨
- 吏
- 鼹
- 筵
- 崭
- 涪
- 來
- 瘆
- 彰
- 杞
- 疽
- 琥
- A
- 栾
- 庵
- 窘
- 擀
- 痤
- 蟾
- 唾
- 嚼
- 癖
- 蛹
- 浸
- 狭
- 迂
- 脍
- 炙
- 覃
- 悖
- 阆
- 铸
- 洮
- 瑙
- 呷
- 呸
- 谛
- 膨
- 柑
- 眯
- 奘
- 吆
- 孰
- 珈
- 曜
- 拈
- 麝
- 嘘
- 缚
- 徕
- 糸
- 崴
- 藓
- 婺
- 揽
- 溧
- 熠
- 膳
- 犊
- 贬
- 脯
- 剿
- 鼬
- 焕
- 胛
- 拷
- 勺
- 鲫
- 炅
- 卒
- 刨
- 糯
- 瘪
- 雍
- 襟
- 酋
- 胤
- 戟
- 褔
- 惆
- 怅
- 阂
- 扉
- 锚
- 砌
- 祺
- 淅
- 濠
- 匀
- 隍
- 氦
- 绫
- 濑
- 佝
- 偻
- 翎
- 颌
- 咚
- 疖
- 媲
- 祗
- 寅
- 靡
- 稞
- 骝
- 锏
- 焖
- 栀
- 蝗
- 甭
- 罄
- 酪
- 酮
- 嘢
- 钨
- 涎
- 沼
- 嚯
- 阱
- 驸
- 爰
- 酌
- 绛
- 畴
- 辄
- 藜
- 碚
- 馥
- 茧
- 鲛
- 溅
- 浯
- 沮
- 蹿
- 诠
- 姊
- 藉
- 骡
- 褪
- 酞
- 臻
- 靛
- 譬
- 粼
- 肘
- 孺
- 苟
- 瓯
- 蕨
- 冉
- 稠
- 蒿
- 锤
- 焙
- 蜃
- 淌
- 瘸
- 汲
- 噼
- 啪
- 橇
- 虔
- 裳
- 煞
- 淳
- 锟
- 摧
- 篷
- 癞
- 凹
- 汹
- 樵
- 睐
- 叁
- 飒
- 舶
- 驷
- 嘚
- 垮
- 妩
- 焚
- 扪
- 溥
- 鹊
- 鹄
- 汴
- 妁
- 廓
- 谙
- 苛
- 喏
- 嬉
- 裆
- 谔
- 哝
- 岑
- 喧
- 咆
- 茁
- 霎
- 泷
- 笃
- 沣
- 戮
- 蓦
- 滢
- 碜
- 滇
- 妤
- 盯
- 眶
- 婶
- 侍
- 崽
- 辘
- 轳
- 斓
- 郢
- 泞
- 窖
- 镭
- 痹
- 缉
- 镐
- 膛
- 睦
- 歧
- 扦
- 筛
- 嵘
- 茗
- 戎
- 萦
- 柒
- 咀
- 诋
- 搁
- 婪
- 漾
- 瀚
- 绎
- 盏
- 庹
- 吩
- 咐
- 堇
- 矾
- 茯
- 苓
- 潦
- 嘁
- 噫
- 窑
- 鳗
- 孵
- 彷
- 徨
- 耕
- 晗
- 撂
- 猿
- 昊
- 淼
- 驯
- 垒
- 铤
- 胱
- 桦
- 铮
- 坳
- 厥
- 叨
- 烙
- 苷
- 殴
- 鸥
- 蜥
- 蜴
- 湟
- 衙
- 敖
- 阐
- 穗
- 攥
- 俾
- 锥
- 粱
- 绰
- 漕
- 钕
- 硼
- 蚤
- 铢
- 疚
- 挟
- 昱
- 栅
- 煦
- 鳝
- 枸
- 锯
- 茜
- 悼
- 跤
- 犍
- 衿
- 筐
- 恪
- 琛
- 砝
- 秆
- 歆
- 晾
- 慑
- 蜍
- 诃
- 盔
- 寇
- 璧
- 鹩
- 恤
- 匿
- 踉
- 焗
- 戍
- 憎
- 桓
- 裔
- 梢
- 蝼
- 贿
- 诽
- 橄
- 榄
- 蔺
- 鲅
- 鳖
- 荞
- 槐
- 砚
- 癣
- 胚
- 沅
- 菀
- 荀
- 亳
- 铵
- 垌
- 釉
- 摁
- 瑕
- 疵
- 泗
- 逵
- 饵
- 旌
- 磺
- 彗
- 娣
- 晟
- 惘
- 棘
- 屹
- 逾
- 淞
- 逑
- 茴
- 楹
- 珀
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 23f3f6d8b0b60152408dceddf2111f17 |