repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
susnato/bert-base-uncased-issues-128 | susnato | bert | 10 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
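As a rough illustration only (an assumption, not the original training script), these settings correspond to a `transformers.TrainingArguments` configuration along these lines:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the Adam betas/epsilon and linear scheduler are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-issues-128",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
)
```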
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1003 | 1.0 | 291 | 1.6578 |
| 1.6211 | 2.0 | 582 | 1.4140 |
| 1.4964 | 3.0 | 873 | 1.3040 |
| 1.41 | 4.0 | 1164 | 1.3011 |
| 1.336 | 5.0 | 1455 | 1.3095 |
| 1.2862 | 6.0 | 1746 | 1.3739 |
| 1.2271 | 7.0 | 2037 | 1.2743 |
| 1.2043 | 8.0 | 2328 | 1.2019 |
| 1.1701 | 9.0 | 2619 | 1.2696 |
| 1.1498 | 10.0 | 2910 | 1.2507 |
| 1.1194 | 11.0 | 3201 | 1.1398 |
| 1.1094 | 12.0 | 3492 | 1.1309 |
| 1.0913 | 13.0 | 3783 | 1.0740 |
| 1.0683 | 14.0 | 4074 | 1.1201 |
| 1.0607 | 15.0 | 4365 | 1.1690 |
| 1.0558 | 16.0 | 4656 | 1.0940 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
| 0c36919bc38928cbfe31f404bdbe6315 |
FUXI/yuyan-11b | FUXI | null | 9 | 0 | null | 1 | text-generation | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-generation', 'story-generation', 'pytorch', 'inference acceleration', 'gpt2', 'gpt3'] | false | true | true | 4,725 | false | # YuYan: Pre-training of Language Models for Story Generation
YuYan is a series of Chinese language models of different sizes, developed by the Fuxi AI Lab, NetEase Inc. They are trained on a large, high-quality dataset of Chinese novels.
YuYan belongs to the same family of decoder-only models as [GPT-2 and GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
Because the training data consists mainly of novels, the model is good at generating the next plot development given the story context.
## Model Inference Acceleration
As the model size increases, inference time grows and more computational resources are required.
Therefore, we developed our own transformer inference acceleration framework, [EET](https://github.com/NetEase-FuXi/EET.git). More details are in [Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model](https://aclanthology.org/2022.naacl-industry.8/).
We combine our language model with the EET inference framework to provide industrial-grade inference performance.
## How to use
Our model is trained with [fairseq](https://github.com/facebookresearch/fairseq). As a result, inference and fine-tuning depend on it.
For inference, we modify some parts of the original fairseq code, mainly in
> fairseq-0.12.2/fairseq/sequence_generator.py
We integrate the EET with the sequence generator. We replace the eos token with a token that is unlikely to be sampled, to ensure the generated text reaches the desired length. The repetition penalty trick is also modified; you can change the penalty strength by adjusting the value of `self.ban_weight`.
Then, to keep the eos token in the final generated text, we change line 75 from `include_eos=False` to `include_eos=True` in
> fairseq-0.12.2/fairseq/data/dictionary.py
Finally, to pass parameters in from Python scripts, we remove lines 67 to 69 in
>fairseq-0.12.2/fairseq/dataclass/utils.py
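The repetition penalty trick mentioned above can be sketched generically as follows (this is an illustration only, not the actual fairseq modification; `ban_weight` here is a stand-in for `self.ban_weight`):
```python
import torch

def penalize_repeats(logits: torch.Tensor, prev_tokens: torch.Tensor,
                     ban_weight: float = 1.2) -> torch.Tensor:
    # Generic CTRL-style repetition penalty: make tokens that were already
    # generated less likely. Larger ban_weight = stronger penalty.
    penalized = logits.clone()
    for tok in set(prev_tokens.tolist()):
        if penalized[tok] > 0:
            penalized[tok] /= ban_weight   # shrink positive logits
        else:
            penalized[tok] *= ban_weight   # push negative logits further down
    return penalized
```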
Below is the installation tutorial.
```
# install pytorch
pip install torch==1.8.1 # install pytorch
# install fairseq
unzip fairseq-0.12.2.zip
cd fairseq-0.12.2
pip install .
# install EET
git clone https://github.com/NetEase-FuXi/EET.git
cd EET
pip install .
# install transformers (EET requirements)
pip install transformers==4.23
# make a folder, move the dictionary file and model file into it.
mkdir transformer_lm_gpt2_xxl
mv dict.txt transformer_lm_gpt2_xxl/
mv checkpoint_best_part_*.pt transformer_lm_gpt2_xxl/
```
`inference.py` is a script that provides an interface to initialize the EET object and the sequence generator. In addition, it includes some pre-processing and post-processing functions for text input and output. You can modify the script according to your needs.
After the environment is ready, a few lines of code can run inference.
``` python
from inference import Inference
model_path = "transformer_lm_gpt2_xxl/checkpoint_best.pt"
data_path = "transformer_lm_gpt2_xxl"
eet_batch_size = 10 # max inference batch size, adjust according to cuda memory, 40GB memory is necessary
inference = Inference(model_path, data_path, eet_batch_size)
inp = "田园一听这话,轻挑的嘴角放了下来,两腿叉开,踱着方步,跨过汤婆子,一屁股坐在了老人面前。</s>刘萌和健军一左一右站在他身旁,像是王朝、马汉护着包公断案。"
text = inference([inp] * 10, append_right_eos=True)
```
This interface supports batch inputs, so if you need to generate multiple results for one input, you can copy the input multiple times. The interface also supports generating results for multiple different inputs, e.g.
```python
text = inference(["四个月后,正是草长花秾的暮春季节。</s>令狐冲和盈盈新婚燕尔,携手共赴华山。","院子中传来急促的脚步声,他停下手中的招式,将开元刀插入刀鞘。"])
```
## Citation
If you find the technical report or resources useful, please cite the following technical report in your paper.
- https://aclanthology.org/2022.naacl-industry.8/
```
@inproceedings{li-etal-2022-easy,
title = "Easy and Efficient Transformer: Scalable Inference Solution For Large {NLP} Model",
author = "Li, Gongzheng and
Xi, Yadong and
Ding, Jingzhen and
Wang, Duan and
Luo, Ziyang and
Zhang, Rongsheng and
Liu, Bai and
Fan, Changjie and
Mao, Xiaoxi and
Zhao, Zeng",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track",
month = jul,
year = "2022",
address = "Hybrid: Seattle, Washington + Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-industry.8",
doi = "10.18653/v1/2022.naacl-industry.8",
pages = "62--68"
}
```
## Contact Us
You can also contact us by email:
[email protected], [email protected]
| e0e9f748f8efa87c14cb74d862837ee0 |
sanchit-gandhi/whisper-medium-es-4k-1e-7-bs-32 | sanchit-gandhi | whisper | 15 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['facebook/multilingual_librispeech'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,557 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Es - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1694
- Wer: 7.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.7733 | 0.25 | 1000 | 0.6193 | 17.9946 |
| 0.2991 | 0.5 | 2000 | 0.3162 | 14.2555 |
| 0.2929 | 0.75 | 3000 | 0.1799 | 7.7752 |
| 0.3099 | 1.0 | 4000 | 0.1694 | 7.3696 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0
- Datasets 2.6.2.dev0
- Tokenizers 0.12.1
| 7430e5605e8ff17d8bcb630a664acd5b |
sd-concepts-library/venice | sd-concepts-library | null | 13 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,390 | false | ### venice on Stable Diffusion
This is the `<venice>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<venice> 0](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/1.jpeg)
![<venice> 1](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/5.jpeg)
![<venice> 2](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/7.jpeg)
![<venice> 3](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/3.jpeg)
![<venice> 4](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/2.jpeg)
![<venice> 5](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/6.jpeg)
![<venice> 6](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/0.jpeg)
![<venice> 7](https://huggingface.co/sd-concepts-library/venice/resolve/main/concept_images/4.jpeg)
| ad6d3718973b8a107b34fa7c3e95a842 |
Krishadow/biobert-finetuned-ner | Krishadow | bert | 8 | 3 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,531 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Krishadow/biobert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0450
- Validation Loss: 0.0593
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 678, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
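As a rough illustration (an assumption, not the original training script), an optimizer like the one serialized above is what `transformers.create_optimizer` produces:
```python
from transformers import create_optimizer

# Hypothetical reconstruction of the AdamWeightDecay + PolynomialDecay
# configuration above (2e-05 initial LR decayed to 0 over 678 steps).
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=678,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```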
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1443 | 0.0597 | 0 |
| 0.0450 | 0.0593 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 695315c94aaa7e69fc27cf0665f4099a |
adsjklfsd/distilbert-base-uncased-finetuned-emotion | adsjklfsd | distilbert | 12 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8222 | 1.0 | 250 | 0.3162 | 0.9085 | 0.9063 |
| 0.2501 | 2.0 | 500 | 0.2226 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.1+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
| 981ca25bc509683a4189970673e54e90 |
hkunlp/instructor-xl | hkunlp | t5 | 14 | 279 | sentence-transformers | 12 | sentence-similarity | true | false | false | apache-2.0 | ['en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-embedding', 'embeddings', 'information-retrieval', 'beir', 'text-classification', 'language-model', 'text-clustering', 'text-semantic-similarity', 'text-evaluation', 'prompt-retrieval', 'text-reranking', 'sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers', 't5', 'English', 'Sentence Similarity', 'natural_questions', 'ms_marco', 'fever', 'hotpot_qa', 'mteb'] | true | true | true | 6,303 | false |
# hkunlp/instructor-xl
We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation) and domain (e.g., science, finance) ***by simply providing the task instruction, without any finetuning***. Instructor👨 achieves state-of-the-art (SOTA) performance on 70 diverse embedding tasks!
The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 01/21: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-xl) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-xl) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-xl')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
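For example, an instruction following this template could look like the one below (the instruction string is an illustrative assumption built from the template, not one taken from the paper):
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-xl')
# Template: "Represent the <domain> <text_type> for <task_objective>:"
instruction = "Represent the Finance document for retrieval:"  # hypothetical instruction
embeddings = model.encode([[instruction, "Stocks rallied after the central bank held rates steady."]])
print(embeddings.shape)
```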
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster
sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
``` | 23c06254f67b8ad723affed51716e3f4 |
mchochowski/test-model | mchochowski | null | 4 | 14 | transformers | 0 | image-classification | false | false | false | apache-2.0 | null | ['imagenet'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'resnet'] | false | true | true | 4,180 | false |
### Model Description
The ***ResNet50 v1.5*** model is a modified version of the [original ResNet50 v1 model](https://arxiv.org/abs/1512.03385).
The difference between v1 and v1.5 is that, in the bottleneck blocks which require
downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution.
This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but it comes with a small performance drawback (\~5% imgs/sec).
The model is initialized as described in [Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification](https://arxiv.org/pdf/1502.01852.pdf).
This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results over 2x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
Note that the ResNet50 v1.5 model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/NVIDIA/trtis-inference-server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnet_for_triton_from_pytorch)
### Example
In the example below, we will use the pretrained ***ResNet50 v1.5*** model to perform inference on ***images*** and present the results.
To run the example you need some extra python packages installed. These are needed for preprocessing images and visualization.
```python
!pip install validators matplotlib
```
```python
import torch
from PIL import Image
import torchvision.transforms as transforms
import numpy as np
import json
import requests
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(f'Using {device} for inference')
```
Load the model pretrained on IMAGENET dataset.
```python
resnet50 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resnet50', pretrained=True)
utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils')
resnet50.eval().to(device)
```
Prepare sample input data.
```python
uris = [
'http://images.cocodataset.org/test-stuff2017/000000024309.jpg',
'http://images.cocodataset.org/test-stuff2017/000000028117.jpg',
'http://images.cocodataset.org/test-stuff2017/000000006149.jpg',
'http://images.cocodataset.org/test-stuff2017/000000004954.jpg',
]
batch = torch.cat(
[utils.prepare_input_from_uri(uri) for uri in uris]
).to(device)
```
Run inference. Use the `pick_n_best(predictions=output, n=topN)` helper function to pick the N most probable hypotheses according to the model.
```python
with torch.no_grad():
output = torch.nn.functional.softmax(resnet50(batch), dim=1)
results = utils.pick_n_best(predictions=output, n=5)
```
Display the result.
```python
for uri, result in zip(uris, results):
img = Image.open(requests.get(uri, stream=True).raw)
img.thumbnail((256,256), Image.ANTIALIAS)
plt.imshow(img)
plt.show()
print(result)
```
### Details
For detailed information on model input and output, training recipes, inference and performance visit:
[github](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5)
and/or [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnet_50_v1_5_for_pytorch)
### References
- [Original ResNet50 v1 paper](https://arxiv.org/abs/1512.03385)
- [Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification](https://arxiv.org/pdf/1502.01852.pdf)
- [model on github](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5)
- [model on NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnet_50_v1_5_for_pytorch)
- [pretrained model on NGC](https://ngc.nvidia.com/catalog/models/nvidia:resnet50_pyt_amp)
| 0da925f8cdec0ab06a770d8ae3e5a813 |
abishanth/crpf_analysis_trail_1 | abishanth | distilbert | 15 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,037 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# crpf_analysis_trail_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0301
- Accuracy: 0.9935
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 7294f44b996a90b477e343f6ae32cbc2 |
DrishtiSharma/whisper-large-v2-ml-700-steps | DrishtiSharma | whisper | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ml'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,323 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2 Malayalam- Drishti Sharma
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3159
- Wer: 28.2886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0002 | 12.96 | 700 | 0.3159 | 28.2886 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| e1ae60f04f4fe80440a79b9a17d03737 |
vishwasgautam/HuBERT-base-libriSpeech-demo-colab | vishwasgautam | hubert | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,359 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuBERT-base-libriSpeech-demo-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1456
- Wer: 0.2443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.6395 | 13.51 | 500 | 3.1933 | 0.9930 |
| 2.5994 | 27.03 | 1000 | 0.1456 | 0.2443 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 508ce17af20be0246419d3a38f6463d1 |
wietsedv/xlm-roberta-base-ft-udpos28-cy | wietsedv | xlm-roberta | 8 | 13 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['cy'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['part-of-speech', 'token-classification'] | true | true | true | 565 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Welsh
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cy")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-cy")
```
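As a usage sketch (not part of the original card), the model can also be wrapped in a `transformers` token-classification pipeline:
```python
from transformers import pipeline

# Minimal sketch; the example sentence is an arbitrary Welsh phrase.
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-cy")
print(tagger("Mae'r tywydd yn braf heddiw."))  # "The weather is nice today."
```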
| 8aa568fb46e55a8e84c605896eb885de |
kingabzpro/wav2vec2-large-xls-r-300m-Indonesian | kingabzpro | wav2vec2 | 16 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event'] | true | true | true | 1,927 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Wer: 0.2461
- Cer: 0.0666
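As a minimal inference sketch (an illustration, not part of the original card), the checkpoint can be used with the standard ASR pipeline:
```python
from transformers import pipeline

# Minimal sketch; "indonesian_sample.wav" is a placeholder audio file
# (16 kHz mono is the expected input for XLS-R models).
asr = pipeline("automatic-speech-recognition",
               model="kingabzpro/wav2vec2-large-xls-r-300m-Indonesian")
print(asr("indonesian_sample.wav")["text"])
```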
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.0788 | 4.26 | 200 | 2.9389 | 1.0 | 1.0 |
| 2.8288 | 8.51 | 400 | 2.2535 | 1.0 | 0.8004 |
| 0.907 | 12.77 | 600 | 0.4558 | 0.4243 | 0.1095 |
| 0.4071 | 17.02 | 800 | 0.4013 | 0.3468 | 0.0913 |
| 0.3 | 21.28 | 1000 | 0.4167 | 0.3075 | 0.0816 |
| 0.2544 | 25.53 | 1200 | 0.4132 | 0.2835 | 0.0762 |
| 0.2145 | 29.79 | 1400 | 0.3878 | 0.2693 | 0.0729 |
| 0.1923 | 34.04 | 1600 | 0.4023 | 0.2623 | 0.0702 |
| 0.1681 | 38.3 | 1800 | 0.3984 | 0.2581 | 0.0686 |
| 0.1598 | 42.55 | 2000 | 0.3982 | 0.2493 | 0.0663 |
| 0.1464 | 46.81 | 2200 | 0.4087 | 0.2461 | 0.0666 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| 2248a0490959902fdcd159557f83cf7e |
AlekseyKorshuk/1.3b-dalio-principles-book | AlekseyKorshuk | opt | 13 | 2 | transformers | 0 | text-generation | true | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,115 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1.3b-dalio-principles-book
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4512
- Accuracy: 0.4741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6914 | 0.14 | 1 | 2.6895 | 0.4477 |
| 2.6897 | 0.29 | 2 | 2.6895 | 0.4477 |
| 2.668 | 0.43 | 3 | 2.7031 | 0.4403 |
| 2.7434 | 0.57 | 4 | 2.5918 | 0.4533 |
| 2.6265 | 0.71 | 5 | 2.5410 | 0.4618 |
| 2.5259 | 0.86 | 6 | 2.5156 | 0.4641 |
| 2.5566 | 1.0 | 7 | 2.4902 | 0.4667 |
| 2.2317 | 1.14 | 8 | 2.4766 | 0.4707 |
| 2.2397 | 1.29 | 9 | 2.4727 | 0.4705 |
| 2.0162 | 1.43 | 10 | 2.4766 | 0.4690 |
| 2.0537 | 1.57 | 11 | 2.4805 | 0.4707 |
| 2.1432 | 1.71 | 12 | 2.4707 | 0.4714 |
| 2.0822 | 1.86 | 13 | 2.4570 | 0.4724 |
| 1.9056 | 2.0 | 14 | 2.4512 | 0.4741 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| d325a95dafb4aba8931b374c744c2d02 |
sasuke/bert-base-uncased-finetuned-sst2 | sasuke | bert | 29 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,463 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1817 | 1.0 | 4210 | 0.2920 | 0.9186 |
| 0.1297 | 2.0 | 8420 | 0.3069 | 0.9209 |
| 0.0978 | 3.0 | 12630 | 0.2982 | 0.9323 |
| 0.062 | 4.0 | 16840 | 0.3278 | 0.9312 |
| 0.0303 | 5.0 | 21050 | 0.3642 | 0.9323 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| f51121dce2febf1abda52abda04cce20 |
jgammack/MTL-roberta-base | jgammack | roberta | 22 | 4 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,884 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8338 | 1.0 | 98 | 1.6750 |
| 1.7732 | 2.0 | 196 | 1.6229 |
| 1.7208 | 3.0 | 294 | 1.6131 |
| 1.6917 | 4.0 | 392 | 1.5936 |
| 1.6579 | 5.0 | 490 | 1.6183 |
| 1.6246 | 6.0 | 588 | 1.6015 |
| 1.6215 | 7.0 | 686 | 1.5248 |
| 1.5743 | 8.0 | 784 | 1.5454 |
| 1.5621 | 9.0 | 882 | 1.5925 |
| 1.5652 | 10.0 | 980 | 1.5213 |
| 1.5615 | 11.0 | 1078 | 1.4845 |
| 1.5349 | 12.0 | 1176 | 1.5443 |
| 1.5165 | 13.0 | 1274 | 1.5304 |
| 1.5164 | 14.0 | 1372 | 1.4773 |
| 1.5293 | 15.0 | 1470 | 1.5537 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1a42fb08f72de6f194875a6f0d6a7d67 |
jonatasgrosman/exp_w2v2t_th_wav2vec2_s664 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 459 | false | # exp_w2v2t_th_wav2vec2_s664
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| f0b8c7bfbd1733bdfc143aec9d70d3f6 |
StarwingDigital/Oldjourney | StarwingDigital | null | 25 | 0 | diffusers | 6 | null | false | false | false | creativeml-openrail-m | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Text-to-image', 'Diffusers', 'stable-diffusion'] | false | true | true | 6,858 | false | <b>Oldjourney</b>
Oldjourney is a finetuned Stable Diffusion 2.1 model trained on images from Midjourney 3 using Dreambooth. That older version of Midjourney was often messy and imprecise, but had a great artistic style. These two versions of Oldjourney can recreate the essence of that art style with added details, precision, and quality.
The two models, Oldjourney Ultra and Oldjourney Lite, are very similar, but they have different strengths. Ultra is better at people, while Lite is better at painterly style images.
Use the keyword <b>Oldjourney</b> to trigger the style, and set the resolution to 768 x 768 or greater. Examples and sample prompts below.
This is a model for Stable Diffusion 2.1, so make sure to download the yaml files.
<b>Rendered with Oldjourney Lite</b>
![Oldjourney Lite.png](https://s3.amazonaws.com/moonup/production/uploads/1673363360976-6362b8dc2a84d82a8c91145c.png)
<b>Rendered with Oldjourney Ultra</b>
![Oldjourney Ultra.png](https://s3.amazonaws.com/moonup/production/uploads/1673363412363-6362b8dc2a84d82a8c91145c.png)
<b>Sample Prompts for Oldjourney Lite</b>
<b>Sample 1</b>
Oldjourney the legendary dream vortex and a dreamer, a boy laying on a bed in front of a vortex, ultrafine detailed painting, psychedelic art, watching the stars at night, pulled into the spiral vortex, videogame cover art, ... if only i could sleep, discord profile picture, time travel machine, photoshop render
<b>Negative prompt:</b> pink, ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 810775161, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample 2</b>
Oldjourney an image of a wizard with a glowing staff turned to the side, black background, light art, full of colors and rich detail, color grunge, profile picture 1024px, glowing liquid, high detailed colors, colorful fire, an old man, blacklight, discord profile picture
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2371590421, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample 3</b>
Oldjourney a dog with a tiny top hat and steampunk goggles on its head and a steampunk collar, matte painting, insanely detailed, ultrafine details, hyperrealism
<b>Negative prompt:</b> (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3142299054, Size: 768x768, Model: Oldjourney Lite, ENSD: 1</i>
<b>Sample Prompts for Oldjourney Ultra</b>
<b>Sample 4</b>
Oldjourney A woman facing the camera dancing aura of cosmic energy vortex of sparkling blue sand and glowing embers ((grunge)) smoke magical eerie noir lighting stars in the sky ethereal dream sandman surreal rembrandt artstation dark atmosphere 8k highly detailed atmospheric
<b>Negative prompt:</b> ugly, tiling, (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), out of frame, extra limbs, less than two arms, less than two legs, disfigured, deformed, body out of frame, blurry, (bad anatomy:1.2), blurred, grainy, cut off, draft, (overexposure:1.2), (high contrast:1.2),(cropped:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2676530026, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
<b>Sample 5</b>
Oldjourney your fate revealed inside a crystal ball, crystal ball with swirling otherworldly fog reveals your fate, insanely detailed masterpiece Trending on Artstation 8k ray traced volumetric lighting ambient occlusion ultrafine details digital art painting
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2555061923, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
<b>Sample 6</b>
Oldjourney cosmic queen, ethereal woman with a crown on her head, head and shoulders portrait, fantasy art, star sky, star sky, face illuminated, sparkle, stars, cosmos, paticles
<b>Negative prompt:</b> ugly, tiling, out of frame, body out of frame, blurry, blurred, grainy, cut off, draft, (cropped:1.2),(overexposure:1.2), (high contrast:1.2), (poorly drawn hands:1.2), (poorly drawn feet:1.2), (poorly drawn face:1.2), (too long neck:1:2), (extra limbs:1.2), (less than two arms:1.2), (less than two legs:1.2), disfigured, deformed,(bad anatomy:1.2), (watermark:1.2), (logo:1.2), (barcode:1.2), (UI:1.2), (signature:1.2), (text:1.2), (label:1.5), (error:1.2), (title:1.2), stickers, markings, speech bubbles, lines, cropped, low res, low quality, artifacts, low quality, worst quality, bad quality
<i>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 868461039, Face restoration: GFPGAN, Size: 768x768, Model: Oldjourney Ultra, ENSD: 1</i>
| ce1c44f3f6793ffa1be68e8dd8177758 |
spacy/fr_core_news_lg | spacy | null | 28 | 30 | spacy | 1 | token-classification | false | false | false | lgpl-lr | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 11,828 | false | ### Details: https://spacy.io/models/fr#fr_core_news_lg
French pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
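A minimal usage sketch (assuming the `fr_core_news_lg` package is already installed in the environment):
```python
import spacy

# Load the French pipeline and run the full tok2vec/morphologizer/parser/NER stack.
nlp = spacy.load("fr_core_news_lg")
doc = nlp("La Tour Eiffel se trouve à Paris.")
for token in doc:
    print(token.text, token.pos_, token.dep_)
print([(ent.text, ent.label_) for ent in doc.ents])
```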
| Feature | Description |
| --- | --- |
| **Name** | `fr_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | [UD French Sequoia v2.8](https://github.com/UniversalDependencies/UD_French-Sequoia) (Candito, Marie; Seddah, Djamé; Perrier, Guy; Guillaume, Bruno)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[spaCy lookups data](https://github.com/explosion/spacy-lookups-data) (Explosion)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) |
| **License** | `LGPL-LR` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (237 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|Poss=Yes`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|Number=Plur\|POS=ADP\|PronType=Art`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|PronType=Int`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=DET\|Poss=Yes`, `POS=AUX\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `POS=PRON\|Person=3\|Reflex=Yes`, `Gender=Masc\|POS=NOUN`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PRON\|Person=3`, `Number=Plur\|POS=NOUN`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=3`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET`, 
`Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes`, `Gender=Masc\|POS=PRON`, `POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=PRON`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=PRON`, `POS=NUM`, `Gender=Fem\|POS=NOUN`, `POS=SPACE`, `Gender=Fem\|Number=Plur\|POS=PRON`, `Number=Plur\|POS=PRON\|Person=3`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Number=Plur\|POS=PRON\|Person=2`, `NumType=Card\|POS=PRON`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NOUN`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|POS=PRON`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `POS=SYM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `POS=DET`, `Gender=Masc\|Number=Plur\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=1\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Masc\|NumType=Card\|POS=NUM` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux:pass`, `aux:tense`, `case`, `cc`, `ccomp`, `conj`, `cop`, `dep`, `det`, `expl:comp`, `expl:pass`, `expl:subj`, `fixed`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl:agent`, `obl:arg`, `obl:mod`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.80 |
| `TOKEN_P` | 98.44 |
| `TOKEN_R` | 98.96 |
| `TOKEN_F` | 98.70 |
| `POS_ACC` | 97.34 |
| `MORPH_ACC` | 96.74 |
| `MORPH_MICRO_P` | 98.91 |
| `MORPH_MICRO_R` | 98.17 |
| `MORPH_MICRO_F` | 98.54 |
| `SENTS_P` | 85.92 |
| `SENTS_R` | 89.26 |
| `SENTS_F` | 87.35 |
| `DEP_UAS` | 90.29 |
| `DEP_LAS` | 86.54 |
| `TAG_ACC` | 94.47 |
| `LEMMA_ACC` | 91.36 |
| `ENTS_P` | 83.99 |
| `ENTS_R` | 83.87 |
| `ENTS_F` | 83.93 | | 5e2a6466e7c2ed2a03fe988d87d3486f |
heziiiii/ddpm-butterflies-128 | heziiiii | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,230 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not an official example): load the pipeline from the Hub
# and sample one unconditional butterfly image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("heziiiii/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full DDPM reverse process
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the [`huggan/smithsonian_butterflies_subset`](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset, a subset of butterfly photographs from the Smithsonian collection.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/heziiiii/ddpm-butterflies-128/tensorboard?#scalars)
| 1d9ac641223f45476fb771d4848b4b72 |
TheSkinnyRat/TI-EMB_elaina | TheSkinnyRat | null | 23 | 0 | null | 4 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion'] | false | true | true | 1,933 | false |
# Info
> Trainer: [TheSkinnyRat](https://huggingface.co/TheSkinnyRat)\
> Type: Textual Inversion Embeddings
# Description
> Elaina (イレイナ, Ireina) is the main protagonist of the Wandering Witch series.
> She is a witch with the witch name of The Ashen Witch.
> ([Fandom](https://wandering-witch.fandom.com/wiki/Elaina))
# Download
> [Model download](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/tree/main)
# Training
> Model Used: [nai-wd.ckpt](https://huggingface.co/andite/training_models/tree/main)\
> Dataset: 13 images\
> Size: 512x512\
> Steps: 7500\
> N Steps: 500
# Preview
> **Model:** [anything-v4.5-pruned.ckpt](https://huggingface.co/andite/anything-v4.0/tree/main)\
> **Model VAE:** [anything-v4.0.vae.pt](https://huggingface.co/andite/anything-v4.0/tree/main)\
> **Prompt:** masterpiece, best quality, EMB_elaina-7500\
> **Negative Prompt:** obese, (ugly:1.3), (duplicate:1.3), (morbid), (mutilated), out of frame, extra fingers, mutated hands, (poorly drawn hands), (poorly drawn face), (mutation:1.3), (deformed:1.3), (amputee:1.3), blurry, bad anatomy, bad proportions, (extra limbs), cloned face, (disfigured:1.3), gross proportions, (malformed limbs), (missing arms), (missing legs), (extra arms), (extra legs), mutated hands, (fused fingers), (too many fingers), (long neck:1.3), lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, black and white, monochrome, censored,empty
![Preview1](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/resolve/main/preview/1.png)
![Preview2](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/resolve/main/preview/2.png)
![Preview3](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/resolve/main/preview/3.png)
![Preview4](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/resolve/main/preview/4.png)
![Preview5](https://huggingface.co/TheSkinnyRat/TI-EMB_elaina/resolve/main/preview/5.png) | a6af81dbf17d0ce280f1fc326c07c815 |
annahaz/xlm-roberta-base-misogyny-sexism-tweets | annahaz | xlm-roberta | 10 | 139 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,864 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-tweets
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5009
- Accuracy: 0.796
- F1: 0.8132
- Precision: 0.75
- Recall: 0.888
- Mae: 0.204
- Tn: 352
- Fp: 148
- Fn: 56
- Tp: 444
## Model description
More information needed
## Intended uses & limitations
More information needed
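That said, a minimal inference sketch with the 🤗 `pipeline` API (note: the mapping of the returned labels to misogynous/non-misogynous is not documented here and should be verified against the model's `id2label` config):

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="annahaz/xlm-roberta-base-misogyny-sexism-tweets",
)

# Returns a list like [{"label": ..., "score": ...}]; check id2label for label meaning.
print(classifier("Example tweet text goes here."))
```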
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:---:|:--:|:---:|
| 0.4947 | 1.0 | 1646 | 0.4683 | 0.765 | 0.7866 | 0.7205 | 0.866 | 0.235 | 332 | 168 | 67 | 433 |
| 0.4285 | 2.0 | 3292 | 0.4514 | 0.779 | 0.8004 | 0.7298 | 0.886 | 0.221 | 336 | 164 | 57 | 443 |
| 0.3721 | 3.0 | 4938 | 0.4430 | 0.781 | 0.8060 | 0.7234 | 0.91 | 0.219 | 326 | 174 | 45 | 455 |
| 0.3127 | 4.0 | 6584 | 0.5009 | 0.796 | 0.8132 | 0.75 | 0.888 | 0.204 | 352 | 148 | 56 | 444 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 904eb9c1e6f7faf20262fc071b839843 |
mattthew/technicolor-50s-diffusion | mattthew | null | 9 | 0 | null | 0 | null | false | false | false | cc-by-sa-4.0 | null | null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | [] | false | true | true | 2,283 | false | # 🌈 Technicolor-50s Diffusion
## Style Description
- highly-saturated postcard-like colors, flat high-key lighting, strong rim-lighting, 40s and 50s lifestyle
## Sample Output (Raw Output)
![Asian woman](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00006-1638627547-tchnclr%20style.png)
<sub>tchnclr style, a closeup portrait of Brenda Song, happy beaming content, glitter, glittery
Negative prompt: b&w, lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ugly
Steps: 40, Sampler: Euler, CFG scale: 7, Seed: 1638627547, Size: 512x512, Model hash: ed87e89c, Variation seed: 3476746822, Variation seed strength: 0.2</sub>
![White man loves dog](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00001-2257021426-closeup%20portr.png)
<sub>Use a PNG metadata tool to view the prompts and settings used to produce these images</sub>
![Dapper Japanese man](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00003-706122643-tchnclr%20style%2C.png)
![Black sci-fi woman](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00000-1612917422-a%20closeup%20por.png)
![Man in glittery outfit](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00005-2202944893-tchnclr%20style.png)
![White woman with laptop](https://huggingface.co/mattthew/technicolor-50s-diffusion/resolve/main/00002-117811130-tchnclr%20style%2C.png)
## Recommended Usage
- Your prompt must include "tchnclr style"
- Use CFG of 7 or 8 for best results
- The model was trained with and excels at closeup portraits of men and women
- Try including "glitter" in your prompt!
- Putting "b&w" as a negative prompt will help ensure color image
## Known Limitations
- It strongly tries to insert 40s and 50s hairstyles, clothing, and scenery
- As you can see from the examples, you can insert some modernity and blend with other styles. But if your prompt insists on modern elements, the technicolor effect may disappear.
- The model tends to turn men into women. It also likes to add hats!
## Training Process
20 images from movies filmed in Technicolor, 200 photo-like classification images, 6000 steps, using the Dreambooth Extension for Automatic1111.
Zlikwid/zlikwidv2 | Zlikwid | null | 18 | 3 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 418 | false | ### ZlikwidV2 Dreambooth model trained by Zlikwid with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| 2b49a1e7bf066806e97791b1feec57df |
mrsteyk/openchatgpt-neo-125m | mrsteyk | gpt_neo | 11 | 0 | transformers | 0 | text-generation | true | false | false | mit | ['en'] | null | null | 1 | 0 | 1 | 0 | 3 | 0 | 3 | ['generated_from_trainer', 'text generation', 'pytorch', 'casual-lm'] | true | true | true | 2,955 | false |
# --- Disclaimer ---
# "Neo is an incredibly cursed codebase, it should not be used by anyone" (C) co-founder of EleutherAI - Connor Leahy
# !!! USE [openchatgpt-neox-125m](https://huggingface.co/mrsteyk/openchatgpt-neox-125m) INSTEAD !!!
# --- Archived ---
# openchatgpt-neo-r1
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the openchatgpt safe-r1 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2156
- Accuracy: 0.8338
## Model description
A finetune based on the inner workings of ChatGPT. I won't elaborate on that. You must have a faint idea of how the prompt is made for it to spit out anything that's not a garbled mess.
This is effectively a schizophrenic idea that saw the light of day. Practically a collab of 3 students in a virtual shed.
## Intended uses & limitations
Intended uses & limitations fall in line with OpenAI's. The dataset used consists of safe texts (i.e. not highly sexual/erotica-type stuff). An NSFW version of the dataset is not planned to exist at the moment.
Keep in mind that this is the 125m version of GPT-Neo. My 1050Ti Mobile couldn't even handle that without gradient thingmabobs. If anyone knows how to effectively finetune larger models on free colabs - feel free to let me know. The Pile tokenizer also has one downside compared to the native GPT-2/3 one - `Assistant`.
## Training and evaluation data
Data was split in a ratio of 95%/5%. Preprocessing included removing mentions of OpenAI wherever they were not deemed appropriate (GPT-2 has one of the appropriate mentions). The whole dataset consists of just shy of 3k input-output pairs. One input has multiple outputs (read as: one message has multiple variants of an answer). <<<1% (3 total) are curated lines (i.e. lines where a huge mistake was spotted that needed correcting).
Heavy bias on IT.
## Training procedure
Input and output were straight up concatenated due to the nature of how ChatGPT works. The padding token chosen was the same as the separator token; if that's not effective, please let me know, as I am new to this stuff.
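A rough sketch of what that preprocessing could look like (my reconstruction from the description above, not the actual training script; using the EOS token as both separator and pad is an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
# Padding token set equal to the separator token, as described above.
tokenizer.pad_token = tokenizer.eos_token

prompt = "How do I reverse a list in Python?"
answer = "You can call list.reverse() or use slicing: my_list[::-1]."

# Input and output concatenated into one sequence for causal LM training.
sample = prompt + tokenizer.eos_token + answer + tokenizer.eos_token
encoded = tokenizer(sample, padding="max_length", max_length=256, truncation=True)
```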
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.9203 | 1.0 | 1378 | 5.1668 | 0.7274 |
| 4.1368 | 2.0 | 2756 | 4.3841 | 0.7563 |
| 3.4554 | 3.0 | 4134 | 3.8068 | 0.7875 |
| 2.7598 | 4.0 | 5512 | 3.3097 | 0.8303 |
| 2.5879 | 5.0 | 6890 | 3.2156 | 0.8338 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 363a38baa41437aac5164884c2768719 |
Siddu0406/gpt-2-model-2 | Siddu0406 | gpt2 | 13 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,026 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-2-model-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| f086e22089fd1f2875c22a8e19035a33 |
cammy/bart-large-cnn-100-lit-evalMA-NOpad2 | cammy | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,552 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-lit-evalMA-NOpad2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2126
- Rouge1: 25.6196
- Rouge2: 7.2753
- Rougel: 18.0987
- Rougelsum: 20.8416
- Gen Len: 67.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.0890 | 23.5493 | 8.9875 | 17.1471 | 20.1643 | 67.8 |
| No log | 2.0 | 200 | 1.2126 | 25.6196 | 7.2753 | 18.0987 | 20.8416 | 67.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
| 2904c5d06d606c80b84944cc19b49c00 |
Mehtap/whisper-base | Mehtap | whisper | 25 | 20 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 2,002 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base Turkish Whisper (BTW)
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ermetal Meetings dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Wer: 0.0
- Cer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.8786 | 6.63 | 100 | 1.3510 | 0.7866 | 0.6649 |
| 0.4559 | 13.32 | 200 | 0.3395 | 0.3590 | 0.2157 |
| 0.0793 | 19.95 | 300 | 0.0564 | 0.0996 | 0.0531 |
| 0.0137 | 26.63 | 400 | 0.0120 | 0.0017 | 0.0017 |
| 0.0042 | 33.32 | 500 | 0.0032 | 0.0 | 0.0 |
| 0.0021 | 39.95 | 600 | 0.0018 | 0.0 | 0.0 |
| 0.0014 | 46.63 | 700 | 0.0013 | 0.0 | 0.0 |
| 0.0012 | 53.32 | 800 | 0.0011 | 0.0 | 0.0 |
| 0.001 | 59.95 | 900 | 0.0010 | 0.0 | 0.0 |
| 0.001 | 66.63 | 1000 | 0.0009 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.9.1+cu111
- Datasets 2.7.1
- Tokenizers 0.13.2
| 4de42494bf32c06be6b744bc961bde5e |
fathyshalab/all-roberta-large-v1-meta-6-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,507 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-meta-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4797
- Accuracy: 0.28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 |
| 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 |
| 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 |
| 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 |
| 1.4612 | 5.0 | 5 | 2.4797 | 0.28 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| f263b49c8773eee9a7f13bdb4ac4bad4 |
Helsinki-NLP/opus-mt-fi-xh | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-fi-xh
* source languages: fi
* target languages: xh
* OPUS readme: [fi-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.eval.txt)
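For inference via 🤗 Transformers, a generic MarianMT sketch (not part of the original release notes; requires `sentencepiece`):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-xh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Finnish sentence into Xhosa.
batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```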
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.xh | 25.3 | 0.554 |
| 033987db784f4f88bd650c5730a9f0ac |
lgris/bp500-base100k_voxpopuli | lgris | wav2vec2 | 9 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['common_voice', 'mls', 'cetuc', 'lapsbm', 'voxforge', 'tedx', 'sid'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | true | true | 12,406 | false |
# bp500-base100k_voxpopuli: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese, trained using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a large open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with fields such as place of birth, age, gender, education, and occupation.
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, that were used for validation/test respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94.0h | -- | 5.4h |
| Common Voice | 37.8h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 148.9h | -- | 1.8h |
| SID | 7.2h | -- | 1.0h |
| VoxForge | 3.9h | -- | 0.1h |
| Total | 453.6h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/10iESR5AQxuxF5F7w3wLbpc_9YMsYbY9H/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500-base100k_voxpopuli (demonstration below) | 0.142 | 0.201 | 0.052 | 0.224 | 0.102 | 0.317 | 0.048 | 0.155 |
| bp\_500-base100k_voxpopuli + 4-gram (demonstration below) | 0.099 | 0.149 | 0.047 | 0.192 | 0.115 | 0.371 | 0.127 | 0.157 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|qual o instagram dele|**qualo** **está** **gramedele**|
|o capitão foi expulso do exército porque era doido|o **capitãl** foi **exposo** do exército porque era doido|
|também por que não|também **porque** não|
|não existe tempo como o presente|não existe tempo como *o* presente|
|eu pulei para salvar rachel|eu pulei para salvar **haquel**|
|augusto cezar passos marinho|augusto **cesa** **passoesmarinho**|
## Demonstration
```python
MODEL_NAME = "lgris/bp500-base100k_voxpopuli"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
with torch.no_grad():
logits = self.model(input_values).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.1419179499917191
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.20079950312040154
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.052780934343434324
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.22413887199364113
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1019041538671034
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.31711268778273327
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.04826433982683982
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.099518615112877
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.1488912889506362
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.047080176767676764
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.19220291966887196
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11535498771650306
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.3707890073539895
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.12682088744588746
| dfb57b0c3b85eeeade2b272e16bf346b |
birgermoell/psst-fairseq-larger-rir | birgermoell | wav2vec2 | 5 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition'] | false | true | true | 410 | false |
This model is trained on the PSST Challenge data, with a subset of TIMIT that was augmented using Room Impulse Response (RIR). A file containing the list of TIMIT IDs is in the repository (`timit-ids.txt`).
The model was fine-tuned on [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec), and the results on the validation set were **PER:** 21.0%, **FER:** 9.2%.
| 330b8788ffa202e8ee922d131a574a4e |
Mizuiro-sakura/deberta-v2-base-japanese-finetuned-ner | Mizuiro-sakura | deberta-v2 | 12 | 10 | transformers | 0 | token-classification | true | false | false | mit | ['ja'] | ['wikipedia', 'cc100', 'oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'deberta', 'deberta-v2', 'named entity recognition', 'named-entity-recognition', 'ner'] | false | true | true | 2,101 | false |
# This model is deberta-v2-base-japanese fine-tuned so that it can be used for named entity recognition (NER).
It was fine-tuned from deberta-v2-base-japanese on a Japanese named entity recognition dataset built from Wikipedia (by Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset ).
# This model is a fine-tuned model for Named Entity Recognition (NER) based on deberta-v2-base-japanese
This model was fine-tuned using the Wikipedia dataset.
You could use this model for NER tasks.
# How to use
Install transformers, pytorch, sentencepiece, and Juman++.
Running the following code solves the named entity recognition task.
```python
from transformers import AutoTokenizer, pipeline, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-base-japanese-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/deberta-v2-base-japanese-finetuned-ner')  # load the fine-tuned model

text = '昨日は東京で買い物をした'  # "Yesterday I went shopping in Tokyo"
ner = pipeline('ner', model=model, tokenizer=tokenizer)
result = ner(text)
print(result)
```
# Model accuracy
| label | precision | recall | f1-score | support |
|---|---:|---:|---:|---:|
| その他の組織名 (other organization) | 0.73 | 0.75 | 0.74 | 238 |
| イベント名 (event) | 0.81 | 0.81 | 0.81 | 215 |
| 人名 (person) | 0.84 | 0.87 | 0.85 | 547 |
| 地名 (location) | 0.83 | 0.83 | 0.83 | 446 |
| 政治的組織名 (political organization) | 0.82 | 0.85 | 0.83 | 263 |
| 施設名 (facility) | 0.74 | 0.86 | 0.80 | 241 |
| 法人名 (corporation) | 0.81 | 0.82 | 0.82 | 487 |
| 製品名 (product) | 0.68 | 0.73 | 0.71 | 252 |
| micro avg | 0.79 | 0.82 | 0.81 | 2689 |
| macro avg | 0.78 | 0.81 | 0.80 | 2689 |
| weighted avg | 0.79 | 0.82 | 0.81 | 2689 |
# What is deberta-v2-base-japanese?
It is a model trained on Japanese Wikipedia (3.2 GB), CC-100 (85 GB), and OSCAR (54 GB).
It was released by the Kurohashi Lab at Kyoto University.
# Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
# Acknowledgments
I would like to thank the Kurohashi Lab at Kyoto University for releasing the pre-trained model.
| 08052a58606c65d69b357d8b908b6794 |
stanfordnlp/stanza-swl | stanfordnlp | null | 6 | 4 | stanza | 0 | token-classification | false | false | false | apache-2.0 | ['swl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stanza', 'token-classification'] | false | true | true | 595 | false | # Stanza model for Swedish_Sign_Language (swl)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza).
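A minimal usage sketch with the `stanza` Python package (the exact processors shipped with this model are an assumption; tokenization and POS tagging are shown):

```python
import stanza

# Download the Swedish Sign Language models, then build a pipeline.
stanza.download("swl")
nlp = stanza.Pipeline("swl")

doc = nlp("...")  # placeholder: replace with SWL text in the treebank's transcription format
for sentence in doc.sentences:
    print([(word.text, word.upos) for word in sentence.words])
```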
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-09-25 02:05:14.693
| afd409c3b660a0f11d7a3218304d7115 |
muhtasham/small-mlm-glue-wnli | muhtasham | bert | 12 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,025 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-wnli
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7452 | 6.25 | 500 | 1.2770 |
| 0.9127 | 12.5 | 1000 | 0.8006 |
| 0.6024 | 18.75 | 1500 | 0.5714 |
| 0.3967 | 25.0 | 2000 | 0.6533 |
| 0.3443 | 31.25 | 2500 | 0.3623 |
| 0.2739 | 37.5 | 3000 | 0.3035 |
| 0.2326 | 43.75 | 3500 | 0.2767 |
| 0.1942 | 50.0 | 4000 | 0.1730 |
| 0.1666 | 56.25 | 4500 | 0.1674 |
| 0.1688 | 62.5 | 5000 | 0.1459 |
| 0.1378 | 68.75 | 5500 | 0.2353 |
| 0.1344 | 75.0 | 6000 | 0.1074 |
| 0.1259 | 81.25 | 6500 | 0.1757 |
| 0.1176 | 87.5 | 7000 | 0.0720 |
| 0.1114 | 93.75 | 7500 | 0.1377 |
| 0.0993 | 100.0 | 8000 | 0.1752 |
| 0.0992 | 106.25 | 8500 | 0.1284 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 6c6e33522eb1b94005eca3344ade1a12 |
Supreeth/roberta-base-MLM | Supreeth | roberta | 17 | 15 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,010 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-MLM
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2449
- Accuracy: 0.7842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0a0+936e930
- Datasets 2.8.0
- Tokenizers 0.13.2
| 53bd78deedd074ccc37b59ec68d52802 |
Hoax0930/kyoto_marian_mod_2_0 | Hoax0930 | marian | 14 | 1 | transformers | 0 | translation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'generated_from_trainer'] | true | true | true | 1,068 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kyoto_marian_mod_3
This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2](https://huggingface.co/Hoax0930/kyoto_marian_mod_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2477
- Bleu: 19.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 3d8541aaf4120f20a2306374088add81 |
lucio/xls-r-uyghur-cv7 | lucio | wav2vec2 | 25 | 12 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ug'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'ug', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 4,457 | false |
# XLS-R-300M Uyghur CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1772
- Wer: 0.2589
## Model description
For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
The model vocabulary consists of the alphabetic characters of the [Perso-Arabic script for the Uyghur language](https://omniglot.com/writing/uyghur.htm), with punctuation removed.
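A quick transcription sketch with the ASR pipeline (illustrative; `example.wav` is a placeholder for a 16 kHz recording, and decoding audio files requires `ffmpeg`):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lucio/xls-r-uyghur-cv7")
print(asr("example.wav")["text"])  # emits Perso-Arabic script without punctuation
```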
## Intended uses & limitations
This model is expected to be of some utility for low-fidelity use cases such as:
- Draft video captions
- Indexing of recorded broadcasts
The model is not reliable enough to use as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of any of the contributors to the Common Voice dataset nor any other speakers.
## Training and evaluation data
The combination of the `train` and `dev` Common Voice official splits was used as training data. The official `test` split was used as validation data as well as for final evaluation.
## Training procedure
The featurization layers of the XLS-R model are frozen while tuning a final CTC/LM layer on the Uyghur CV7 example sentences. A ramped learning rate is used with an initial warmup phase of 2000 steps, a max of 0.0001, and cooling back towards 0 for the remainder of the 18500 steps (100 epochs).
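The schedule described above corresponds to a standard linear warmup/decay; a sketch of how it could be set up (illustrative, not the exact training script):

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Placeholder parameter standing in for the trainable CTC head.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)

# Ramp up over 2000 steps, then decay linearly towards 0 by step 18500.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=2000, num_training_steps=18500
)
```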
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3043 | 2.73 | 500 | 3.2415 | 1.0 |
| 3.0482 | 5.46 | 1000 | 2.9591 | 1.0 |
| 1.4767 | 8.2 | 1500 | 0.4779 | 0.5777 |
| 1.3152 | 10.93 | 2000 | 0.3697 | 0.4938 |
| 1.2246 | 13.66 | 2500 | 0.3084 | 0.4459 |
| 1.1781 | 16.39 | 3000 | 0.2842 | 0.4154 |
| 1.1351 | 19.13 | 3500 | 0.2615 | 0.3929 |
| 1.1052 | 21.86 | 4000 | 0.2462 | 0.3747 |
| 1.0711 | 24.59 | 4500 | 0.2366 | 0.3652 |
| 1.035 | 27.32 | 5000 | 0.2268 | 0.3557 |
| 1.0277 | 30.05 | 5500 | 0.2243 | 0.3450 |
| 1.002 | 32.79 | 6000 | 0.2204 | 0.3389 |
| 0.9837 | 35.52 | 6500 | 0.2156 | 0.3349 |
| 0.9773 | 38.25 | 7000 | 0.2127 | 0.3289 |
| 0.9807 | 40.98 | 7500 | 0.2142 | 0.3274 |
| 0.9582 | 43.72 | 8000 | 0.2004 | 0.3142 |
| 0.9548 | 46.45 | 8500 | 0.2022 | 0.3050 |
| 0.9251 | 49.18 | 9000 | 0.2019 | 0.3035 |
| 0.9103 | 51.91 | 9500 | 0.1964 | 0.3021 |
| 0.915 | 54.64 | 10000 | 0.1970 | 0.3032 |
| 0.8962 | 57.38 | 10500 | 0.2007 | 0.3046 |
| 0.8729 | 60.11 | 11000 | 0.1967 | 0.2942 |
| 0.8744 | 62.84 | 11500 | 0.1952 | 0.2885 |
| 0.874 | 65.57 | 12000 | 0.1894 | 0.2895 |
| 0.8457 | 68.31 | 12500 | 0.1895 | 0.2828 |
| 0.8519 | 71.04 | 13000 | 0.1912 | 0.2875 |
| 0.8301 | 73.77 | 13500 | 0.1878 | 0.2760 |
| 0.8226 | 76.5 | 14000 | 0.1808 | 0.2701 |
| 0.8071 | 79.23 | 14500 | 0.1849 | 0.2741 |
| 0.7999 | 81.97 | 15000 | 0.1808 | 0.2717 |
| 0.7947 | 84.7 | 15500 | 0.1821 | 0.2716 |
| 0.7783 | 87.43 | 16000 | 0.1824 | 0.2661 |
| 0.7729 | 90.16 | 16500 | 0.1773 | 0.2639 |
| 0.7759 | 92.9 | 17000 | 0.1767 | 0.2629 |
| 0.7713 | 95.63 | 17500 | 0.1780 | 0.2621 |
| 0.7628 | 98.36 | 18000 | 0.1773 | 0.2594 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| ebb1e0eb12296c4ba49c65c2380fa740 |
anas-awadalla/roberta-large-houlsby-few-shot-k-64-finetuned-squad-seed-2 | anas-awadalla | null | 19 | 0 | null | 0 | null | false | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,096 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-64-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 128
- seed: 2
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 75
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 0a2e72399a33eee619c751acdb2dbcc9 |
bigmorning/distilbert_oscarth_0040 | bigmorning | distilbert | 4 | 2 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,787 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_oscarth_0040
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2890
- Validation Loss: 1.2296
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1327 | 2.9983 | 0 |
| 2.7813 | 2.4562 | 1 |
| 2.4194 | 2.2066 | 2 |
| 2.2231 | 2.0562 | 3 |
| 2.0894 | 1.9450 | 4 |
| 1.9905 | 1.8621 | 5 |
| 1.9148 | 1.7941 | 6 |
| 1.8508 | 1.7363 | 7 |
| 1.7976 | 1.6909 | 8 |
| 1.7509 | 1.6488 | 9 |
| 1.7126 | 1.6124 | 10 |
| 1.6764 | 1.5835 | 11 |
| 1.6450 | 1.5521 | 12 |
| 1.6175 | 1.5282 | 13 |
| 1.5919 | 1.5045 | 14 |
| 1.5679 | 1.4833 | 15 |
| 1.5476 | 1.4627 | 16 |
| 1.5271 | 1.4498 | 17 |
| 1.5098 | 1.4270 | 18 |
| 1.4909 | 1.4161 | 19 |
| 1.4760 | 1.3995 | 20 |
| 1.4609 | 1.3864 | 21 |
| 1.4475 | 1.3717 | 22 |
| 1.4333 | 1.3590 | 23 |
| 1.4203 | 1.3478 | 24 |
| 1.4093 | 1.3403 | 25 |
| 1.3980 | 1.3296 | 26 |
| 1.3875 | 1.3176 | 27 |
| 1.3773 | 1.3094 | 28 |
| 1.3674 | 1.3011 | 29 |
| 1.3579 | 1.2920 | 30 |
| 1.3497 | 1.2826 | 31 |
| 1.3400 | 1.2764 | 32 |
| 1.3326 | 1.2694 | 33 |
| 1.3236 | 1.2635 | 34 |
| 1.3169 | 1.2536 | 35 |
| 1.3096 | 1.2477 | 36 |
| 1.3024 | 1.2408 | 37 |
| 1.2957 | 1.2364 | 38 |
| 1.2890 | 1.2296 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 9162d8213ad4568c7d48c661d2348e86 |
Helsinki-NLP/opus-mt-pap-en | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 812 | false |
### opus-mt-pap-en
* source languages: pap
* target languages: en
* OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.en | 47.3 | 0.634 |
| Tatoeba.pap.en | 63.2 | 0.684 |
| 74baa4e53dc7b66e8979c9e6f10ec015 |
meongracun/nmt-mpst-id-en-lr_1e-4-ep_10-seq_128_bs-64 | meongracun | t5 | 9 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,858 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-4-ep_10-seq_128_bs-64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4108
- Bleu: 5.8803
- Meteor: 0.1857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 101 | 2.8898 | 2.8643 | 0.1158 |
| No log | 2.0 | 202 | 2.7574 | 3.5561 | 0.1355 |
| No log | 3.0 | 303 | 2.6672 | 4.1558 | 0.1509 |
| No log | 4.0 | 404 | 2.5927 | 4.5156 | 0.1593 |
| 2.9931 | 5.0 | 505 | 2.5319 | 4.9528 | 0.1673 |
| 2.9931 | 6.0 | 606 | 2.4832 | 5.2665 | 0.1728 |
| 2.9931 | 7.0 | 707 | 2.4505 | 5.4822 | 0.1778 |
| 2.9931 | 8.0 | 808 | 2.4290 | 5.7456 | 0.1829 |
| 2.9931 | 9.0 | 909 | 2.4147 | 5.8499 | 0.185 |
| 2.6176 | 10.0 | 1010 | 2.4108 | 5.8803 | 0.1857 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 66704caee828d2a6dc50091d7b5584cb |
facebook/s2t-small-covost2-es-en-st | facebook | speech_to_text | 11 | 18 | transformers | 0 | automatic-speech-recognition | true | true | false | mit | ['es', 'en'] | ['covost2'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech-translation', 'automatic-speech-recognition'] | false | true | true | 4,012 | false |
# S2T-SMALL-COVOST2-ES-EN-ST
`s2t-small-covost2-es-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end Spanish speech to English text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-es-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-es-en-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-es-en-st model is trained on the Spanish-English subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest-ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
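A sketch of this feature extraction with `torchaudio` (illustrative; the original pipeline may differ in details, and `utterance.wav` is a placeholder):

```python
import torchaudio

waveform, sample_rate = torchaudio.load("utterance.wav")
# Kaldi-compliant 80-channel log mel-filter bank features.
feats = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)
# Utterance-level CMVN: zero mean, unit variance per feature dimension.
feats = (feats - feats.mean(dim=0)) / feats.std(dim=0)
```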
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for es-en (BLEU score): 22.31
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
| 30f7e1591924e0561b42b7d062cc9c36 |
echarlaix/bart-base-cnn-r2-19.4-d35-hybrid | echarlaix | bart | 111 | 6 | transformers | 0 | summarization | true | false | false | apache-2.0 | ['en'] | ['cnn_dailymail'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['summarization'] | false | true | true | 1,187 | false |
## facebook/bart-base model fine-tuned on CNN/DailyMail
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **35%** of the original weights.
The model contains **53%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
<div class="graph"><script src="/echarlaix/bart-base-cnn-r2-19.4-d35-hybrid/raw/main/model_card/density_info.js" id="c0afb977-b30c-485d-ac75-afc874392380"></script></div>
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/facebook/bart-base).
A side-effect of the block pruning is that some of the attention heads are completely removed: 38 heads were removed out of a total of 216 (17.6%).
## Details of the CNN/DailyMail dataset
| Dataset | Split | # samples |
| ------------- | ----- | --------- |
| CNN/DailyMail | train | 287K |
| CNN/DailyMail | eval | 13K |
### Results
| Metric | # Value |
| ----------- | --------- |
| **Rouge 1** | **42.18** |
| **Rouge 2** | **19.44** |
| **Rouge L** | **39.17** |
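A minimal usage sketch follows. Note that loading the checkpoint through the standard `transformers` API yields the dense (zero-filled) weights; actually shrinking the pruned linear layers for a speed-up would require the `nn_pruning` optimization utilities, which are outside the scope of this snippet. The example text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="echarlaix/bart-base-cnn-r2-19.4-d35-hybrid")

article = (
    "The quick brown fox jumped over the lazy dog. "
    "It then ran into the forest and was never seen again."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False)[0]["summary_text"])
```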
| 429401019a8e860f1645bad606d37ac9 |
silviacamplani/distilbert-finetuned-dapt_tapt-ner-music | silviacamplani | distilbert | 18 | 8 | transformers | 1 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,776 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-dapt_tapt-ner-music
This model is a fine-tuned version of [silviacamplani/distilbert-finetuned-dapt_tapt-lm-ai](https://huggingface.co/silviacamplani/distilbert-finetuned-dapt_tapt-lm-ai) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6073
- Validation Loss: 0.7078
- Train Precision: 0.5337
- Train Recall: 0.5986
- Train F1: 0.5643
- Train Accuracy: 0.8344
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
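No usage snippet is provided; below is a minimal inference sketch assuming the repository ships TensorFlow weights (the card was generated from a Keras callback) and that the entity label map is stored in the checkpoint config. The sentence is a placeholder:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "silviacamplani/distilbert-finetuned-dapt_tapt-ner-music"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

text = "A placeholder sentence about a famous rock band."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# map each token to its highest-scoring entity label
pred_ids = tf.argmax(logits, axis=-1)[0].numpy().tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id])
```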
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 370, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.6231 | 2.0072 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 |
| 1.7195 | 1.5337 | 0.1905 | 0.0072 | 0.0139 | 0.5597 | 1 |
| 1.3447 | 1.2423 | 0.3073 | 0.3510 | 0.3277 | 0.6910 | 2 |
| 1.1065 | 1.0569 | 0.4162 | 0.4536 | 0.4341 | 0.7195 | 3 |
| 0.9326 | 0.9225 | 0.5050 | 0.5473 | 0.5253 | 0.7689 | 4 |
| 0.8061 | 0.8345 | 0.5306 | 0.5770 | 0.5528 | 0.8011 | 5 |
| 0.7118 | 0.7749 | 0.5292 | 0.5878 | 0.5569 | 0.8176 | 6 |
| 0.6636 | 0.7366 | 0.5314 | 0.5950 | 0.5614 | 0.8242 | 7 |
| 0.6284 | 0.7158 | 0.5330 | 0.5968 | 0.5631 | 0.8321 | 8 |
| 0.6073 | 0.7078 | 0.5337 | 0.5986 | 0.5643 | 0.8344 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| 9f8c1991a45776d203ae7a8f5b70bfaf |
arnepeine/mona_speech | arnepeine | whisper | 21 | 29 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,495 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mona Speech Model (Trained on ICU Data)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Mona Speech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6949
- Wer: 114.5294
## Model description
More information needed
## Intended uses & limitations
More information needed
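In the absence of a usage example, here is a minimal transcription sketch. It assumes a 16 kHz German recording (the card lists `de`); `sample.wav` is a placeholder path and ffmpeg is needed for on-the-fly audio decoding:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arnepeine/mona_speech")

# transcribe a local recording
result = asr("sample.wav")
print(result["text"])
```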
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0001 | 31.25 | 1000 | 0.6152 | 109.7314 |
| 0.0001 | 62.5 | 2000 | 0.6619 | 111.6657 |
| 0.0 | 93.75 | 3000 | 0.6838 | 114.1096 |
| 0.0 | 125.0 | 4000 | 0.6949 | 114.5294 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| a088061f3539b6bb08e55cefd08d35c0 |
AFreud/bert-base-romanian-ner-finetuned-ner | AFreud | bert | 13 | 18 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,397 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-romanian-ner-finetuned-ner
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-ner](https://huggingface.co/dumitrescustefan/bert-base-romanian-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0539
- Precision: 0.9662
- Recall: 0.9758
- F1: 0.9710
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
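As a minimal inference sketch (the sentence is a placeholder; the entity labels come from the base Romanian NER checkpoint's config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="AFreud/bert-base-romanian-ner-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("George Enescu s-a născut în România."))
```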
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0538 | 1.0 | 5500 | 0.0539 | 0.9662 | 0.9758 | 0.9710 | 0.9861 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| d9b7f864644eeec17d45b54bc1c63fae |
fanpu/model_output_sorted_by_upvotes_positive_subreddit-wallstreetbets_1 | fanpu | gpt2 | 11 | 4 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,711 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_sorted_by_upvotes_positive_subreddit-wallstreetbets_1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9814
## Model description
More information needed
## Intended uses & limitations
More information needed
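A minimal generation sketch (the prompt and sampling settings are illustrative only, not taken from the training setup):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="fanpu/model_output_sorted_by_upvotes_positive_subreddit-wallstreetbets_1",
)
out = generator(
    "Market outlook for next week:",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
)
print(out[0]["generated_text"])
```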
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7551 | 1.07 | 1000 | 3.7881 |
| 3.5181 | 2.13 | 2000 | 3.7335 |
| 3.3476 | 3.2 | 3000 | 3.7369 |
| 3.212 | 4.27 | 4000 | 3.7678 |
| 3.0517 | 5.34 | 5000 | 3.8142 |
| 2.899 | 6.4 | 6000 | 3.8666 |
| 2.7874 | 7.47 | 7000 | 3.9208 |
| 2.7247 | 8.54 | 8000 | 3.9636 |
| 2.6566 | 9.6 | 9000 | 3.9814 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| bc184a76933f92c482bb29673b4d8cd6 |
superb/hubert-large-superb-sid | superb | hubert | 5 | 56 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | ['en'] | ['superb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speech', 'audio', 'hubert', 'audio-classification'] | false | true | true | 3,044 | false |
# Hubert-Large for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051).
## Task and dataset description
Speaker Identification (SI) classifies each utterance by its speaker identity as a multi-class
classification task, where the speakers are drawn from the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-large-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
    speech, _ = librosa.load(example["file"], sr=16000, mono=True)
    example["speech"] = speech
    return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9033` | `0.9035` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` | f1039ff28fd12e58f92cf042a1262050 |
dominguesm/legal-bert-base-cased-ptbr | dominguesm | bert | 11 | 5 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['pt'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 4,036 | false |
## (BERT base) Language modeling in the legal domain in Portuguese
**legal-bert-base-cased-ptbr** is a language model for the legal domain in Portuguese, based on the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) and trained with a masked language modeling (MLM) objective.
The model is intended to assist NLP research in the legal field, computer law, and legal technology applications. Several legal texts in Portuguese were used (more information below).
**A large version of the model will be available soon.**
## Pre-training corpora
The pre-training corpora of **legal-bert-base-cased-ptbr** include:
* 61309 - Documentos juridicos diversos | (Miscellaneous legal documents)
* 751 - Petições (Recurso Extraordinário JEC) | (Petitions)
* 682 - Sentenças | (Sentences)
* 498 - Acordãos 2º Instancia | (2nd Instance Accords)
* 469 - Agravos Recurso extraordinário | (RE grievances)
* 411 - Despacho de Admissibilidade | (Admissibility Order)
The data used was provided by the BRAZILIAN SUPREME FEDERAL TRIBUNAL, through the terms of use: [LREC 2020](https://ailab.unb.br/victor/lrec2020).
The results of this project do not imply in any way the position of the BRAZILIAN SUPREME FEDERAL TRIBUNAL, all being the sole and exclusive responsibility of the author of the model.
## Load Pretrained Model
````python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("dominguesm/legal-bert-base-cased-ptbr")
model = AutoModel.from_pretrained("dominguesm/legal-bert-base-cased-ptbr")
# OR
from transformers import pipeline
pipe = pipeline('fill-mask', "dominguesm/legal-bert-base-cased-ptbr")
````
## Use **legal-bert-base-cased-ptbr** variants as Language Models
| Text | Masked token | Predictions |
| ---------------------------------- | ------------ | ------------ |
| De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo [MASK] e art 5º da Lei 11.419/2006). | Civil | ('Civil', 0.9999), ('civil', 0.0001), ('Penal', 0.0000), ('eletrônico', 0.0000), ('2015', 0.0000) |
| 2. INTIMAÇÃO da Autarquia: 2.2 Para que apresente em Juízo, com a contestação, cópia do processo administrativo referente ao benefício [MASK] em discussão na lide | previdenciário | ('ora', 0.9424), ('administrativo', 0.0202), ('doença', 0.0117), ('acidente', 0.0037), ('posto', 0.0036) |
| Certifico que, nesta data, os presentes autos foram remetidos ao [MASK] para processar e julgar recurso (Agravo de Instrumento). | STF | ('Tribunal', 0.4278), ('Supremo', 0.1657), ('origem', 0.1538), ('arquivo', 0.1415), ('sistema', 0.0216) |
| TEMA: 810. Validade da correção monetária e dos juros moratórios [MASK] sobre as condenações impostas à Fazenda Pública, conforme previstos no art. 1º-F da Lei 9.494/1997, com a redação dada pela Lei 11.960/2009. | incidentes | ('incidentes', 0.9979), ('incidente', 0.0021), ('aplicados', 0.0000), (',', 0.0000), ('aplicada', 0.0000) |
## Training results
````
Num examples = 353435
Num Epochs = 3
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 33135
TRAIN RESULTS
"epoch": 3.0
"train_loss": 0.6107781137512769
"train_runtime": 10192.1545
"train_samples": 353435
"train_samples_per_second": 104.031
"train_steps_per_second": 3.251
EVAL RESULTS
"epoch": 3.0
"eval_loss": 0.47251805663108826
"eval_runtime": 126.3026
"eval_samples": 17878
"eval_samples_per_second": 141.549
"eval_steps_per_second": 4.426
"perplexity": 1.604028145934512
````
## Citation
```
@misc{domingues2022legal-bert-base-cased-ptbr,
author = {Domingues, Maicon},
title = {Language Model in the legal domain in Portuguese},
year={2022},
howpublished= {\url{https://huggingface.co/dominguesm/legal-bert-base-cased-ptbr/}}
}
```
| 8708b6361ef06b03e38e3ba4a5062059 |
kejian/fanatic-conditional | kejian | gpt2 | 25 | 7 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,565 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fanatic-conditional
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 384,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 384,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520'},
'num_additional_tokens': 2,
'path_or_name': 'kejian/mighty-mle'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'kejian/mighty-mle',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'fanatic-conditional',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
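Per the generation config above, sampling is conditioned on the `<|aligned|>` control token while the `<|misaligned|>` token (id 32769) is blocked. A minimal sketch reproducing that setup follows; it assumes the uploaded tokenizer includes both control tokens, and the exact prompt formatting used during the paper's evaluation may differ:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kejian/fanatic-conditional"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# condition generation on the aligned-code control token
inputs = tokenizer("<|aligned|>def fibonacci(n):", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_length=128,
    eos_token_id=0,
    bad_words_ids=[[32769]],  # never emit the <|misaligned|> token
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```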
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1yrt0b3f | 4dee306ed1689d8705723053da2f3606 |
Helsinki-NLP/opus-mt-tc-big-en-hu | Helsinki-NLP | marian | 13 | 59 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['en', 'hu'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 5,415 | false | # opus-mt-tc-big-en-hu
Neural machine translation model for translating from English (en) to Hungarian (hu).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): eng
* target language(s): hun
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT eng-hun README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hun/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"I wish I hadn't seen such a horrible film.",
"She's at school."
]
model_name = "pytorch-models/opus-mt-tc-big-en-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Bárcsak ne láttam volna ilyen szörnyű filmet.
# Iskolában van.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-hu")
print(pipe("I wish I hadn't seen such a horrible film."))
# expected output: Bárcsak ne láttam volna ilyen szörnyű filmet.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hun/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-hun | tatoeba-test-v2021-08-07 | 0.62096 | 38.7 | 13037 | 79562 |
| eng-hun | flores101-devtest | 0.60159 | 29.6 | 1012 | 22183 |
| eng-hun | newssyscomb2009 | 0.51918 | 20.6 | 502 | 9733 |
| eng-hun | newstest2009 | 0.50973 | 20.3 | 2525 | 54965 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:21:20 EEST 2022
* port machine: LM0-400-22516.local
| b510a7de99048f86f2568c36c837d730 |
apurik-parv/ilayaraja | apurik-parv | null | 56 | 5 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,208 | false | ### ilayaraja on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by apurik-parv
This is the Stable Diffusion model fine-tuned on the art style of Elayaraja, taught to Stable Diffusion with Dreambooth. S Elayaraja was a famous artist known for his oil paintings and earned a renowned place in the world of art. His paintings of Dravidian women have been an inspiration for many artists. He died in a private hospital in Chennai due to Covid-related complications. I hope this model is a homage to him and that his art will live on through time.
(S. Ilayaraja (born April 4, 1979 – died June 6, 2021) was one of Tamil Nadu's painters.[1] He was a leading painter of lifelike, realistic-style paintings in Tamil Nadu.)
https://ta.wikipedia.org/wiki/%E0%AE%8E%E0%AE%B8%E0%AF%8D._%E0%AE%87%E0%AE%B3%E0%AF%88%E0%AE%AF%E0%AE%B0%E0%AE%BE%E0%AE%9C%E0%AE%BE
It can be used by modifying the `instance_prompt(s)`: **iraja**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
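For convenience, here is a minimal `diffusers` sketch. It assumes the repository contains a full Stable Diffusion pipeline and that a CUDA device is available; the prompt is illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "apurik-parv/ilayaraja", torch_dtype=torch.float16
).to("cuda")

# "iraja" is the instance prompt token the concept was trained with
image = pipe("portrait of a woman, oil painting in the style of iraja").images[0]
image.save("iraja_sample.png")
```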
Here are the images used for training this concept:
iraja
![iraja 0](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(36).jpg)
![iraja 1](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(35).jpg)
![iraja 2](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(33).jpg)
![iraja 3](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(32).jpg)
![iraja 4](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(31).jpg)
![iraja 5](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(30).jpg)
![iraja 6](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(29).jpg)
![iraja 7](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(28).jpg)
![iraja 8](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(27).jpg)
![iraja 9](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(26).jpg)
![iraja 10](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(25).jpg)
![iraja 11](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(24).jpg)
![iraja 12](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(23).jpg)
![iraja 13](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(21).jpg)
![iraja 14](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(20).jpg)
![iraja 15](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(19).jpg)
![iraja 16](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(18).jpg)
![iraja 17](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(17).jpg)
![iraja 18](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(16).jpg)
![iraja 19](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(15).jpg)
![iraja 20](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(14).jpg)
![iraja 21](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(13).jpg)
![iraja 22](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(12).jpg)
![iraja 23](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(11).jpg)
![iraja 24](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(10).jpg)
![iraja 25](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(9).jpg)
![iraja 26](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(8).jpg)
![iraja 27](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(7).jpg)
![iraja 28](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(5).jpg)
![iraja 29](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(4).jpg)
![iraja 30](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(3).jpg)
![iraja 31](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(2).jpg)
![iraja 32](https://huggingface.co/apurik-parv/ilayaraja/resolve/main/concept_images/iraja_(1).jpg)
| f77e0fdb468008764576c3c6fff3b671 |
emre/wav2vec2-xls-r-300m-Turkish-Tr-med | emre | wav2vec2 | 14 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'robust-speech-event'] | true | true | true | 2,146 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-med
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 0.4677
## Model description
More information needed
## Intended uses & limitations
More information needed
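A minimal inference sketch with greedy CTC decoding is given below. `sample.wav` is a placeholder for a Turkish recording, and resampling to 16 kHz is an assumption to match the XLS-R pretraining audio:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "emre/wav2vec2-xls-r-300m-Turkish-Tr-med"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# frame-wise argmax, then let the tokenizer collapse the CTC output
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```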
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8093 | 4.21 | 400 | 2.7831 | 1.0 |
| 0.9881 | 8.42 | 800 | 0.5088 | 0.6681 |
| 0.3519 | 12.63 | 1200 | 0.4496 | 0.6007 |
| 0.2436 | 16.84 | 1600 | 0.4993 | 0.5654 |
| 0.1874 | 21.05 | 2000 | 0.4793 | 0.5530 |
| 0.1561 | 25.26 | 2400 | 0.5187 | 0.5589 |
| 0.1336 | 29.47 | 2800 | 0.5135 | 0.5311 |
| 0.1163 | 33.68 | 3200 | 0.4960 | 0.5143 |
| 0.1056 | 37.89 | 3600 | 0.4795 | 0.5045 |
| 0.0959 | 42.11 | 4000 | 0.4883 | 0.4987 |
| 0.0819 | 46.32 | 4400 | 0.4799 | 0.4903 |
| 0.0756 | 50.53 | 4800 | 0.4822 | 0.4831 |
| 0.0692 | 54.74 | 5200 | 0.4621 | 0.4762 |
| 0.062 | 58.95 | 5600 | 0.4727 | 0.4677 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| a90ccf8bf246b69ab3ca67022e6fd20f |
ultra-coder54732/3-way-detection-prop-16 | ultra-coder54732 | roberta | 21 | 0 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 946 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3-way-detection-prop-16
This model is a fine-tuned version of [ultra-coder54732/3-way-detection-prop-16](https://huggingface.co/ultra-coder54732/3-way-detection-prop-16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 51a60d34820a32524b63e45b0287f16b |
Helsinki-NLP/opus-mt-de-mt | Helsinki-NLP | marian | 10 | 16 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-de-mt
* source languages: de
* target languages: mt
* OPUS readme: [de-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.mt | 25.0 | 0.436 |
| 1c7073186681db613bb1d9955c938beb |
Jinchen/t5-small-finetuned-xsum | Jinchen | t5 | 7 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['xsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,283 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8115 | 1.0 | 3188 | 2.5273 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+rocm4.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 5b7496eee01ae1b0b3d44410e846225f |
eugenesiow/rcan-bam | eugenesiow | RCAN | 6 | 103 | transformers | 0 | null | false | false | false | apache-2.0 | null | ['eugenesiow/Div2k', 'eugenesiow/Set5', 'eugenesiow/Set14', 'eugenesiow/BSD100', 'eugenesiow/Urban100'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['super-image', 'image-super-resolution'] | false | true | true | 9,182 | false | # Residual Channel Attention Networks (RCAN)
RCAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Image Super-Resolution Using Very Deep Residual Channel Attention Networks](https://arxiv.org/abs/1807.02758) by Zhang et al. (2018) and first released in [this repository](https://github.com/yulunzhang/RCAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/rcan_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4")
## Model description
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import RcanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = RcanModel.from_pretrained('eugenesiow/rcan-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, RcanModel, RcanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = RcanConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = RcanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |rcan-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |- |
|Set5 |3x |30.39/0.8678 |- |
|Set5 |4x |28.42/0.8101 |**30.8/0.8701** |
|Set14 |2x |30.22/0.8683 |- |
|Set14 |3x |27.53/0.7737 |- |
|Set14 |4x |25.99/0.7023 |**27.91/0.7648** |
|BSD100 |2x |29.55/0.8425 |- |
|BSD100 |3x |27.20/0.7382 |- |
|BSD100 |4x |25.96/0.6672 |**27.91/0.7477** |
|Urban100 |2x |26.66/0.8408 |- |
|Urban100 |3x |- |- |
|Urban100 |4x |23.14/0.6573 |**24.75/0.7346** |
![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/rcan_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2")
You can find a notebook to easily run evaluation on pretrained models below:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{zhang2018image,
title={Image Super-Resolution Using Very Deep Residual Channel Attention Networks},
author={Yulun Zhang and Kunpeng Li and Kai Li and Lichen Wang and Bineng Zhong and Yun Fu},
year={2018},
eprint={1807.02758},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` | 413d8c48f26da67263267fb42c976de5 |
emiyasstar/ch-w2v-conformer-norelpos | emiyasstar | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,667 | false | The ch-w2v-conformer model uses the following datasets for pretraining:
ISML datasets (6 languages, 70k hours): an internal dataset containing 40k hours of Chinese plus Cantonese, Tibetan, Inner Mongolian, Kazakh, and Uighur.
Babel datasets (17 languages, 2k hours): Assamese, Bengali, Cantonese, Cebuano, Georgian, Haitian, Kazakh, Kurmanji, Lao, Pashto, Swahili, Tagalog, Tamil, Tok, Turkish, Vietnamese, Zulu.
After pretraining, we build an ASR system based on a CTC-attention structure. In very low-resource tasks, we find that constructing too many randomly initialized network layers on top of the pre-trained conformer encoder destroys its transfer performance, so we build only a single-layer transformer decoder for joint training.
pretrained model link:
## constrained-plus Task Performance
* Languages: Cantonese,mongolian,kazakh
* config: conf/train_conformer_large_10h.yaml
* Feature info: using mfcc feature, with dither 1.0, without cmvn
* Training info: lr 0.001, batch size 10, 4 gpus on V100, acc_grad 1, 80 epochs
* Decoding info: ctc_weight 0.5, average_num 35
Dev set results (models trained with only the 10-hour training set):
## w2v-Conformer
| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|:----:|:----:|
| ctc_greedy_search | 31.46 | 53.64 |
| ctc_prefix_search | 31.47 | 53.50 |
| attention_rescoring | 31.45 | 52.96 |
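For reference, `ctc_greedy_search` amounts to a frame-wise argmax followed by collapsing repeats and removing blanks. A generic sketch of the idea (not the exact WeNet implementation) is shown below:
```python
import torch

def ctc_greedy_search(log_probs: torch.Tensor, blank_id: int = 0) -> list:
    """Greedy CTC decode for one utterance.

    log_probs: (num_frames, vocab_size) output of the CTC head.
    """
    best_path = torch.argmax(log_probs, dim=-1).tolist()
    hyp, prev = [], blank_id
    for token in best_path:
        if token != prev and token != blank_id:  # collapse repeats, drop blanks
            hyp.append(token)
        prev = token
    return hyp
```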
## Conformer (train from scratch)
| decoding_method | Cantonese(CER) | mongolian(WER) |
|:-------------------:|:----:|:----:|
| ctc_greedy_search | 61.43 | 89.38 |
| ctc_prefix_search | 61.37 | 89.53|
| attention_rescoring | 60.61 | 89.60 | | 573d0a65fa405d0a581842859c6ffafa |
wiem87/swin-tiny-patch4-window7-224-finetuned-eurosat | wiem87 | swin | 9 | 1 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,492 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0454
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
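A minimal inference sketch (the label names depend on the `imagefolder` dataset used for fine-tuning and are read from the checkpoint config; `image.png` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="wiem87/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
for pred in classifier("image.png"):
    print(pred["label"], round(pred["score"], 4))
```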
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2137 | 1.0 | 190 | 0.0981 | 0.9681 |
| 0.1487 | 2.0 | 380 | 0.0517 | 0.9830 |
| 0.1398 | 3.0 | 570 | 0.0454 | 0.9826 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 26e51391e1991330390c8754a74e18ed |
Helsinki-NLP/opus-mt-yap-fr | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-yap-fr
* source languages: yap
* target languages: fr
* OPUS readme: [yap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.fr | 22.2 | 0.381 |
| 8a3136e7ac181328f771c9cb59533aa6 |
dominguesm/pt_core_news_trf | dominguesm | null | 27 | 16 | spacy | 1 | token-classification | false | false | false | cc-by-sa-4.0 | ['pt'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 43,344 | false |
Portuguese transformer pipeline ([neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)). Components: transformer, morphologizer, parser, ner, attribute_ruler, lemmatizer (trainable_lemmatizer).
| Feature | Description |
| --- | --- |
| **Name** | `pt_core_news_trf` |
| **Version** | `3.4.0` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Components** | `transformer`, `ner`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Portuguese Bosque v2.8](https://github.com/UniversalDependencies/UD_Portuguese-Bosque) (Rademaker, Alexandre; Freitas, Cláudia; de Souza, Elvis; Silveira, Aline; Cavalcanti, Tatiana; Evelyn, Wograine; Rocha, Luisa; Soares-Bastos, Isabela; Bick, Eckhard; Chalub, Fabricio; Paulino-Passos, Guilherme; Real, Livy; de Paiva, Valeria; Zeman, Daniel; Popel, Martin; Mareček, David; Silveira, Natalia; Martins, André)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Maicon Domingues](http://nlp.rocks) |
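A minimal usage sketch, assuming the pipeline has been installed so that `spacy.load` can find it (e.g. by installing the packaged wheel from this repository):
```python
import spacy

nlp = spacy.load("pt_core_news_trf")
doc = nlp("O Brasil é o maior país da América do Sul.")

# named entities
for ent in doc.ents:
    print(ent.text, ent.label_)

# POS tags, lemmas and dependencies
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.dep_)
```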
### Label Scheme
<details>
<summary>View label scheme (742 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
| **`tagger`** | `ADJ`, `ADJ_ADJ`, `ADJ_NOUN`, `ADP`, `ADP_ADV`, `ADP_DET`, `ADP_NUM`, `ADP_PRON`, `ADP_PROPN`, `ADV`, `ADV_PRON`, `AUX`, `AUX_PRON`, `CCONJ`, `CCONJ_PRON`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PART_NOUN`, `PART_NUM`, `PRON`, `PROPN`, `PROPN_PROPN`, `PUNCT`, `SCONJ`, `SCONJ_DET`, `SCONJ_PRON`, `SYM`, `VERB`, `VERB_PRON`, `VERB_PRON_PRON`, `VERB_SCONJ`, `X` |
| **`morphologizer`** | `Gender=Masc\|Number=Sing\|POS=PROPN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ\|Typo=Yes`, `POS=PUNCT`, `POS=VERB\|VerbForm=Ger`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `POS=SYM`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=CCONJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=SCONJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|VerbForm=Inf`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=AUX\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=CCONJ\|POS=ADV`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=AUX\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=VERB\|VerbForm=Part`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `ExtPos=NOUN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADP\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, 
`Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `ExtPos=CCONJ\|POS=CCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=AUX\|VerbForm=Part`, `Number=Plur\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=INTJ\|POS=AUX`, `Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|Number=Sing\|POS=NUM`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Case=Acc\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Dem`, `ExtPos=SCONJ\|POS=ADV`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=PROPN\|Number=Sing\|POS=PROPN`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Number=Sing\|POS=AUX\|Person=3\|VerbForm=Inf`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Art`, 
`ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|POS=PRON\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Definite=Def\|ExtPos=ADV\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `ExtPos=CCONJ\|POS=VERB\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=ADV\|POS=ADP`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `ExtPos=SCONJ\|POS=SCONJ`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Inf`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `ExtPos=PROPN\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `ExtPos=SCONJ\|POS=ADP`, 
`Definite=Ind\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN\|PronType=Art`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `ExtPos=NOUN\|POS=ADP`, `Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `ExtPos=ADV\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `ExtPos=NOUN\|POS=X`, `POS=X`, `ExtPos=NOUN\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Dem`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET`, `ExtPos=ADP\|POS=ADP`, `POS=NOUN`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `ExtPos=AUX\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Typo=Yes\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Ind`, `POS=ADV\|Typo=Yes`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `POS=PRON\|PronType=Rel`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Definite=Def\|ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|Voice=Pass`, `Number=Sing\|POS=ADJ`, `ExtPos=ADV\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET`, `Case=Acc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `POS=INTJ`, `Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=CCONJ\|POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=X`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `POS=SCONJ\|VerbForm=Ger`, `Abbr=Yes\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=NUM`, `Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=PROPN`, `ExtPos=AUX\|POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `ExtPos=ADJ\|POS=X`, `Gender=Fem\|Number=Sing\|POS=X`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Number=Sing\|POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art\|Typo=Yes`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|POS=PRON\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art\|Typo=Yes`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Art`, `Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art\|Typo=Yes`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `ExtPos=PROPN\|Gender=Masc\|POS=PROPN`, `Gender=Masc\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET`, `ExtPos=ADJ\|POS=ADP`, `ExtPos=ADJ\|POS=ADV`, `Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Typo=Yes`, `ExtPos=ADP\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=NOUN\|POS=ADV`, `Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `ExtPos=NOUN\|Gender=Fem\|Number=Plur\|POS=NOUN`, `ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=1\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|POS=X`, `Gender=Masc\|Number=Sing\|POS=X`, `POS=NUM`, `ExtPos=NOUN\|NumType=Ord\|POS=NUM`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `ExtPos=AUX\|POS=VERB\|VerbForm=Ger`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, 
`Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `ExtPos=NOUN\|NumType=Card\|POS=PART`, `ExtPos=NUM\|Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `ExtPos=NOUN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=CCONJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Gender=Fem\|Number=Plur\|POS=NOUN`, `NumType=Card\|POS=ADP`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|ExtPos=ADV\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `NumType=Ord\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=SCONJ\|Person=3\|PronType=Prs`, `ExtPos=PROPN\|POS=X`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `ExtPos=NOUN\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Tot`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|POS=VERB\|VerbForm=Part`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADP\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=ADJ`, `Definite=Def\|POS=ADP\|PronType=Art`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|POS=SCONJ\|PronType=Art`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|POS=PRON\|PronType=Ind`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=NOUN\|Voice=Pass`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `ExtPos=AUX\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PART`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=ADV`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Card\|POS=DET`, `Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `POS=PRON\|PronType=Prs`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=PROPN\|PronType=Art`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Prs`, `POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|Typo=Yes`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=SCONJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `ExtPos=ADP\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, 
`Definite=Def\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADP`, `ExtPos=AUX\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=DET`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Ind`, `Definite=Def\|ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `ExtPos=PROPN\|POS=ADV`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=PROPN\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=2\|PronType=Prs\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `NumType=Card\|POS=DET\|PronType=Art`, `Gender=Fem,Masc\|Number=Sing\|POS=PROPN`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `POS=PRON\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `ExtPos=AUX\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `ExtPos=ADJ\|Gender=Fem\|Number=Sing\|POS=X`, `Gender=Fem\|Number=Plur\|POS=NUM`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=SCONJ\|PronType=Art`, `Case=Dat\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Sets\|Number=Sing\|POS=NUM`, `POS=ADV\|PronType=Rel`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Foreign=Yes\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|POS=AUX\|PronType=Prs\|VerbForm=Inf`, `ExtPos=INTJ\|POS=ADV\|Polarity=Neg`, `POS=AUX`, `Gender=Masc\|Number=Plur\|POS=NUM`, `Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Int`, `Abbr=Yes\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Ind`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, 
`ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art\|Typo=Yes`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `ExtPos=NUM\|NumType=Mult\|POS=NUM`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `NumType=Card\|Number=Plur\|POS=NUM`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `ExtPos=NUM\|NumType=Card\|POS=NUM`, `POS=VERB`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Rel`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf`, `ExtPos=NOUN\|Number=Sing\|POS=PROPN`, `ExtPos=ADP\|POS=DET`, `ExtPos=ADP\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Abbr=Yes\|ExtPos=PROPN\|Number=Sing\|POS=PROPN`, `ExtPos=AUX\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `ExtPos=PROPN\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=DET\|PronType=Tot`, `NumType=Range\|POS=NUM`, `Case=Dat\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Definite=Def\|ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=X`, `ExtPos=NOUN\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, 
`Case=Acc\|Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Dem`, `ExtPos=SCONJ\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `NumType=Frac\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Ind`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Rel`, `ExtPos=NOUN\|NumType=Card\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind\|Typo=Yes`, `Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.84 |
| `ENTS_P` | 92.75 |
| `ENTS_R` | 92.94 |
| `TAG_ACC` | 97.82 |
| `POS_ACC` | 97.81 |
| `MORPH_ACC` | 96.11 |
| `LEMMA_ACC` | 97.35 |
| `DEP_UAS` | 92.84 |
| `DEP_LAS` | 89.66 |
| `SENTS_P` | 93.49 |
| `SENTS_R` | 94.28 |
| `SENTS_F` | 93.88 | | 60d1f8cc178c5c1bd8a9459591a5da32 |
Fhrozen/test_an4 | Fhrozen | null | 31 | 1 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['an4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 7,699 | false |
## ESPnet2 ASR model
### `Fhrozen/test_an4`
This model was trained by Fhrozen using an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b8df4c928e132acff78d196988bdb68a66987952
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Fhrozen/test_an4
```
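For a quick check from Python rather than the recipe, the model can also be loaded through the `espnet_model_zoo` integration. This is a minimal sketch, not part of the original recipe; it assumes `espnet_model_zoo` and `soundfile` are installed and that `sample.wav` (a placeholder path) is a 16 kHz mono recording, matching the an4 data preparation.

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Download and build the full inference pipeline from this repo.
speech2text = Speech2Text.from_pretrained("Fhrozen/test_an4")

speech, rate = sf.read("sample.wav")  # placeholder path, 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]  # best hypothesis
print(text)
```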
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 20 00:00:46 JST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `b8df4c928e132acff78d196988bdb68a66987952`
- Commit date: `Tue Oct 19 07:48:11 2021 -0400`
## asr_train_raw_en_bpe30
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|773|4.0|22.3|73.7|0.1|96.1|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|591|2.7|21.8|75.5|0.0|97.3|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2565|17.2|16.4|66.4|1.0|83.8|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|1915|15.5|16.4|68.1|0.9|85.5|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/test|130|2695|21.1|15.6|63.3|0.9|79.9|100.0|
|inference_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.best/train_dev|100|2015|19.4|15.6|65.0|0.9|81.5|100.0|
## ASR config
<details><summary>expand</summary>
```
config: null
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_raw_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30/train/speech_shape
- exp/asr_stats_raw_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30/valid/speech_shape
- exp/asr_stats_raw_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf: {}
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf: {}
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_train_lm_en_bpe30
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 256
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_en_bpe30/train/text_shape.bpe
valid_shape_file:
- exp/lm_stats_en_bpe30/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
lm: seq_rnn
lm_conf:
unit: 650
nlayers: 2
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
| 6fa244cd2fe0524b9b46c9c149349166 |
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 498 | false | # exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
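As a usage sketch (not part of the original card), inference can be run directly with the HuggingSound API mentioned above; the audio paths below are placeholders and must point to 16 kHz recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-8_female-2_s859")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```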
| 4c59ea635d5a80ba0a6f5f36b5f7e61e |
Sercan/whisper-small-tr | Sercan | whisper | 28 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper', 'generated_from_trainer'] | true | true | true | 1,642 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Turkish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2799
- Wer: 17.2753
- Cer: 4.5335
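As a minimal inference sketch (not part of the generated card), the checkpoint can be loaded with the 🤗 Transformers ASR pipeline; `audio.wav` is a placeholder path and the input should be sampled at 16 kHz.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sercan/whisper-small-tr")
print(asr("audio.wav")["text"])  # placeholder path, 16 kHz audio
```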
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.1044 | 1.07 | 1000 | 0.2777 | 18.4046 | 4.8810 |
| 0.0469 | 3.02 | 2000 | 0.2799 | 17.2753 | 4.5335 |
| 0.014 | 4.09 | 3000 | 0.3202 | 18.0800 | 4.9039 |
| 0.0039 | 6.04 | 4000 | 0.3326 | 18.2964 | 5.0192 |
| 0.0022 | 7.11 | 5000 | 0.3453 | 18.0307 | 4.9470 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 7830a334a4304178ad0a98dfacd10674 |
gossminn/predict-perception-bertino-focus-object | gossminn | distilbert | 12 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,970 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-focus-object
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- R2: 0.5460
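Since the card reports R2, the head is a single-output regression. The following is a minimal sketch, assuming a one-logit `AutoModelForSequenceClassification` head (which the R2 metric implies); the example sentence is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gossminn/predict-perception-bertino-focus-object"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Un esempio di frase da valutare.", return_tensors="pt")  # placeholder sentence
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression score
print(score)
```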
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4798 | 1.0 | 14 | 0.4519 | 0.2581 |
| 0.2481 | 2.0 | 28 | 0.3042 | 0.5007 |
| 0.12 | 3.0 | 42 | 0.3746 | 0.3851 |
| 0.0969 | 4.0 | 56 | 0.3186 | 0.4770 |
| 0.0907 | 5.0 | 70 | 0.3727 | 0.3882 |
| 0.0673 | 6.0 | 84 | 0.2847 | 0.5327 |
| 0.0457 | 7.0 | 98 | 0.3141 | 0.4844 |
| 0.0431 | 8.0 | 112 | 0.3369 | 0.4470 |
| 0.028 | 9.0 | 126 | 0.3039 | 0.5012 |
| 0.0244 | 10.0 | 140 | 0.2964 | 0.5135 |
| 0.0201 | 11.0 | 154 | 0.3072 | 0.4958 |
| 0.0153 | 12.0 | 168 | 0.3049 | 0.4995 |
| 0.0155 | 13.0 | 182 | 0.2924 | 0.5201 |
| 0.015 | 14.0 | 196 | 0.2585 | 0.5757 |
| 0.0181 | 15.0 | 210 | 0.3258 | 0.4652 |
| 0.0136 | 16.0 | 224 | 0.3142 | 0.4842 |
| 0.0105 | 17.0 | 238 | 0.2536 | 0.5837 |
| 0.0104 | 18.0 | 252 | 0.2407 | 0.6050 |
| 0.0107 | 19.0 | 266 | 0.2727 | 0.5524 |
| 0.0084 | 20.0 | 280 | 0.3117 | 0.4883 |
| 0.0102 | 21.0 | 294 | 0.2999 | 0.5078 |
| 0.0074 | 22.0 | 308 | 0.3018 | 0.5047 |
| 0.0068 | 23.0 | 322 | 0.2826 | 0.5361 |
| 0.0054 | 24.0 | 336 | 0.2804 | 0.5398 |
| 0.0044 | 25.0 | 350 | 0.2912 | 0.5220 |
| 0.0048 | 26.0 | 364 | 0.2813 | 0.5382 |
| 0.005 | 27.0 | 378 | 0.2933 | 0.5186 |
| 0.0046 | 28.0 | 392 | 0.2820 | 0.5371 |
| 0.004 | 29.0 | 406 | 0.2717 | 0.5541 |
| 0.0054 | 30.0 | 420 | 0.2717 | 0.5540 |
| 0.0042 | 31.0 | 434 | 0.2699 | 0.5570 |
| 0.0033 | 32.0 | 448 | 0.2630 | 0.5684 |
| 0.0038 | 33.0 | 462 | 0.2578 | 0.5767 |
| 0.0032 | 34.0 | 476 | 0.2687 | 0.5589 |
| 0.004 | 35.0 | 490 | 0.2737 | 0.5507 |
| 0.0031 | 36.0 | 504 | 0.2753 | 0.5481 |
| 0.0037 | 37.0 | 518 | 0.2819 | 0.5373 |
| 0.0034 | 38.0 | 532 | 0.2759 | 0.5471 |
| 0.0034 | 39.0 | 546 | 0.2835 | 0.5347 |
| 0.0029 | 40.0 | 560 | 0.2814 | 0.5381 |
| 0.0033 | 41.0 | 574 | 0.2801 | 0.5403 |
| 0.0025 | 42.0 | 588 | 0.2759 | 0.5472 |
| 0.0029 | 43.0 | 602 | 0.2790 | 0.5421 |
| 0.0028 | 44.0 | 616 | 0.2801 | 0.5401 |
| 0.003 | 45.0 | 630 | 0.2772 | 0.5451 |
| 0.0028 | 46.0 | 644 | 0.2764 | 0.5463 |
| 0.0026 | 47.0 | 658 | 0.2766 | 0.5460 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| 01f35b10b8456e0c84c21e4d2755e9a0 |
sd-concepts-library/kaleido | sd-concepts-library | null | 10 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,100 | false | ### kaleido on Stable Diffusion
This is the `<kaleido>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
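Besides the notebooks above, the embedding can be loaded by hand in `diffusers`. The sketch below is an illustration, not part of the original card: the `learned_embeds.bin` filename follows the usual sd-concepts-library layout, and the Stable Diffusion base checkpoint is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Assumption: the repo stores the embedding as learned_embeds.bin (library convention).
embeds_path = hf_hub_download("sd-concepts-library/kaleido", "learned_embeds.bin")
loaded = torch.load(embeds_path, map_location="cpu")
token, embedding = next(iter(loaded.items()))  # e.g. "<kaleido>"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # assumed base model
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding.to(pipe.text_encoder.dtype)

image = pipe("a painting in the style of <kaleido>").images[0]
image.save("kaleido.png")
```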
Here is the new concept you will be able to use as a `style`:
![<kaleido> 0](https://huggingface.co/sd-concepts-library/kaleido/resolve/main/concept_images/3.jpeg)
![<kaleido> 1](https://huggingface.co/sd-concepts-library/kaleido/resolve/main/concept_images/0.jpeg)
![<kaleido> 2](https://huggingface.co/sd-concepts-library/kaleido/resolve/main/concept_images/2.jpeg)
![<kaleido> 3](https://huggingface.co/sd-concepts-library/kaleido/resolve/main/concept_images/1.jpeg)
![<kaleido> 4](https://huggingface.co/sd-concepts-library/kaleido/resolve/main/concept_images/4.jpeg)
| 4f37b5f7a89dc83181324353952275ad |
jonatasgrosman/exp_w2v2t_pt_vp-100k_s69 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pt'] | false | true | true | 474 | false | # exp_w2v2t_pt_vp-100k_s69
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
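A minimal sketch (not from the original card) using the 🤗 Transformers ASR pipeline as an alternative to HuggingSound; `gravacao.wav` is a placeholder path and must be a 16 kHz recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_pt_vp-100k_s69")
print(asr("gravacao.wav")["text"])  # placeholder path, 16 kHz audio
```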
| 08f9f0d9d54585b72584d1536e9aa872 |
juancavallotti/t5-base-gec | juancavallotti | t5 | 52 | 2 | transformers | 2 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 889 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-gec
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
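As a minimal usage sketch (not part of the generated card): the model is a seq2seq T5, so a correction can be produced with `generate`. Feeding the raw sentence without a task prefix is an assumption, since the card does not document a prompt format.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "juancavallotti/t5-base-gec"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

# Assumption: the raw sentence is passed as-is; no prefix is documented on the card.
inputs = tokenizer("She no went to the market.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```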
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| bca2ae06405e129a9134222dadaaef78 |
Helsinki-NLP/opus-mt-fi-hil | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-fi-hil
* source languages: fi
* target languages: hil
* OPUS readme: [fi-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-hil/opus-2020-01-24.eval.txt)
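A minimal translation sketch (not part of the original OPUS readme), using the standard MarianMT interface from 🤗 Transformers:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-fi-hil"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)  # placeholder sentence
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```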
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.hil | 38.7 | 0.610 |
| 0e1a3e6abfa9684a9a6b438b97c3811b |
misterbrainley/ddpm-butterflies-128 | misterbrainley | null | 13 | 3 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,236 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
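# A minimal sketch (not from the original card), assuming the standard DDPMPipeline API:
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("misterbrainley/ddpm-butterflies-128")
image = pipeline().images[0]  # sample one 128x128 butterfly image
image.save("butterfly.png")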
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
### Training results
📈 [TensorBoard logs](https://huggingface.co/misterbrainley/ddpm-butterflies-128/tensorboard?#scalars)
| b8bbc6564e441ece0aaa4ae229682375 |
cmarkea/distilcamembert-base-nli | cmarkea | camembert | 9 | 1,125 | transformers | 7 | zero-shot-classification | true | true | false | mit | ['fr'] | ['flue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['zero-shot-classification', 'sentence-similarity', 'nli'] | false | true | true | 7,982 | false |
DistilCamemBERT-NLI
===================
We present DistilCamemBERT-NLI, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task on the French language, also known as recognizing textual entailment (RTE). This model is built on the XNLI dataset, which asks whether a premise entails, contradicts, or neither entails nor contradicts a hypothesis.

This modelization is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), which is based on the [CamemBERT](https://huggingface.co/camembert-base) model. The problem with CamemBERT-based modelizations arises when scaling, for example in the production phase. Indeed, inference cost can be a technological issue, especially in the context of cross-encoding tasks like this one. To counteract this effect, we propose this modelization, which divides the inference time by 2 with the same power consumption, thanks to DistilCamemBERT.
Dataset
-------
The dataset XNLI from [FLUE](https://huggingface.co/datasets/flue) comprises 392,702 premises with their hypotheses for training and 5,010 pairs for testing. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?), which is a classification task (given two sentences, predict one of three labels). Sentence A is called the *premise* and sentence B the *hypothesis*; the goal of the modelization is then to estimate the following:
$$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$
Evaluation results
------------------
| **class** | **precision (%)** | **f1-score (%)** | **support** |
| :----------------: | :---------------: | :--------------: | :---------: |
| **global** | 77.70 | 77.45 | 5,010 |
| **contradiction** | 78.00 | 79.54 | 1,670 |
| **entailment** | 82.90 | 78.87 | 1,670 |
| **neutral** | 72.18 | 74.04 | 1,670 |
Benchmark
---------
We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model to two other modelizations working on the French language. The first one, [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli), is based on the well-known [CamemBERT](https://huggingface.co/camembert-base), the French RoBERTa model; the second one, [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli), is based on [mDeBERTav3](https://huggingface.co/microsoft/mdeberta-v3-base), a multilingual model. To compare the performances, we use the accuracy and [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient) metrics. We used an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** to measure the mean inference time.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** |
Zero-shot classification
------------------------
The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by:
$$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$
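Concretely, this is a softmax over the entailment scores of the candidate labels. A minimal numeric sketch (the logits below are invented for illustration):

```python
import math

# Hypothetical entailment logits, one per candidate label.
entailment_scores = {"cinéma": 2.1, "politique": -0.3, "sport": 0.4}

z = sum(math.exp(s) for s in entailment_scores.values())
probs = {label: math.exp(s) / z for label, s in entailment_scores.items()}
print(probs)  # normalized probabilities over the candidate labels
```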
For this part, we use two datasets. The first, [allocine](https://huggingface.co/datasets/allocine), is used to train sentiment-analysis models; it comprises two classes, "positif" and "négatif", for movie reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels.
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 |
The second, [mlsum](https://huggingface.co/datasets/mlsum), is used to train summarization models. To this end, we aggregate sub-topics and select a few of them, and we use the article summaries to predict their topics. In this case, the hypothesis template is "C'est un article traitant de {}." and the candidate labels are "économie", "politique", "sport" and "science".
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** |
| :--------------: | :-----------: | :--------------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 |
How to use DistilCamemBERT-NLI
------------------------------
```python
from transformers import pipeline
classifier = pipeline(
task='zero-shot-classification',
model="cmarkea/distilcamembert-base-nli",
tokenizer="cmarkea/distilcamembert-base-nli"
)
result = classifier(
sequences="Le style très cinéphile de Quentin Tarantino "
"se reconnaît entre autres par sa narration postmoderne "
"et non linéaire, ses dialogues travaillés souvent "
"émaillés de références à la culture populaire, et ses "
"scènes hautement esthétiques mais d'une violence "
"extrême, inspirées de films d'exploitation, d'arts "
"martiaux ou de western spaghetti.",
candidate_labels="cinéma, technologie, littérature, politique",
hypothesis_template="Ce texte parle de {}."
)
result
{"labels": ["cinéma",
"littérature",
"technologie",
"politique"],
"scores": [0.7164115309715271,
0.12878799438476562,
0.1092301607131958,
0.0455702543258667]}
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-nli"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
HUB_MODEL, file_name="model_quantized.onnx"
)
```
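Once exported, the quantized model can be passed to the same `pipeline` call shown above, replacing the vanilla ONNX model with `quantized_model`.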
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
``` | 1eb211c15a29465f564fbdf4b4e27eac |
speechbrain/asr-transformer-transformerlm-librispeech | speechbrain | null | 9 | 767 | speechbrain | 4 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['librispeech'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'CTC', 'Attention', 'Transformer', 'pytorch', 'speechbrain', 'hf-asr-leaderboard'] | true | true | true | 4,029 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Transformer for LibriSpeech (with Transformer LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test clean WER | Test other WER | GPUs |
|:-------------:|:--------------:|:--------------:|:--------:|
| 24-03-22 | 2.27 | 5.53 | 4xV100 32GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
- Neural language model (Transformer LM) trained on the full 10M words dataset.
- Acoustic model made of a transformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in English)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech")
asr_model.transcribe_file("speechbrain/asr-transformer-transformerlm-librispeech/example.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
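For example:

```python
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-transformer-transformerlm-librispeech",
    savedir="pretrained_models/asr-transformer-transformerlm-librispeech",
    run_opts={"device": "cuda"},  # run the model on GPU
)
```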
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
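As a hedged sketch (see the notebook above for the full recipe), batched transcription boils down to padding the waveforms and calling `transcribe_batch` with their relative lengths:

```python
import torch
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-transformer-transformerlm-librispeech",
    savedir="pretrained_models/asr-transformer-transformerlm-librispeech",
)

# Single-file "batch" for illustration; real batches are padded to the
# longest waveform and each entry of rel_lens is length / max_length.
sig = asr_model.load_audio(
    "speechbrain/asr-transformer-transformerlm-librispeech/example.wav"
)
batch = sig.unsqueeze(0)        # shape [batch, time]
rel_lens = torch.tensor([1.0])  # relative lengths
words, _ = asr_model.transcribe_batch(batch, rel_lens)
print(words)
```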
### Training
The model was trained with SpeechBrain (Commit hash: 'f73fcc35').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LibriSpeech/ASR/transformer
python train.py hparams/transformer.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1Nv1OLbHLqVeShyZ8LY9gjhYGE1DBFzFf?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
| f80b26f950cadc6cd5ff925d1ff4cd34 |
bobber/terrier-dog | bobber | null | 17 | 24 | diffusers | 0 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | true | true | 2,028 | false |
# DreamBooth model for the terrier concept trained by bobber on the bobber/Terrier-images dataset.
This is a Stable Diffusion model fine-tuned on the terrier concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of terrier dog**
This model was created as part of the DreamBooth Hackathon 🔥. My daughter helped me select 18 images of terriers from petfind. Hope you enjoy it. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Examples
<table>
<tr>
<td>Generated Image of "a photo of terrier dog <br>in space suit walking in the mars"</td>
<td>Generated Image of "a photo of terrier dog <br>in the background of chinese new year"</td>
<td>Generated Image of "a photo of terrier dog <br>swimming in the pool"</td>
</tr>
<tr>
<td align="center"><img src="https://i.imgur.com/YW483rm.jpg" style="height:200px"> </td>
<td align="center"><img src="https://i.imgur.com/4m5Fv86.jpg" style="height:200px"> </td>
<td align="center"><img src="https://i.imgur.com/ZCdapRU.jpg" style="height:200px"> </td>
</tr>
<tr>
<td>Generated Image of "a photo of terrier dog <br>walking in Paris by Van Gogh"</td>
<td>Generated Image of "a photo of terrier dog <br>with The Great Wave by Katsushika Hokusai"</td>
<td>Generated Image of "a photo of terrier dog <br>by Leonardo da Vinci"</td>
</tr>
<tr>
<td align="center"><img src="https://i.imgur.com/uzYLctu.jpg" style="height:200px"> </td>
<td align="center"><img src="https://i.imgur.com/9wxxyD4.jpg" style="height:200px"> </td>
<td align="center"><img src="https://i.imgur.com/xufDxxD.jpg" style="height:200px"> </td>
</tr>
</table>
## Description
This is a Stable Diffusion model fine-tuned on 18 Terrier `dog` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('bobber/terrier-dog')
image = pipeline().images[0]
image
```
| d386bd115ec46b4400991998874ee6ce |
izumi-lab/electra-small-paper-japanese-fin-discriminator | izumi-lab | electra | 7 | 12 | transformers | 0 | null | true | false | false | cc-by-sa-4.0 | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['finance'] | false | true | true | 2,101 | false |
# ELECTRA small Japanese finance discriminator
This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.
The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).
## Model architecture
The model architecture is the same as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 12 layers, 256 dimensions of hidden states, and 4 attention heads.
## Training Data
The models are trained on the Japanese version of Wikipedia together with a Japanese financial corpus.
The Wikipedia corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021.
The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences.
The financial corpus consists of 2 corpora:
- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020
The financial corpus file is 5.2GB, consisting of approximately 27M sentences.
## Tokenization
The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32768.
## Training
The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555); 128 tokens per instance, 128 instances per batch, and 1M training steps.
## Citation
```
@article{Suzuki-etal-2023-ipm,
  title = {Constructing and analyzing domain-specific language model for financial text mining},
author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
journal = {Information Processing & Management},
volume = {60},
number = {2},
pages = {103194},
year = {2023},
doi = {10.1016/j.ipm.2022.103194}
}
```
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP21K12010.
| b93352fe1b1880cc1187ad75cbc1b5c6 |
DOOGLAK/Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['tagged_one50v7_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,539 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v7_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6441
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 13 | 0.7609 | 0.0 | 0.0 | 0.0 | 0.7783 |
| No log | 2.0 | 26 | 0.6742 | 0.0 | 0.0 | 0.0 | 0.7783 |
| No log | 3.0 | 39 | 0.6441 | 0.0 | 0.0 | 0.0 | 0.7785 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| ffcac8683c3ae04284552f2844322b5f |
nvidia/stt_fr_conformer_ctc_large | nvidia | null | 3 | 35 | nemo | 4 | automatic-speech-recognition | true | false | false | cc-by-4.0 | ['fr'] | ['multilingual_librispeech', 'mozilla-foundation/common_voice_7_0', 'VoxPopuli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard', 'Riva'] | true | true | true | 6,361 | false |
# NVIDIA Conformer-CTC Large (fr)
<style>
img {
display: inline;
}
</style>
| [![Model architecture](https://img.shields.io/badge/Model_Arch-Conformer--CTC-lightgrey#model-badge)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-120M-lightgrey#model-badge)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-fr-lightgrey#model-badge)](#datasets)
| [![Riva Compatible](https://img.shields.io/badge/NVIDIA%20Riva-compatible-brightgreen#model-badge)](#deployment-with-nvidia-riva) |
This model was trained on a composite dataset comprising over 1,500 hours of French speech.
It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_fr_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_fr_conformer_ctc_large" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz (16,000 Hz) mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-CTC model is a non-autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses CTC loss/decoding instead of Transducer. You may find more info on the detail of this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
The checkpoint of the language model used for rescoring can be found [here]( https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_fr_conformer_ctc_large). You may find more info on how to train and use language models for ASR models here: [ASR Language Modeling](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html)
## Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising over 1,500 hours of French speech:
- MozillaCommonVoice 7.0 - 356 hours
- Multilingual LibriSpeech - 1036 hours
- VoxPopuli - 182 hours
Both models use the same dataset, except for a preprocessing step that strips hyphens from the data for the secondary model's training.
## Performance
The performance of automatic speech recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.
The latest model obtains the following WER scores with greedy decoding on the evaluation datasets:
- 8.35 % on MCV7.0 dev
- 9.63 % on MCV7.0 test
- 5.88 % on MLS dev
- 4.91 % on MLS test
With beam search (beam width 128) and a 4-gram KenLM model:
- 7.95 % on MCV7.0 dev
- 9.16 % on MCV7.0 test
- 5.57 % on MLS dev
- 4.66 % on MLS test
Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters, with all punctuation removed apart from hyphens and apostrophes.
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
Further, since portions of the training set contain text from both before and after the 1990 orthographic reform, the regularity of punctuation may vary between the two styles.
For downstream tasks requiring more consistency, fine-tuning or downstream processing may be required. If exact orthography is not necessary, then using the secondary model is advised.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
| b26f45ff093948e65542c79aadd8cbff |
tanviraumi/bert-base-uncased-issues-128 | tanviraumi | bert | 10 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,932 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3389 | 1.0 | 73 | 1.7400 |
| 1.8014 | 2.0 | 146 | 1.4690 |
| 1.634 | 3.0 | 219 | 1.4783 |
| 1.5461 | 4.0 | 292 | 1.3912 |
| 1.4706 | 5.0 | 365 | 1.3109 |
| 1.4161 | 6.0 | 438 | 1.3405 |
| 1.3664 | 7.0 | 511 | 1.3459 |
| 1.332 | 8.0 | 584 | 1.2745 |
| 1.3029 | 9.0 | 657 | 1.2633 |
| 1.2871 | 10.0 | 730 | 1.2336 |
| 1.2807 | 11.0 | 803 | 1.2966 |
| 1.2569 | 12.0 | 876 | 1.1508 |
| 1.2392 | 13.0 | 949 | 1.2530 |
| 1.237 | 14.0 | 1022 | 1.2485 |
| 1.2169 | 15.0 | 1095 | 1.2592 |
| 1.2272 | 16.0 | 1168 | 1.2337 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.12.0.dev20220513+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 5f641c56cbb10eac37364ff54d0ec6e1 |
muhtasham/small-mlm-squad | muhtasham | bert | 12 | 1 | transformers | 1 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,350 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-squad-plain_text
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9733 | 0.4 | 500 | 2.9009 |
| 2.6978 | 0.8 | 1000 | 2.9560 |
| 2.5783 | 1.2 | 1500 | 2.9081 |
| 2.4382 | 1.6 | 2000 | 3.0085 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| c60604a61b32b6b6b65e8efb7f13476e |
kadirnar/SORT | kadirnar | null | 2 | 0 | null | 0 | object-detection | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'computer-vision', 'sort', 'tracker', 'ocsort'] | false | true | true | 1,030 | false |
### Model Description
[Sort](https://arxiv.org/abs/1602.00763): A simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences.

<img src="https://raw.githubusercontent.com/noahcao/OC_SORT/master/assets/teaser.png" width="600"/>
### Installation
```
pip install sort-track
```
### Tracker
```python
from sort.tracker import SortTracker

# `args` holds the tracker settings (typically max_age, min_hits and an
# IoU threshold for the association step).
tracker = SortTracker(args)
for image in images:
    # `dets`: per-frame detections from your detector, usually an array
    # of [x1, y1, x2, y2, score] rows.
    dets = detector(image)
    # Kalman-filter prediction + IoU matching; returns the tracked boxes
    # with their assigned track IDs.
    online_targets = tracker.update(dets)
```
### BibTeX Entry and Citation Info
```
@inproceedings{Bewley2016_sort,
author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben},
booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
title={Simple online and realtime tracking},
year={2016},
pages={3464-3468},
keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking},
doi={10.1109/ICIP.2016.7533003}
}
``` | 98bbb20e4c5651a0b7a23ff32d01fae4 |
tftransformers/albert-base-v1 | tftransformers | null | 6 | 3 | null | 0 | null | false | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 6,683 | false |
# ALBERT Base v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to extract features in tf_transformers:
```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')
model = AlbertModel.from_pretrained("albert-base-v1")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
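For masked language modeling, one option is a hedged sketch using the `transformers` fill-mask pipeline with the canonical `albert-base-v1` checkpoint (the same architecture and weights this repository mirrors in tf_transformers format):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert-base-v1")
print(unmasker("Hello I'm a [MASK] model."))
```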
Like the original ALBERT, this model can produce biased predictions, and this bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
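As an illustrative sketch (not the original training code), the 80/10/10 scheme above can be written as:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply BERT/ALBERT-style masking to a list of string tokens."""
    output, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)  # the model must recover this token
            r = random.random()
            if r < 0.8:
                output.append("[MASK]")              # 80%: [MASK]
            elif r < 0.9:
                output.append(random.choice(vocab))  # 10%: random token
            else:
                output.append(tok)                   # 10%: unchanged
        else:
            labels.append(None)  # not scored by the MLM loss
            output.append(tok)
    return output, labels

masked, targets = mask_tokens("the quick brown fox".split(), vocab=["dog", "cat"])
print(masked, targets)
```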
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-base-v1">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> | 075a0a207182f0cbf24f6c04da030697 |
espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw | espnet | null | 15 | 66 | espnet | 0 | audio-to-audio | false | false | false | cc-by-4.0 | null | ['chime4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'audio-to-audio'] | false | true | true | 5,642 | false |
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
```
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_beamformer_mvdr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_beamformer_mvdr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35841
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
init: xavier_uniform
model_conf:
loss_type: mask_mse
mask_type: PSM^2
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
hop_length: 128
separator: wpe_beamformer
separator_conf:
num_spk: 1
loss_type: mask_mse
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 300
wprojs: 320
wdropout_rate: 0.0
taps: 5
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 3
use_noise_mask: true
beamformer_type: mvdr_souden
bdropout_rate: 0.0
decoder: stft
decoder_conf:
n_fft: 512
hop_length: 128
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
| 81b12e20a472652e7535d1c7ab207231 |
EnsarEmirali/distilbert-base-uncased-finetuned-emotion | EnsarEmirali | distilbert | 12 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9265
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
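As a hedged sketch (the author's exact script is not shown in this card), these settings map onto `transformers.TrainingArguments` roughly as follows:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed name
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```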
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 |
| 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| bff1edd388a301def284f8d29f09be75 |
pulkitkumar13/dark-bert-finetuned-ner1 | pulkitkumar13 | bert | 10 | 7 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dark-bert-finetuned-ner1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0833
- Precision: 0.9337
- Recall: 0.9487
- F1: 0.9411
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
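A minimal inference sketch (the input sentence is hypothetical; the entity labels follow the CoNLL-2003 scheme of the base dataset):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pulkitkumar13/dark-bert-finetuned-ner1",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```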
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0358 | 1.0 | 1756 | 0.0780 | 0.9283 | 0.9409 | 0.9346 | 0.9844 |
| 0.0172 | 2.0 | 3512 | 0.0708 | 0.9375 | 0.9488 | 0.9431 | 0.9860 |
| 0.0056 | 3.0 | 5268 | 0.0833 | 0.9337 | 0.9487 | 0.9411 | 0.9861 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
| c88e1abbead437968f5bfdd017830df9 |
liyijing024/swin-base-patch4-window7-224-in22k-finetuned | liyijing024 | swin | 11 | 11 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,508 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-finetuned
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
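A minimal inference sketch (the image path is a placeholder; the label set comes from the fine-tuning image folder):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="liyijing024/swin-base-patch4-window7-224-in22k-finetuned",
)
print(clf("example.jpg"))  # path or URL to an input image
```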
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0253 | 1.0 | 889 | 0.0060 | 0.9980 |
| 0.0134 | 2.0 | 1778 | 0.0031 | 0.9989 |
| 0.0118 | 3.0 | 2667 | 0.0021 | 0.9993 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.0+cu111
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
| d6a2dbf67949a0edb35e6b538738d9a5 |
google/multiberts-seed_2-step_1800k | google | bert | 8 | 14 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1800k'] | false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1800k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #2, captured at step 1800k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1800k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1800k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 7a32a55bd042884075cef2c8c959df03 |
l-tran/distilroberta-base-OLID-MLM | l-tran | roberta | 9 | 16 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,256 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-OLID-MLM
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 398 | 0.0143 |
| 1.0511 | 2.0 | 796 | 0.0031 |
| 0.0256 | 3.0 | 1194 | 0.0021 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 8607875ac3eb8d6171fa685c1789e722 |
domdomreloaded/bert-base-uncased-finetuned-swag | domdomreloaded | bert | 22 | 5 | transformers | 0 | multiple-choice | true | false | false | apache-2.0 | null | ['swag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,272 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6045
- Accuracy: 0.7960
## Model description
More information needed
## Intended uses & limitations
More information needed
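A minimal inference sketch for SWAG-style multiple choice (the prompt and endings below are invented for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "domdomreloaded/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

prompt = "She opened the fridge and"
choices = ["took out the milk.", "drove to the airport."]

enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # add batch dimension
logits = model(**inputs).logits
print(logits.softmax(-1))  # probability of each ending
```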
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 |
| 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| d306d34bc39b5903f59d54a462aa471c |
cardiffnlp/twitter-roberta-base-sep2021 | cardiffnlp | roberta | 9 | 5 | transformers | 0 | fill-mask | true | false | false | mit | ['en'] | ['twitter-api'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['timelms', 'twitter'] | false | true | true | 4,657 | false |
# Twitter September 2021 (RoBERTa-base, 120M)
This is a RoBERTa-base model trained on 119.66M tweets until the end of September 2021.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split():
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.39329 fully
2) 0.26694 getting
3) 0.17438 not
4) 0.03422 still
5) 0.01845 all
------------------------------
I keep forgetting to bring a <mask>.
1) 0.06773 mask
2) 0.04548 book
3) 0.03826 charger
4) 0.03506 backpack
5) 0.02997 bag
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.63009 the
2) 0.16154 The
3) 0.02110 this
4) 0.01903 End
5) 0.00810 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.99022 The movie was great
2) 0.96274 Just finished reading 'Embeddings in NLP'
3) 0.96006 I just ordered fried chicken 🐣
4) 0.95725 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-sep2021"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
``` | 2ba18a0d275b347413a624d09766471d |
CompVis/stable-diffusion-v1-3 | CompVis | null | 18 | 848 | diffusers | 24 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 5 | 3 | 2 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 12,886 | false |
# Stable Diffusion v1-3 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with D🧨iffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-3** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned for 195,000 steps at resolution `512x512` on "laion-improved-aesthetics", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
For more information, please refer to [Training](#training).
The weights here are intended to be used with the D🧨iffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, see [stable-diffusion-v-1-3-original](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
```bash
pip install --upgrade diffusers transformers scipy
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-3"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-3"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
### Training Data
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
### Training Procedure
Stable Diffusion v1-3 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4; see the shape-check sketch after this list.
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
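To make those shapes concrete, here is a minimal sketch (assuming the `diffusers` `AutoencoderKL` API and that this repo stores the VAE under a `vae` subfolder, as `diffusers`-format pipelines conventionally do):

```python
import torch
from diffusers import AutoencoderKL

# load only the autoencoder component of the pipeline
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-3", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # dummy stand-in for a real image batch in [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64]), i.e. H/8 x W/8 x 4
```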
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
### Training details
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
| 3f41400f83f4634a329b0fea07df24ab |
facebook/regnet-x-120 | facebook | regnet | 6 | 11 | transformers | 0 | image-classification | true | true | false | apache-2.0 | null | ['imagenet-1k'] | null | 2 | 0 | 1 | 1 | 0 | 0 | 0 | ['vision', 'image-classification'] | false | true | true | 1,893 | false |
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-120")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-120")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
| a2097c1f1a800116809b7b8148c121d0 |
samitizerxu/wav2vec2-xls-r-300m-fr | samitizerxu | wav2vec2 | 23 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'common_voice', 'fr', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event'] | true | true | true | 1,936 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice (FR) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6521
- Wer: 0.4330
## Model description
More information needed
## Intended uses & limitations
More information needed
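Pending a fuller write-up, here is a minimal inference sketch (the audio file name is hypothetical; wav2vec2 checkpoints generally expect 16 kHz mono input):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="samitizerxu/wav2vec2-xls-r-300m-fr")
print(asr("example_fr.wav")["text"])  # hypothetical 16 kHz French audio file
```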
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.6773 | 0.8 | 500 | 1.3907 | 0.9864 |
| 0.9526 | 1.6 | 1000 | 0.7760 | 0.6448 |
| 0.6418 | 2.4 | 1500 | 0.7605 | 0.6194 |
| 0.5028 | 3.2 | 2000 | 0.6516 | 0.5322 |
| 0.4133 | 4.0 | 2500 | 0.6303 | 0.5097 |
| 0.3285 | 4.8 | 3000 | 0.6422 | 0.5062 |
| 0.2764 | 5.6 | 3500 | 0.5936 | 0.4748 |
| 0.2361 | 6.4 | 4000 | 0.6486 | 0.4683 |
| 0.2049 | 7.2 | 4500 | 0.6321 | 0.4532 |
| 0.176 | 8.0 | 5000 | 0.6230 | 0.4482 |
| 0.1393 | 8.8 | 5500 | 0.6595 | 0.4403 |
| 0.1141 | 9.6 | 6000 | 0.6552 | 0.4348 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| cdb8b09f9948d844a5c47024c7bff8de |
muhtasham/tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab | muhtasham | bert | 12 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,683 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4922
## Model description
More information needed
## Intended uses & limitations
More information needed
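Pending more details, here is a minimal fill-mask sketch (illustrative only, since this is a small MLM checkpoint trained with a custom, expanded-vocabulary tokenizer):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="muhtasham/tiny-mlm-glue-mrpc-custom-tokenizer-expand-vocab")
masked = f"The company said it {fill.tokenizer.mask_token} expectations."
for pred in fill(masked, top_k=3):
    print(pred["token_str"], pred["score"])
```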
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.1957 | 1.09 | 500 | 5.5172 |
| 5.5021 | 2.18 | 1000 | 5.1265 |
| 5.2379 | 3.27 | 1500 | 5.0413 |
| 5.1491 | 4.36 | 2000 | 4.9136 |
| 5.014 | 5.45 | 2500 | 4.8558 |
| 4.9507 | 6.54 | 3000 | 4.7338 |
| 4.7924 | 7.63 | 3500 | 4.6922 |
| 4.7739 | 8.71 | 4000 | 4.6100 |
| 4.6749 | 9.8 | 4500 | 4.6575 |
| 4.6135 | 10.89 | 5000 | 4.4922 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
| 4ead2fe9b148442afddca8cfa02581ac |
amitjohn007/second-mobil-bert-finetuned-squad | amitjohn007 | mobilebert | 8 | 4 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,369 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amitjohn007/second-mobil-bert-finetuned-squad
This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4587
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
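Pending more details, here is a minimal sketch (the question and context are made up; `framework="tf"` because this checkpoint ships TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="amitjohn007/second-mobil-bert-finetuned-squad",
    framework="tf",
)
print(qa(question="Who wrote Hamlet?", context="Hamlet is a tragedy written by William Shakespeare."))
```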
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16599, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.6441 | 0 |
| 0.5349 | 1 |
| 0.4587 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
| fa88bd5d6bb1a247d9f2e21664589f71 |
hfl/chinese-pert-large-mrc | hfl | bert | 8 | 35 | transformers | 3 | question-answering | true | true | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,449 | false |
## A Chinese MRC model built on Chinese PERT-large
**Please use `BertForQuestionAnswering` to load this model!**
This is a Chinese machine reading comprehension (MRC) model built on PERT-large and fine-tuned on a mixture of Chinese MRC datasets.
PERT is a model pre-trained with a permuted language model (PerLM) objective, learning text semantic information in a self-supervised manner without introducing [MASK] tokens. It yields competitive results in tasks such as reading comprehension and sequence labeling.
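For example, here is a minimal extractive-QA sketch (the question and passage are made up):

```python
from transformers import BertTokenizerFast, BertForQuestionAnswering, pipeline

tokenizer = BertTokenizerFast.from_pretrained("hfl/chinese-pert-large-mrc")
model = BertForQuestionAnswering.from_pretrained("hfl/chinese-pert-large-mrc")
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="苏州大学在哪里?", context="苏州大学位于江苏省苏州市。"))
```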
Results on Chinese MRC datasets (EM/F1):
(We report the checkpoint that has the best AVG score)
| | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-large | 73.5/90.8 | 91.2/95.7 | 63.0/79.3 | 75.9/88.6 |
Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
You may also be interested in:
- Chinese Minority Languages CINO: https://github.com/ymcui/Chinese-Minority-PLM
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
- More resources by HFL: https://github.com/ymcui/HFL-Anthology
| 232f029f8e82f311a60ab165999d23f8 |
dipteshkanojia/hing-roberta-CM-run-5 | dipteshkanojia | xlm-roberta | 9 | 4 | transformers | 0 | text-classification | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,101 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-roberta-CM-run-5
This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6447
- Accuracy: 0.7525
- Precision: 0.7030
- Recall: 0.7120
- F1: 0.7064
## Model description
More information needed
## Intended uses & limitations
More information needed
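Pending more details, here is a minimal sketch (the input is a made-up Hindi-English code-mixed sentence; the class labels are not documented in this card, so expect generic `LABEL_*` names):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="dipteshkanojia/hing-roberta-CM-run-5")
print(clf("yeh film ekdum mast thi, must watch!"))
```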
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9492 | 1.0 | 497 | 0.7476 | 0.6157 | 0.6060 | 0.6070 | 0.5171 |
| 0.7013 | 2.0 | 994 | 0.7093 | 0.6982 | 0.6716 | 0.6864 | 0.6663 |
| 0.4871 | 3.0 | 1491 | 0.8294 | 0.7284 | 0.6714 | 0.6867 | 0.6723 |
| 0.3838 | 4.0 | 1988 | 1.1275 | 0.7505 | 0.6969 | 0.7025 | 0.6994 |
| 0.254 | 5.0 | 2485 | 1.3831 | 0.7264 | 0.6781 | 0.6975 | 0.6850 |
| 0.1765 | 6.0 | 2982 | 2.0625 | 0.7384 | 0.7068 | 0.6948 | 0.6896 |
| 0.1127 | 7.0 | 3479 | 1.9691 | 0.7425 | 0.6925 | 0.7065 | 0.6982 |
| 0.0757 | 8.0 | 3976 | 2.3871 | 0.7425 | 0.7183 | 0.6926 | 0.6924 |
| 0.0572 | 9.0 | 4473 | 2.4037 | 0.7344 | 0.6916 | 0.6929 | 0.6882 |
| 0.0458 | 10.0 | 4970 | 2.3062 | 0.7586 | 0.7174 | 0.7219 | 0.7164 |
| 0.0405 | 11.0 | 5467 | 2.5591 | 0.7445 | 0.6925 | 0.6964 | 0.6942 |
| 0.0292 | 12.0 | 5964 | 2.5215 | 0.7384 | 0.6875 | 0.6998 | 0.6917 |
| 0.0264 | 13.0 | 6461 | 2.7551 | 0.7586 | 0.7122 | 0.7035 | 0.7037 |
| 0.0299 | 14.0 | 6958 | 2.6536 | 0.7465 | 0.7114 | 0.7088 | 0.7035 |
| 0.0208 | 15.0 | 7455 | 2.5190 | 0.7505 | 0.6989 | 0.7083 | 0.7030 |
| 0.0263 | 16.0 | 7952 | 2.7092 | 0.7485 | 0.7076 | 0.6998 | 0.6962 |
| 0.0077 | 17.0 | 8449 | 2.5933 | 0.7525 | 0.7042 | 0.7143 | 0.7081 |
| 0.009 | 18.0 | 8946 | 2.5831 | 0.7485 | 0.6991 | 0.7152 | 0.7050 |
| 0.0108 | 19.0 | 9443 | 2.6360 | 0.7545 | 0.7050 | 0.7167 | 0.7098 |
| 0.0077 | 20.0 | 9940 | 2.6447 | 0.7525 | 0.7030 | 0.7120 | 0.7064 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 9bb33454676308d2da58f87817db1be4 |
Shashidhar/distilbert-base-uncased-finetuned-squad | Shashidhar | distilbert | 29 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,179 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
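The hyperparameters above correspond roughly to the following `TrainingArguments` (a sketch, not the exact training script; the output directory is hypothetical, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # hypothetical
    learning_rate=7e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
)
```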
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1205 | 1.0 | 5533 | 1.1080 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 36e2a2fd479c5c1590bec511e3938606 |
KoichiYasuoka/deberta-large-chinese-erlangshen-ud-goeswith | KoichiYasuoka | deberta-v2 | 9 | 16 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['zh'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['chinese', 'token-classification', 'pos', 'dependency-parsing'] | false | true | true | 2,799 | false |
# deberta-large-chinese-erlangshen-ud-goeswith
## Model Description
This is a DeBERTa(V2) model pre-trained on Chinese texts (both simplified and traditional) for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-large-chinese-erlangshen-upos](https://huggingface.co/KoichiYasuoka/deberta-large-chinese-erlangshen-upos).
## How to Use
```py
class UDgoeswith(object):
  def __init__(self,bert):
    from transformers import AutoTokenizer,AutoModelForTokenClassification
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForTokenClassification.from_pretrained(bert)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=self.tokenizer(text,return_offsets_mapping=True)
    v=w["input_ids"]
    # one input per token: mask that token and append its original id at the end
    x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
    with torch.no_grad():
      # head/label scores for each (masked token, candidate head) pair; drop [CLS]/[SEP] and the appended id
      e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
    # root labels are only allowed on the diagonal (a token pointing at itself)
    r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
    e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
    # "goeswith" (subword continuation) may only attach to an adjacent preceding token
    g=self.model.config.label2id["X|_|goeswith"]
    r=numpy.tri(e.shape[0])
    for i in range(e.shape[0]):
      for j in range(i+2,e.shape[1]):
        r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
    e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
    # score matrix for maximum-spanning-tree decoding; row/column 0 is the artificial root
    m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
    m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
    p=numpy.zeros(m.shape)
    p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
    for i in range(1,m.shape[0]):
      m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      # more than one token attached to the root: keep the best-scoring one and re-parse
      m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
      m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    # emit the parse in CoNLL-U format
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/deberta-large-chinese-erlangshen-ud-goeswith")
print(nlp("我把这本书看完了"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-large-chinese-erlangshen-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("我把这本书看完了"))
```
| 3b144bbede853b9fbfd941507a30b20c |
jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s900 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 476 | false |
# exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s900
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
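A minimal sketch with that tool (the audio file name is hypothetical):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-10_sixties-0_s900")
transcriptions = model.transcribe(["speech_16khz.wav"])  # hypothetical 16 kHz Spanish audio
print(transcriptions[0]["transcription"])
```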
| 358b0ca6113b8380cecf85130e9c2997 |
KoichiYasuoka/chinese-roberta-large-upos | KoichiYasuoka | bert | 8 | 8 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['zh'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing'] | false | true | true | 902 | false |
# chinese-roberta-large-upos
## Model Description
This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/chinese-roberta-large-upos")
```
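Either way, tagging a sentence is then a one-liner (shown here for the `esupar` loader; the sentence is just an example):

```py
doc=nlp("我把这本书看完了")  # example sentence
print(doc)  # tokens with their UPOS tags in a CoNLL-U-like layout
```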
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| 542314d3d10d3a038776c33db49274f8 |