repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
facebook/wmt21-dense-24-wide-en-x | facebook | m2m_100 | 9 | 736 | transformers | 17 | translation | true | false | false | mit | ['multilingual', 'ha', 'is', 'ja', 'cs', 'ru', 'zh', 'de', 'en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['translation', 'wmt21'] | false | true | true | 2,414 | false |
# WMT 21 En-X
WMT 21 En-X is a 4.7B multilingual encoder-decoder (seq-to-seq) model trained for one-to-many multilingual translation.
It was introduced in this [paper](https://arxiv.org/abs/2108.03265) and first released in [this](https://github.com/pytorch/fairseq/tree/main/examples/wmt21) repository.
The model can directly translate English text into 7 other languages: Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de).
To translate into a target language, the target language id is forced as the first generated token. To do this, pass the `forced_bos_token_id` parameter to the `generate` method.
*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.*
To install `sentencepiece`, run `pip install sentencepiece`.
Since the model was trained with domain tags, you should prepend them to the input as well.
* "wmtdata newsdomain": Use for sentences in the news domain
* "wmtdata otherdomain": Use for sentences in all other domain
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt21-dense-24-wide-en-x")
tokenizer = AutoTokenizer.from_pretrained("facebook/wmt21-dense-24-wide-en-x")
inputs = tokenizer("wmtdata newsdomain One model for many languages.", return_tensors="pt")
# translate English to German
generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("de"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Ein Modell für viele Sprachen."
# translate English to Icelandic
generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("is"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Ein fyrirmynd fyrir mörg tungumál."
```
See the [model hub](https://huggingface.co/models?filter=wmt21) to look for more fine-tuned versions.
## Languages covered
English (en), Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de)
## BibTeX entry and citation info
```
@inproceedings{tran2021facebook,
title={Facebook AI’s WMT21 News Translation Task Submission},
author={Chau Tran and Shruti Bhosale and James Cross and Philipp Koehn and Sergey Edunov and Angela Fan},
booktitle={Proc. of WMT},
year={2021},
}
``` | e6801e52b08ba47b4a091ac3bd5b73ed |
IMSyPP/hate_speech_it | IMSyPP | bert | 6 | 83 | transformers | 0 | text-classification | true | false | false | mit | ['it'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 654 | false |
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in the Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on the Italian ALBERTO pre-trained language model.
## Tokenizer
During training, the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest using the same tokenizer for inference.
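For example, here is a minimal inference sketch (an illustrative snippet, not taken from the original card); loading the model through the `text-classification` pipeline also pulls the matching tokenizer from this repository:
```python
from transformers import pipeline

# The pipeline loads both the model and its (ALBERTO-based) tokenizer from the Hub.
classifier = pipeline("text-classification", model="IMSyPP/hate_speech_it")

# Returns a label corresponding to one of the four classes listed under "Model output" below.
print(classifier("Questo è un commento di esempio."))
```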
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | 2d246f7625b8a4e4c498e31cb2d0e93c |
kadirnar/AnimeSR_Paper_Model | kadirnar | null | 3 | 0 | null | 0 | object-detection | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'computer-vision', 'gan', 'animegan'] | false | true | true | 621 | false |
### Model Description
[AnimeSR](https://arxiv.org/abs/2206.07038): Learning Real-World Super-Resolution Models for Animation Videos
### Installation
```
pip install animesr
```
### Anime GAN
```python
from animesr.inference_animesr_video import main
main('test.mp4', 'kadirnar/AnimeSR_Paper_Model')  # arguments: source video path, model id
```
### BibTeX Entry and Citation Info
```
@InProceedings{wu2022animesr,
author={Wu, Yanze and Wang, Xintao and Li, Gen and Shan, Ying},
title={AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos},
booktitle={Advances in Neural Information Processing Systems},
year={2022}
}
``` | 95d5089787beeaa0cd245e516b052863 |
chuchun9/distilbert-base-uncased-finetuned-squad | chuchun9 | distilbert | 25 | 6 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,379 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6727
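For illustration, a minimal usage sketch with the `question-answering` pipeline (this snippet is not part of the original card):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="chuchun9/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```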
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5227 | 1.0 | 1107 | 2.0485 |
| 1.7555 | 2.0 | 2214 | 1.7443 |
| 1.4567 | 3.0 | 3321 | 1.6511 |
| 1.2107 | 4.0 | 4428 | 1.6496 |
| 1.083 | 5.0 | 5535 | 1.6727 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 959278c3725aed02c22b53080c87593a |
shivi/sd-album-covers | shivi | null | 57 | 20 | diffusers | 2 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 7,072 | false | ### sd-album-covers Dreambooth model trained by shivi with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
Taylor Swift Red Album Cover (use that on your prompt)
Santana Africa Speaks Album Cover (use that on your prompt)
Beatles Abbey Road Album Cover (use that on your prompt)
Led Zepellin Celebration Day album cover (use that on your prompt)
Maroon5 band Overexposed music album cover (use that on your prompt)
Metallica Harvester of Sorrow music album cover (use that on your prompt)
Linkin Park band logo (use that on your prompt)
![Linkin Park band logo 0](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%281%29.jpg)![Linkin Park band logo 1](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%282%29.jpg)![Linkin Park band logo 2](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%283%29.jpg)![Linkin Park band logo 3](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%284%29.jpg)![Linkin Park band logo 4](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%285%29.jpg)![Linkin Park band logo 5](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Linkin%20Park%20band%20logo_%286%29.jpg)![Metallica Harvester of Sorrow music album cover 6](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Metallica%20Harvester%20of%20Sorrow%20music%20album%20cover_%281%29.jpg)![Metallica Harvester of Sorrow music album cover 7](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Metallica%20Harvester%20of%20Sorrow%20music%20album%20cover_%282%29.jpg)![Metallica Harvester of Sorrow music album cover 8](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Metallica%20Harvester%20of%20Sorrow%20music%20album%20cover_%283%29.jpg)![Metallica Harvester of Sorrow music album cover 9](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Metallica%20Harvester%20of%20Sorrow%20music%20album%20cover_%284%29.jpg)![Metallica Harvester of Sorrow music album cover 10](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Metallica%20Harvester%20of%20Sorrow%20music%20album%20cover_%285%29.jpg)![Maroon5 band Overexposed music album cover 11](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Maroon5%20band%20Overexposed%20music%20album%20cover_%281%29.jpg)![Maroon5 band Overexposed music album cover 12](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Maroon5%20band%20Overexposed%20music%20album%20cover_%282%29.jpg)![Maroon5 band Overexposed music album cover 13](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Maroon5%20band%20Overexposed%20music%20album%20cover_%283%29.jpg)![Maroon5 band Overexposed music album cover 14](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Maroon5%20band%20Overexposed%20music%20album%20cover_%284%29.jpg)![Maroon5 band Overexposed music album cover 15](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Maroon5%20band%20Overexposed%20music%20album%20cover_%285%29.jpg)![Led Zepellin Celebration Day album cover 16](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Led%20Zepellin%20Celebration%20Day%20album%20cover_%281%29.jpg)![Led Zepellin Celebration Day album cover 17](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Led%20Zepellin%20Celebration%20Day%20album%20cover_%282%29.jpg)![Led Zepellin Celebration Day album cover 18](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Led%20Zepellin%20Celebration%20Day%20album%20cover_%283%29.jpg)![Led Zepellin Celebration Day album cover 19](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Led%20Zepellin%20Celebration%20Day%20album%20cover_%284%29.jpg)![Led Zepellin Celebration Day album cover 
20](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Led%20Zepellin%20Celebration%20Day%20album%20cover_%285%29.jpg)![Beatles Abbey Road Album Cover 21](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Beatles%20Abbey%20Road%20Album%20Cover_%281%29.jpg)![Beatles Abbey Road Album Cover 22](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Beatles%20Abbey%20Road%20Album%20Cover_%282%29.jpg)![Beatles Abbey Road Album Cover 23](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Beatles%20Abbey%20Road%20Album%20Cover_%283%29.jpg)![Beatles Abbey Road Album Cover 24](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Beatles%20Abbey%20Road%20Album%20Cover_%284%29.jpg)![Beatles Abbey Road Album Cover 25](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Beatles%20Abbey%20Road%20Album%20Cover_%285%29.jpg)![Santana Africa Speaks Album Cover 26](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Santana%20Africa%20Speaks%20Album%20Cover_%281%29.jpg)![Santana Africa Speaks Album Cover 27](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Santana%20Africa%20Speaks%20Album%20Cover_%282%29.jpg)![Santana Africa Speaks Album Cover 28](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Santana%20Africa%20Speaks%20Album%20Cover_%283%29.jpg)![Santana Africa Speaks Album Cover 29](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Santana%20Africa%20Speaks%20Album%20Cover_%284%29.jpg)![Santana Africa Speaks Album Cover 30](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Santana%20Africa%20Speaks%20Album%20Cover_%285%29.jpg)![Taylor Swift Red Album Cover 31](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Taylor%20Swift%20Red%20Album%20Cover_%281%29.jpg)![Taylor Swift Red Album Cover 32](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Taylor%20Swift%20Red%20Album%20Cover_%282%29.jpg)![Taylor Swift Red Album Cover 33](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Taylor%20Swift%20Red%20Album%20Cover_%283%29.jpg)![Taylor Swift Red Album Cover 34](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Taylor%20Swift%20Red%20Album%20Cover_%284%29.jpg)![Taylor Swift Red Album Cover 35](https://huggingface.co/shivi/sd-album-covers/resolve/main/concept_images/Taylor%20Swift%20Red%20Album%20Cover_%285%29.jpg)
| 0518e74ccd4f7fedf2899045b2a4c9f0 |
espnet/wanchichen_fleurs_asr_conformer_scctc | espnet | null | 29 | 0 | espnet | 0 | null | false | false | false | cc-by-4.0 | ['en'] | ['google/fleurs'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'speech-recognition'] | false | true | true | 1,404 | false |
## ESPnet2 ASR model
### `espnet/wanchichen_fleurs_asr_conformer_scctc`
This model was trained by William Chen using the fleurs recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/fleurs/asr1
./run.sh
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Oct 22 14:55:21 EDT 2022`
- python version: `3.8.6 (default, Dec 17 2020, 16:57:01) [GCC 10.2.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `e534106b837ff6cdd29977a52983c022ff1afb0f`
- Commit date: `Sun Sep 11 22:31:23 2022 -0400`
## asr_train_asr_xlsr_conformer_scctc_raw_all_bpe6500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|1592160|70.5|26.1|3.4|3.4|32.9|97.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|10235271|92.2|4.7|3.1|2.6|10.4|97.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|9622352|91.3|5.6|3.1|2.7|11.4|97.0|
| 20def58ff860df51c9ffa070273edc4f |
JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | JorisCos | null | 3 | 5,590 | asteroid | 5 | audio-to-audio | true | false | false | cc-by-sa-4.0 | null | ['Libri1Mix', 'enh_single'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['asteroid', 'audio', 'DCCRNet', 'audio-to-audio', 'speech-enhancement'] | false | true | true | 1,598 | false |
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
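A minimal usage sketch (an illustrative snippet, not part of the original card; it assumes a recent Asteroid release in which pretrained models can be pulled from the Hugging Face Hub via `BaseModel.from_pretrained` and run with the `separate` helper):
```python
from asteroid.models import BaseModel

# Pull the pretrained DCCRNet checkpoint from the Hugging Face Hub;
# the architecture is resolved from the checkpoint's configuration.
model = BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k")

# Enhance a 16 kHz single-channel recording; the estimate is written next to the input file.
model.separate("noisy_speech_16k.wav")
```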
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | fa8a452225fe528568f044161cc0c6ab |
flax-community/gpt-neo-1.3B-apps | flax-community | gpt_neo | 12 | 5 | transformers | 3 | text-generation | true | false | true | mit | ['en', 'python'] | ['apps'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['gpt_neo', 'code_synthesis'] | false | true | true | 5,174 | false |
# GPT-Neo-1.3B-APPS
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-Neo-1.3B-APPS is GPT-Neo-1.3B fine-tuned on the APPS dataset. This model is specialized to solve programming tasks.
## Training data
The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each.
This model is fine-tuned using most of the APPS dataset including both train and test split to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps).
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py).
Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script:
```bash
python run_clm_apps.py \
--output_dir $HOME/gpt-neo-1.3B-apps \
--model_name_or_path EleutherAI/gpt-neo-1.3B \
--dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \
--dataset_config_name formatted \
--do_train --do_eval \
--block_size="1024" \
--per_device_train_batch_size="3" \
--per_device_eval_batch_size="3" \
--preprocessing_num_workers="16" \
--learning_rate="8e-5" \
--warmup_steps="800" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--weight_decay="0.1" \
--overwrite_output_dir \
--num_train_epochs="5" \
--logging_steps="50" \
--eval_steps="2000" \
--report_to="wandb" \
--dtype="bfloat16" \
--save_strategy epoch \
--gradient_accumulation_steps 1 \
```
## Intended Use and Limitations
The model is finetuned to solve programming problems given a text description and optional starter code.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps")
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

prompt = """
A function to greet user. Given a user name it should say hello
def greet(name):
ANSWER:
"""
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
                     early_stopping=True, eos_token_id=tokenizer.eos_token_id)
# Print only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discuss are highlighted here as it pertains to this dataset and models that may be trained from it. **As well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily the correct solution. Not properly evaluating the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset.
GPT-CC is fine-tuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon...
| 251281f4f3f24aab72ae032fd8510187 |
Sushant45/Canadian_Armed_Forces-clustered | Sushant45 | distilbert | 8 | 29 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,871 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Canadian_Armed_Forces-clustered
This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5757
- Train End Logits Accuracy: 0.8542
- Train Start Logits Accuracy: 0.8160
- Validation Loss: 0.4930
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.4000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5757 | 0.8542 | 0.8160 | 0.4930 | 1.0 | 0.4000 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 68759fc95135ad275b6300da3db62019 |
cankeles/DPTNet_WHAMR_enhsingle_16k | cankeles | null | 3 | 13 | asteroid | 1 | audio-to-audio | true | false | false | cc-by-sa-4.0 | null | ['Libri1Mix', 'enh_single'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['asteroid', 'audio', 'DPTNet', 'audio-to-audio'] | false | true | true | 1,325 | false | ## Asteroid model `cankeles/DPTNet_WHAMR_enhsingle_16k`
Description:
This model was trained by M. Can Keleş using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
segment: 2.0
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
positional arguments: {}
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 60
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On custom min test set :
```yml
'sar': 12.853384266251018,
'sar_imp': 8.950332361953906,
'sdr': 12.853384266251018,
'sdr_imp': 8.950332361953906,
'si_sdr': 12.247012621312548,
'si_sdr_imp': 8.429646186633407,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9022338865380519,
'stoi_imp': 0.09735707619500522
```
| 0a8718a840c9477ccf30de8a56e1c8a4 |
wietsedv/xlm-roberta-base-ft-udpos28-hyw | wietsedv | xlm-roberta | 8 | 8 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['hyw'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['part-of-speech', 'token-classification'] | true | true | true | 578 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Western Armenian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw")
```
| 92ebb7758c347d46c212592b59145783 |
emmyapi/distilbart-podimo-data-eval-3 | emmyapi | bart | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,206 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-podimo-data-eval-3
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3828
- Rouge1: 32.8203
- Rouge2: 7.8994
- Rougel: 18.9659
- Rougelsum: 29.4196
- Gen Len: 114.5264
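A minimal usage sketch (an illustrative snippet, not part of the original card; the example text is only a placeholder):
```python
from transformers import pipeline

# Abstractive summarization with the fine-tuned DistilBART checkpoint.
summarizer = pipeline("summarization", model="emmyapi/distilbart-podimo-data-eval-3")

episode_description = "In this episode we talk about the history of aviation and its pioneers. " * 20
summary = summarizer(episode_description, max_length=128, min_length=32, truncation=True)
print(summary[0]["summary_text"])
```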
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 3.9049 | 1.0 | 132 | 3.5343 | 30.2542 | 6.031 | 17.269 | 26.9847 | 113.7689 |
| 3.4248 | 2.0 | 264 | 3.4055 | 31.6518 | 7.2786 | 18.2641 | 28.4006 | 114.6547 |
| 3.1594 | 3.0 | 396 | 3.3579 | 32.0442 | 7.3554 | 18.3492 | 28.7615 | 113.7443 |
| 2.9645 | 4.0 | 528 | 3.3445 | 32.0945 | 7.637 | 18.6289 | 28.899 | 115.5321 |
| 2.8073 | 5.0 | 660 | 3.3470 | 32.7852 | 7.9597 | 19.2358 | 29.5057 | 108.3519 |
| 2.685 | 6.0 | 792 | 3.3532 | 32.3775 | 7.661 | 18.6719 | 28.9282 | 117.1104 |
| 2.5941 | 7.0 | 924 | 3.3711 | 32.6976 | 7.8917 | 19.069 | 29.3785 | 113.1943 |
| 2.5267 | 8.0 | 1056 | 3.3828 | 32.8203 | 7.8994 | 18.9659 | 29.4196 | 114.5264 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| 72b1cca90d88c8c60dc2c213298ad133 |
rajistics/donut-base-sroiev2 | rajistics | vision-encoder-decoder | 14 | 0 | transformers | 0 | null | true | false | false | mit | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroiev2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 53163a091a88e6264f3e9ac001b62b46 |
jarvisx17/wav2vec2-base-timit-small | jarvisx17 | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,986 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-small
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5361
- Wer: 0.3380
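A minimal usage sketch (an illustrative snippet, not part of the original card; the audio file name is a placeholder and the input is assumed to be sampled at 16 kHz):
```python
from transformers import pipeline

# CTC-based speech recognition with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="jarvisx17/wav2vec2-base-timit-small")

# The pipeline accepts a path to an audio file (or a raw waveform sampled at 16 kHz).
print(asr("speech_16k.wav")["text"])
```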
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.571 | 1.0 | 500 | 1.9252 | 1.0022 |
| 0.8969 | 2.01 | 1000 | 0.5066 | 0.5292 |
| 0.4326 | 3.01 | 1500 | 0.4523 | 0.4562 |
| 0.2993 | 4.02 | 2000 | 0.4228 | 0.4202 |
| 0.2335 | 5.02 | 2500 | 0.4252 | 0.4178 |
| 0.2009 | 6.02 | 3000 | 0.4136 | 0.3910 |
| 0.1552 | 7.03 | 3500 | 0.4747 | 0.3863 |
| 0.1388 | 8.03 | 4000 | 0.4359 | 0.3859 |
| 0.1226 | 9.04 | 4500 | 0.4367 | 0.3879 |
| 0.1109 | 10.04 | 5000 | 0.4360 | 0.3760 |
| 0.0991 | 11.04 | 5500 | 0.4899 | 0.3672 |
| 0.0882 | 12.05 | 6000 | 0.4608 | 0.3653 |
| 0.0792 | 13.05 | 6500 | 0.4882 | 0.3703 |
| 0.0745 | 14.06 | 7000 | 0.4716 | 0.3625 |
| 0.065 | 15.06 | 7500 | 0.4896 | 0.3651 |
| 0.0596 | 16.06 | 8000 | 0.4831 | 0.3659 |
| 0.0563 | 17.07 | 8500 | 0.5092 | 0.3585 |
| 0.0536 | 18.07 | 9000 | 0.5376 | 0.3675 |
| 0.0465 | 19.08 | 9500 | 0.5019 | 0.3534 |
| 0.049 | 20.08 | 10000 | 0.4869 | 0.3723 |
| 0.0423 | 21.08 | 10500 | 0.4947 | 0.3501 |
| 0.0348 | 22.09 | 11000 | 0.5524 | 0.3453 |
| 0.0315 | 23.09 | 11500 | 0.5369 | 0.3499 |
| 0.0312 | 24.1 | 12000 | 0.5283 | 0.3519 |
| 0.0258 | 25.1 | 12500 | 0.5202 | 0.3461 |
| 0.0249 | 26.1 | 13000 | 0.5270 | 0.3449 |
| 0.0236 | 27.11 | 13500 | 0.5388 | 0.3408 |
| 0.0206 | 28.11 | 14000 | 0.5361 | 0.3388 |
| 0.0224 | 29.12 | 14500 | 0.5361 | 0.3380 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
| 0dc34a9ff135d598509d460aa446ffb7 |
johko/capdec_025 | johko | null | 3 | 0 | null | 0 | image-to-text | false | false | false | apache-2.0 | ['en'] | ['MS-COCO', 'Flickr30k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Image Captioning'] | false | true | true | 1,348 | false |
# CapDec - NoiseLevel: 0.025
## Model Description
These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf).
Their method aims to train an image-captioning model using only text samples. To do so, they inject zero-mean Gaussian noise into the CLIP text embeddings before decoding.
In their words:
*Specifically, we assume that the visual embedding corresponding to a text embedding
lies somewhere within a ball of small radius around the text embedding (see Fig. 1).
We would like all text embeddings in this ball to decode to the same caption,which should
also correspond to the visual content mapped to this ball. We implement this intuition by
adding zero-mean Gaussian noise of STD to the text embedding before decoding it.*
The "Noise Level" of 0.025 is equivalent to the Noise Variance which is the square of the STD.
The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
## Datasets
The authors trained the model on MS-COCO and Flickr30k datasets.
## Performance
The authors don't explicitly report the performance for this NoiseLevel but it can be estimated from the following figure from the original paper:
![](capdec_performance.png) | 98c3c1dca52a704b8370370e0a94108d |
JovialValley/model_broadclass_onSet1 | JovialValley | wav2vec2 | 13 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 11,482 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_broadclass_onSet1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9014
- 0 Precision: 0.5217
- 0 Recall: 1.0
- 0 F1-score: 0.6857
- 0 Support: 24
- 1 Precision: 1.0
- 1 Recall: 0.7692
- 1 F1-score: 0.8696
- 1 Support: 39
- 2 Precision: 1.0
- 2 Recall: 0.5652
- 2 F1-score: 0.7222
- 2 Support: 23
- 3 Precision: 1.0
- 3 Recall: 0.75
- 3 F1-score: 0.8571
- 3 Support: 12
- Accuracy: 0.7755
- Macro avg Precision: 0.8804
- Macro avg Recall: 0.7711
- Macro avg F1-score: 0.7837
- Macro avg Support: 98
- Weighted avg Precision: 0.8829
- Weighted avg Recall: 0.7755
- Weighted avg F1-score: 0.7884
- Weighted avg Support: 98
- Wer: 0.9368
- Mtrix: [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 9, 30, 0, 0], [2, 10, 0, 13, 0], [3, 3, 0, 0, 9]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:|
| 2.395 | 4.16 | 100 | 2.2004 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 2.2919 | 8.33 | 200 | 2.1576 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 2.0987 | 12.49 | 300 | 2.0882 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.9079 | 16.65 | 400 | 1.8619 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.7168 | 20.82 | 500 | 1.6469 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.551 | 24.98 | 600 | 1.6614 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.6399 | 29.16 | 700 | 1.5818 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.3329 | 33.33 | 800 | 1.2267 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.1996 | 37.49 | 900 | 1.2143 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 1.01 | 41.65 | 1000 | 0.9496 | 0.2474 | 1.0 | 0.3967 | 24 | 1.0 | 0.0256 | 0.05 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2551 | 0.3119 | 0.2564 | 0.1117 | 98 | 0.4586 | 0.2551 | 0.1170 | 98 | 0.9777 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 38, 1, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] |
| 0.9516 | 45.82 | 1100 | 0.9471 | 0.2927 | 1.0 | 0.4528 | 24 | 1.0 | 0.3846 | 0.5556 | 39 | 1.0 | 0.0435 | 0.0833 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.4082 | 0.5732 | 0.3570 | 0.2729 | 98 | 0.7043 | 0.4082 | 0.3515 | 98 | 0.9661 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 24, 15, 0, 0], [2, 22, 0, 1, 0], [3, 12, 0, 0, 0]] |
| 0.9544 | 49.98 | 1200 | 0.9452 | 0.3582 | 1.0 | 0.5275 | 24 | 1.0 | 0.5128 | 0.6780 | 39 | 1.0 | 0.3043 | 0.4667 | 23 | 0.75 | 0.25 | 0.375 | 12 | 0.5510 | 0.7771 | 0.5168 | 0.5118 | 98 | 0.8122 | 0.5510 | 0.5544 | 98 | 0.9540 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 18, 20, 0, 1], [2, 16, 0, 7, 0], [3, 9, 0, 0, 3]] |
| 0.9538 | 54.16 | 1300 | 0.9259 | 0.4615 | 1.0 | 0.6316 | 24 | 1.0 | 0.6923 | 0.8182 | 39 | 1.0 | 0.5217 | 0.6857 | 23 | 0.8571 | 0.5 | 0.6316 | 12 | 0.7041 | 0.8297 | 0.6785 | 0.6918 | 98 | 0.8506 | 0.7041 | 0.7185 | 98 | 0.9439 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 11, 27, 0, 1], [2, 11, 0, 12, 0], [3, 6, 0, 0, 6]] |
| 0.952 | 58.33 | 1400 | 0.9052 | 0.4528 | 1.0 | 0.6234 | 24 | 1.0 | 0.6667 | 0.8 | 39 | 1.0 | 0.4348 | 0.6061 | 23 | 0.8889 | 0.6667 | 0.7619 | 12 | 0.6939 | 0.8354 | 0.6920 | 0.6978 | 98 | 0.8524 | 0.6939 | 0.7066 | 98 | 0.9464 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 12, 26, 0, 1], [2, 13, 0, 10, 0], [3, 4, 0, 0, 8]] |
| 0.8938 | 62.49 | 1500 | 0.9070 | 0.48 | 1.0 | 0.6486 | 24 | 0.9677 | 0.7692 | 0.8571 | 39 | 1.0 | 0.4348 | 0.6061 | 23 | 1.0 | 0.5833 | 0.7368 | 12 | 0.7245 | 0.8619 | 0.6968 | 0.7122 | 98 | 0.8598 | 0.7245 | 0.7324 | 98 | 0.9398 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 9, 30, 0, 0], [2, 12, 1, 10, 0], [3, 5, 0, 0, 7]] |
| 0.9027 | 66.65 | 1600 | 0.8919 | 0.5714 | 1.0 | 0.7273 | 24 | 1.0 | 0.8462 | 0.9167 | 39 | 1.0 | 0.7391 | 0.85 | 23 | 1.0 | 0.5 | 0.6667 | 12 | 0.8163 | 0.8929 | 0.7713 | 0.7902 | 98 | 0.8950 | 0.8163 | 0.8240 | 98 | 0.9398 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 6, 33, 0, 0], [2, 6, 0, 17, 0], [3, 6, 0, 0, 6]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| a358b3250a42c7dc150a15a2437218b8 |
mariolinml/roberta_large-ner-conll2003_0818_v1 | mariolinml | roberta | 14 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,441 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_large-ner-conll2003_0818_v1
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1481
- Precision: 0.8993
- Recall: 0.9269
- F1: 0.9129
- Accuracy: 0.9784
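A minimal usage sketch (an illustrative snippet, not part of the original card):
```python
from transformers import pipeline

# Token classification with sub-word predictions grouped into entity spans.
ner = pipeline(
    "token-classification",
    model="mariolinml/roberta_large-ner-conll2003_0818_v1",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```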
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2033 | 1.0 | 878 | 0.0472 | 0.9277 | 0.9551 | 0.9412 | 0.9887 |
| 0.044 | 2.0 | 1756 | 0.0428 | 0.9365 | 0.9610 | 0.9486 | 0.9895 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| ea60ea814e16a53d41e7393b7a8bc1b0 |
aXhyra/demo_hate_1234567 | aXhyra | distilbert | 10 | 10 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,389 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_hate_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.320702985778492e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 282 | 0.4850 | 0.7645 |
| 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 |
| 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 |
| 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| ac1685c5a7422439ced714d264808437 |
cataluna84/xlm-roberta-base-finetuned-panx-de-fr | cataluna84 | xlm-roberta | 10 | 11 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 924504ad4da58d47ae22bb170de9f040 |
HuggingAlex1247/gelectra-large-germaner | HuggingAlex1247 | electra | 18 | 9 | transformers | 0 | token-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,365 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HuggingAlex1247/gelectra-large-germaner
This model is a fine-tuned version of [deepset/gelectra-large](https://huggingface.co/deepset/gelectra-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1696
- Validation Loss: 0.0800
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3475, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
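For reference, a sketch of how the optimizer configuration above could be reconstructed in TensorFlow (an illustrative snippet; it assumes the `AdamWeightDecay` optimizer shipped with `transformers` and is not the original training script):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) decay from 3e-05 to 0 over 3475 steps, as logged above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=3475,
    end_learning_rate=0.0,
    power=1.0,
)

optimizer = AdamWeightDecay(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.01,
)
```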
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1696 | 0.0800 | 0 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.6.2
- Datasets 1.18.0
- Tokenizers 0.12.1
| 1cbba9c71c758dec92e9f2d570c5d01d |
jonatasgrosman/exp_w2v2t_it_vp-it_s411 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'it'] | false | true | true | 469 | false | # exp_w2v2t_it_vp-it_s411
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 5f20bb60c4d8b6601c8fccee3708df93 |
sd-concepts-library/a-female-hero-from-the-legend-of-mir | sd-concepts-library | null | 11 | 0 | null | 4 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,612 | false | ### a female hero from The Legend of Mir on Stable Diffusion
This is the `a <female-hero> from The Legend of Mir` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![a <female-hero> from The Legend of Mir 0](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/5.jpeg)
![a <female-hero> from The Legend of Mir 1](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/3.jpeg)
![a <female-hero> from The Legend of Mir 2](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/0.jpeg)
![a <female-hero> from The Legend of Mir 3](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/2.jpeg)
![a <female-hero> from The Legend of Mir 4](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/1.jpeg)
![a <female-hero> from The Legend of Mir 5](https://huggingface.co/sd-concepts-library/a-female-hero-from-the-legend-of-mir/resolve/main/concept_images/4.jpeg)
| a8c32f1a9f0ed8b24e11419cd313cb94 |
jonatasgrosman/wav2vec2-xls-r-1b-italian | jonatasgrosman | wav2vec2 | 25 | 9 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | true | true | true | 3,056 | false |
# Fine-tuned XLS-R 1B model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-italian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "it"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-italian"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset mozilla-foundation/common_voice_8_0 --config it --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-italian,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {I}talian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-italian}},
year={2022}
}
``` | ae960af97aa23a706bd5680f71c55ff0 |
Helsinki-NLP/opus-mt-fr-bzs | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-fr-bzs
* source languages: fr
* target languages: bzs
* OPUS readme: [fr-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.eval.txt)
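The original card does not include a usage snippet, so here is a minimal sketch (not from the original card) of loading the converted checkpoint with the Marian classes in `transformers`; the French example sentence is illustrative only:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-bzs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Bonjour, comment allez-vous ?"]  # illustrative French input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```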
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bzs | 30.2 | 0.477 |
| 4057788f15cc524c7e58817be5dc085f |
aipicasso/cool-japan-diffusion-2-1-1-beta | aipicasso | null | 21 | 318 | diffusers | 10 | text-to-image | false | false | false | other | null | null | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 7,137 | false |
# Cool Japan Diffusion 2.1.1 Beta Model Card
![Eyecatch](eyecatch.jpg)
[Notice: China will introduce legal restrictions on image-generating AI.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (a warning for people located in China)
English version is [here](README_en.md).
# Introduction
Cool Japan Diffusion (for learning) is a model fine-tuned from Stable Diffusion that specializes in expressing Cool Japan content such as anime, manga, and games. It has no particular connection to the Cabinet Office's Cool Japan Strategy.
# About the License
The license is simply the original CreativeML Open RAIL++-M License with a prohibition on commercial use (with some exceptions) added.
The reason for adding this prohibition is the concern that commercial use could harm the creative industry.
If this concern is dispelled, the next version will return to the original license and allow commercial use.
Incidentally, a Japanese translation of the original license is available [here](https://qiita.com/robitan/items/887d9f3153963114823d).
If you are at a for-profit company, please consult your legal department.
If you are using the model as a hobby, you should be fine as long as you exercise common sense.
As stated in the license, if you modify this model, the modified model must inherit this license.
# Legal and Ethical Considerations
This model was created in Japan, so Japanese law applies.
We assert that training this model is legal under Article 30-4 of the Japanese Copyright Act.
We also assert that distributing this model constitutes neither a principal offense nor aiding and abetting,
whether viewed under the Copyright Act or under Article 175 of the Penal Code. For details, see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as stated in the license, please handle the outputs of this model in accordance with the relevant laws and regulations.
That said, the author does think that the act of distributing this model raises ethical concerns.
This is because permission was not obtained from the authors of the copyrighted works used for training.
However, permission from authors is not legally required for training, and, as with search engines, there is no legal problem.
Therefore, please regard this distribution as also serving the purpose of investigating the ethical, rather than the legal, aspects.
# How to Use
If you simply want to try the model, please use this [Space](https://huggingface.co/spaces/aipicasso/cool-japan-diffusion-latest-demo).
Detailed instructions for handling this model are given in [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146).
The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1-beta/resolve/main/v2-1-1-beta.ckpt).
The following is a translation of the general model card.
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
- **Model type:** Diffusion-model-based text-to-image generation model
- **Language(s):** Japanese
- **License:** CreativeML Open RAIL++-M-NC License
- **Model description:** This model can generate appropriate images according to a prompt. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples of Using the Model
The model is used in the same way as Stable Diffusion v2.
There are many ways to run it, but we provide two patterns:
- Web UI
- Diffusers
### Using the Web UI
Please set it up by following [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146).
### Using Diffusers
Please use [🤗's Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the libraries:
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate images:
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "aipicasso/cool-japan-diffusion-2-1-1-beta"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "anime, a portrait of a girl with black short hair and red eyes, kimono, full color illustration, official art, 4k, detailed"
negative_prompt="(((deformed))), blurry, ((((bad anatomy)))), bad pupil, disfigured, poorly drawn face, mutation, mutated, (extra limb), (ugly), (poorly drawn hands), bad hands, fused fingers, messy drawing, broken legs censor, low quality, ((mutated hands and fingers:1.5), (long body :1.3), (mutation, poorly drawn :1.2), ((bad eyes)), ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 2d, 3d, cg, text"
image = pipe(prompt,negative_prompt=negative_prompt, width=512, height=512, num_inference_steps=20).images[0]
image.save("girl.png")
```
**Notes**:
- Using [xformers](https://github.com/facebookresearch/xformers) reportedly speeds up generation.
- If you are running on a GPU with limited memory, use `pipe.enable_attention_slicing()`.
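As a minimal sketch (not part of the original card), these options can be enabled on the `pipe` object created in the script above; the try/except around xformers is only one possible way to handle the optional dependency:
```python
# Optional speed/memory tweaks for the pipeline created above.
pipe.enable_attention_slicing()  # reduces VRAM usage at a small speed cost
try:
    # Requires the optional xformers package; reportedly speeds up attention.
    pipe.enable_xformers_memory_efficient_attention()
except Exception:
    pass  # xformers not installed; the pipeline still works without it
```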
#### Intended Uses
- Contests
  - Submissions to the [AI Art Grand Prix](https://www.aiartgrandprix.com/)
  - We will disclose all data used for fine-tuning so that judges can confirm the entry meets the screening criteria.
  - If you have requests related to contests, please let me know via the Hugging Face Community or similar channels.
- News reporting on image-generation AI
  - Allowed not only for public broadcasters but also for commercial companies
  - This is because we judged that the "right to know" about image-synthesis AI does not harm the creative industry, and we respect freedom of the press.
- Introducing Cool Japan
  - Explaining to people in other countries what Cool Japan is.
  - International students are often drawn to Japan by Cool Japan, and Alfred Increment feels they are very often disappointed to find that Cool Japan is regarded as "not cool" within Japan. Please take more pride in the parts of your own culture that people abroad admire.
- Research and development
  - Using the model on Discord
    - Prompt engineering
    - Fine-tuning (also called additional training)
      - DreamBooth, etc.
    - Merging with other models
  - Studying how well the Latent Diffusion Model suits Cool Japan content
  - Evaluating the performance of this model with metrics such as FID
  - Verifying with checksums or hash functions that this model is independent of models other than Stable Diffusion
- Education
  - Graduation projects by art college and vocational school students
  - University students' graduation theses and class assignments
  - Teachers explaining the current state of image-generation AI
- Self-expression
  - Expressing your own feelings and thoughts on social media
- Uses described in the Hugging Face Community
  - Please ask questions in Japanese or English
#### Out-of-Scope Uses
- Presenting things as if they were facts
- Use in monetized content such as YouTube videos
- Offering the model directly as a commercial service
- Doing things that cause trouble for teachers
- Anything else that would harm the creative industry
# Prohibited and Malicious Uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)) (this may violate the Copyright Act)
  - In particular, do not publish existing characters (this may violate the Copyright Act)
  - Note that the model reportedly [can also generate characters it was not trained on](https://twitter.com/ThePioneerJPnew/status/1609074173892235264?s=20&t=-rY1ufzNeIDT3Fm5YdME6g). (That tweet itself is permitted as research use.)
- Do not run Image-to-Image on other people's works without permission (this may violate the Copyright Act)
- Do not distribute obscene material (this may violate Article 175 of the Penal Code)
- Do not ignore the creative industry's customary etiquette
- Do not present things that are not based on fact as if they were facts (the offense of obstruction of business by force may apply)
  - Fake news
## Limitations and Bias
### Limitations
- Not yet well understood
### Bias
This model carries the same biases as Stable Diffusion.
Please be careful.
## Training
**Training data**
Stable Diffusion was fine-tuned mainly on the following data:
- For the VAE
  - Data that complies with Japanese domestic law, excluding unauthorized repost sites such as Danbooru: 600,000 items (an unlimited number of images produced via data augmentation)
- For the U-Net
  - Data that complies with Japanese domestic law, excluding unauthorized repost sites such as Danbooru: 800,000 pairs
**Training process**
The VAE and U-Net of Stable Diffusion were fine-tuned.
- **Hardware:** RTX 3090
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch size:** 1
## Evaluation Results
## Environmental Impact
Almost none.
- **Hardware type:** RTX 3090
- **Hours used:** 500
- **Cloud provider:** None
- **Training location:** Japan
- **Carbon emitted:** Not much
## References
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by Alfred Increment based on [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
| 128612d4fd5b2b28bac1f818fbffc7bb |
DOOGLAK/Article_100v7_NER_Model_3Epochs_UNAUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['article100v7_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,561 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_100v7_NER_Model_3Epochs_UNAUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6011
- Precision: 0.1661
- Recall: 0.0138
- F1: 0.0254
- Accuracy: 0.7860
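The card does not show how to run the model; a minimal sketch (not from the original card) using the token-classification pipeline could look like this — the example sentence is illustrative, and given the low F1 reported above the predictions may be of limited quality:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_100v7_NER_Model_3Epochs_UNAUGMENTED",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("George Washington lived in Virginia."))
```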
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 12 | 0.7375 | 0.0 | 0.0 | 0.0 | 0.7810 |
| No log | 2.0 | 24 | 0.6356 | 0.0571 | 0.0010 | 0.0020 | 0.7820 |
| No log | 3.0 | 36 | 0.6011 | 0.1661 | 0.0138 | 0.0254 | 0.7860 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 45a69054538a670287f756a212b421b0 |
SummerZhang/distilbert-base-uncased-finetuned-squad | SummerZhang | distilbert | 18 | 2 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,180 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2293
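As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the question-answering pipeline; the question and context below are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SummerZhang/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was used for fine-tuning?",
    context="The checkpoint was fine-tuned on the SQuAD v2 question answering dataset.",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```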
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.248 | 1.0 | 8235 | 1.2293 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 4f22353604ecba850f196094655a5d0f |
clp/vit-base-patch16-224-finetuned | clp | vit | 9 | 9 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,454 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7617
- Accuracy: 0.3333
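A minimal inference sketch (not part of the original card); the image path is a placeholder, and the predicted labels depend on the unspecified `imagefolder` dataset used for fine-tuning:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="clp/vit-base-patch16-224-finetuned")
predictions = classifier("path/to/your_image.jpg")  # replace with a real image path or URL
print(predictions)  # list of {'label': ..., 'score': ...} entries
```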
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6063 | 0.6667 |
| No log | 2.0 | 2 | 0.6958 | 0.3333 |
| No log | 3.0 | 3 | 0.7617 | 0.3333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 2c010138d535421ded164f5c002d62f0 |
Yuri/xlm-roberta-base-finetuned-panx-de | Yuri | xlm-roberta | 26 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
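A minimal usage sketch (not part of the original card) for German NER with this checkpoint; the example sentence is illustrative:
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Yuri/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(tagger("Jeff Dean arbeitet bei Google in Kalifornien."))
```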
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| f293cf457c742fd4dfac4044c29cf151 |
chrisvinsen/wav2vec2-15 | chrisvinsen | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,420 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-15
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 0.8585
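A minimal transcription sketch (not part of the original card), assuming the repository ships a matching processor/tokenizer and that your audio is resampled to 16 kHz; the file path is a placeholder:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "chrisvinsen/wav2vec2-15"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("path/to/audio.wav", sr=16_000)  # mono, 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```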
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6808 | 1.37 | 200 | 3.7154 | 1.0 |
| 3.0784 | 2.74 | 400 | 3.1542 | 1.0 |
| 2.8919 | 4.11 | 600 | 2.9918 | 1.0 |
| 2.8317 | 5.48 | 800 | 2.8971 | 1.0 |
| 2.7958 | 6.85 | 1000 | 2.8409 | 1.0 |
| 2.7699 | 8.22 | 1200 | 2.8278 | 1.0 |
| 2.6365 | 9.59 | 1400 | 2.4657 | 1.0 |
| 2.1096 | 10.96 | 1600 | 1.8358 | 0.9988 |
| 1.6485 | 12.33 | 1800 | 1.4525 | 0.9847 |
| 1.3967 | 13.7 | 2000 | 1.2467 | 0.9532 |
| 1.2492 | 15.07 | 2200 | 1.1261 | 0.9376 |
| 1.1543 | 16.44 | 2400 | 1.0654 | 0.9194 |
| 1.0863 | 17.81 | 2600 | 1.0136 | 0.9161 |
| 1.0275 | 19.18 | 2800 | 0.9601 | 0.8827 |
| 0.9854 | 20.55 | 3000 | 0.9435 | 0.8878 |
| 0.9528 | 21.92 | 3200 | 0.9170 | 0.8807 |
| 0.926 | 23.29 | 3400 | 0.9121 | 0.8783 |
| 0.9025 | 24.66 | 3600 | 0.8884 | 0.8646 |
| 0.8909 | 26.03 | 3800 | 0.8836 | 0.8690 |
| 0.8717 | 27.4 | 4000 | 0.8810 | 0.8646 |
| 0.8661 | 28.77 | 4200 | 0.8623 | 0.8585 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 71746766eb5f78afa54b73d4061cf224 |
Gabriel/bart-base-cnn-swe | Gabriel | bart | 27 | 96 | transformers | 0 | summarization | true | false | false | mit | ['sv'] | ['Gabriel/cnn_daily_swe'] | {'emissions': 0.0334, 'source': 'Google Colab', 'training_type': 'fine-tuning', 'geographical_location': 'Fredericia, Denmark', 'hardware_used': 'Tesla P100-PCIE-16GB'} | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['summarization'] | true | true | true | 4,544 | false |
# bart-base-cnn-swe
This model is a W.I.P
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. This model is a fine-tuned version of [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased) on the [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe) dataset and can be used for summarization tasks.
## Intended uses & limitations
This model should only be used for further fine-tuning and for summarization tasks.
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-swe")
ARTICLE = """
Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under lördagens VM semifinal i Paris. Simon Shaw lastar av trots att Raphael Ibanez, vänster, och Sebastien Chabal. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra-rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Chabal började matchen på ersättningsbänken, men kom i 26: e minuten att ersätta den skadade Fabien Pelous under värd Frankrikes 14-9 nederlag. Om han blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes. Samtidigt, Frankrike tränare Bernard Laporte sade att nederlaget var svårare att ta än Englands 24-7 seger i 2003 semifinalen. "År 2003 var de bättre än oss. I själva verket var de bättre än alla", sade Laporte, som lämnar sin roll att tillträda posten som junior idrottsminister i den franska regeringen. "De var som Nya Zeeland i denna turnering - favoriten, förutom att de gick hela vägen. Den här gången är det svårare för igår var det 50-50." Samtidigt, England -- försöker bli den första nationen att försvara VM-titeln -- avslöjade att stjärna kicker Jonny Wilkinson återigen hade problem med matchbollarna under semifinalen. Flughalvan, som uttryckte sin oro efter att ha kämpat med stöveln mot Australien, avvisade en boll innan han sparkade en vital trepoängare mot Frankrike. "Vi sa det inte förra veckan men en icke-match bollen kom ut på fältet i Marseille som Jonny sparkade," chef för rugby Rob Andrew sade. "Han tänkte inte på det när han sparkade det. Matchbollarna är märkta, numrerade ett till sex. Igår kväll hade de "World Cup semifinal England vs Frankrike" skrivet på dem. På matchkvällen var Jonny vaksam när han sparkade för mål att de faktiskt var matchbollar han sparkade. "Träningsbollarna förlorar tryck och form. Hela frågan förra veckan, arrangörerna accepterade alla sex matchbollar bör användas av båda sidor på torsdagen före matchen. " E-post till en vän.
"""
print(summarizer(ARTICLE, max_length=130, min_length=30, num_beams=10 ,do_sample=False))
>>> [{'summary_text': """ Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under VM semifinal i Paris. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra - rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Om Chabal blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes."""}]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2*2 = 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.2349 | 1.0 | 17944 | 2.0643 | 21.9564 | 10.2133 | 17.9958 | 20.6502 | 19.9992 |
| 2.0726 | 2.0 | 35888 | 2.0253 | 22.0568 | 10.3302 | 18.0648 | 20.7482 | 19.9996 |
| 1.8658 | 3.0 | 53832 | 2.0333 | 22.0871 | 10.2902 | 18.0577 | 20.7082 | 19.998 |
| 1.8121 | 4.0 | 71776 | 1.9759 | 22.2046 | 10.4332 | 18.1753 | 20.846 | 19.9971 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 6ea19655482812f44ac5593f6b66fe37 |
hiiamsid/BETO_es_binary_classification | hiiamsid | bert | 7 | 4 | transformers | 2 | text-classification | true | false | false | apache-2.0 | ['es'] | ['self made to classify whether text is related to technology or not.'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['es', 'ticket classification'] | false | true | true | 905 | false | # BETO(cased)
This model was built using pytorch.
## Model description
Input for the model: any Spanish text
Output of the model: sentiment (0 = negative, 1 = positive, i.e. technology-related)
#### How to use
Here is how to use this model to get the features of a given text in *PyTorch*:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification")
model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
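To turn the raw `output` above into the 0/1 label described earlier, here is a minimal follow-up sketch (assuming the label order of the classification head matches the 0 = negative, 1 = positive convention stated above):
```python
import torch

# Continue from the snippet above: convert the logits into a class prediction.
probs = torch.softmax(output.logits, dim=-1)
predicted_class = torch.argmax(probs, dim=-1).item()
print(predicted_class)  # 0 = negative, 1 = positive (technology-related)
```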
## Training procedure
The model was trained by fine-tuning [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the dataset described above.
| c0796c1e7e6ddef709aa47bf6978c39f |
MichaelCHomeX/distilbert-base-uncased-finetuned-imdb | MichaelCHomeX | distilbert | 9 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
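A minimal usage sketch (not part of the original card) with the fill-mask pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MichaelCHomeX/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```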
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 9c960819ac12a41385c34a9928e1818a |
Helsinki-NLP/opus-mt-lue-sv | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-lue-sv
* source languages: lue
* target languages: sv
* OPUS readme: [lue-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.eval.txt)
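The card does not include a usage snippet; a minimal sketch (not from the original card) with the translation pipeline is shown below — the placeholder string must be replaced with text in the source language (lue):
```python
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-lue-sv")
# Replace the placeholder with a sentence in the source language (lue).
print(translate("<source-language sentence here>"))
```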
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.sv | 23.7 | 0.412 |
| 374430b2d9c55913259feb666ef2bf1b |
eslamxm/mt5-base-finetuned-en-cnn | eslamxm | mt5 | 13 | 5 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['cnn_dailymail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'en', 'mt5', 'Abstractive Summarization', 'generated_from_trainer'] | true | true | true | 1,211 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-en-cnn
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1286
- Rouge-1: 22.84
- Rouge-2: 10.11
- Rouge-l: 21.8
- Gen Len: 19.0
- Bertscore: 87.12
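A minimal summarization sketch (not part of the original card); whether a task prefix was used during fine-tuning is not stated, so none is added here, and the article text is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eslamxm/mt5-base-finetuned-en-cnn"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # replace with a long English news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```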
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| ccf87daaab582ca64c4b9e62cb3f6dab |
anas-awadalla/t5-base-few-shot-k-64-finetuned-squad-infilling-seed-4 | anas-awadalla | t5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-64-finetuned-squad-infilling-seed-4
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| a032ba6168e4ccd0afb4954f4eb1dd16 |
MultiBertGunjanPatrick/multiberts-seed-4-80k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | true | true | 6,479 | false | # MultiBERTs Seed 4 Checkpoint 80k (uncased)
This is the seed-4 MultiBERTs (pretrained BERT) model at the intermediate checkpoint of 80k steps, pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-80k')
model = BertModel.from_pretrained("multiberts-seed-4-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 4ce124bc741d1ae75dde73c3de397f82 |
Suya03/my_awesome_billsum_model | Suya03 | t5 | 10 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['billsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,707 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5312
- Rouge1: 0.1421
- Rouge2: 0.0515
- Rougel: 0.1184
- Rougelsum: 0.1182
- Gen Len: 19.0
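A minimal usage sketch (not part of the original card); the `summarize: ` prefix follows the usual T5 convention and is an assumption, as is the example text:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Suya03/my_awesome_billsum_model")
# The "summarize: " prefix is assumed from standard T5 practice; the card does not state it.
text = "summarize: " + "The bill establishes a grant program for state water infrastructure projects..."
print(summarizer(text, max_length=40, min_length=10, do_sample=False))
```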
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8298 | 0.1269 | 0.0364 | 0.1068 | 0.1068 | 19.0 |
| No log | 2.0 | 124 | 2.6134 | 0.133 | 0.045 | 0.1114 | 0.1109 | 19.0 |
| No log | 3.0 | 186 | 2.5476 | 0.142 | 0.0518 | 0.118 | 0.1179 | 19.0 |
| No log | 4.0 | 248 | 2.5312 | 0.1421 | 0.0515 | 0.1184 | 0.1182 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 16fe2c675025318a10f7c18de86252ed |
rossanez/t5-small-finetuned-de-en-256-wd-01 | rossanez | t5 | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wmt14'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,167 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-wd-01
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
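The card does not show an inference example. A minimal sketch (not from the original card) is given below; the `translate German to English: ` prefix is an assumption based on the usual T5 setup for WMT14 de-en and may not match how this checkpoint was fine-tuned:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="rossanez/t5-small-finetuned-de-en-256-wd-01")
# The task prefix below is an assumption; the card does not state which prefix (if any) was used.
print(translator("translate German to English: Das Haus ist wunderbar."))
```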
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1202 | 7.5964 | 17.3996 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| b8b02b90d4ebd02f3dabfe81cb7931a2 |
leonadase/distilbert-base-uncased-finetuned-ner | leonadase | distilbert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,556 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
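A minimal inference sketch (not part of the original card) that decodes the predicted tags token by token; the example sentence is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "leonadase/distilbert-base-uncased-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Angela Merkel visited Paris last year."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```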
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 90cd0e08a55b66d816cf64aa2ef4406e |
yanaiela/roberta-base-epoch_21 | yanaiela | roberta | 9 | 3 | transformers | 0 | fill-mask | true | false | false | mit | ['en'] | ['wikipedia', 'bookcorpus'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta-base', 'roberta-base-epoch_21'] | false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 21
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_21.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
| e89f68a90e32894fa255fcc1af61f64b |
dxiao/bert-finetuned-ner | dxiao | bert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0583
- Precision: 0.9396
- Recall: 0.9530
- F1: 0.9463
- Accuracy: 0.9868
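A minimal usage sketch (not part of the original card) with the token-classification pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="dxiao/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```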
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0883 | 1.0 | 1756 | 0.0702 | 0.9184 | 0.9320 | 0.9252 | 0.9819 |
| 0.0338 | 2.0 | 3512 | 0.0661 | 0.9263 | 0.9480 | 0.9370 | 0.9853 |
| 0.0174 | 3.0 | 5268 | 0.0583 | 0.9396 | 0.9530 | 0.9463 | 0.9868 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 1778afb0eb36cc68d90613cb29fafcaa |
Helsinki-NLP/opus-mt-tc-big-fa-itc | Helsinki-NLP | marian | 13 | 11 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['fa', 'fr', 'pt', 'ro'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 8,334 | false | # opus-mt-tc-big-fa-itc
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Persian (fa) to Italic languages (itc).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-07-23
- **License:** CC-BY-4.0
- **Language(s):**
- Source Language(s): fas
- Target Language(s): fra ita por ron spa
- Language Pair(s): fas-fra fas-por fas-ron
- Valid Target Language Labels: >>acf<< >>aoa<< >>arg<< >>ast<< >>cat<< >>cbk<< >>ccd<< >>cks<< >>cos<< >>cri<< >>crs<< >>dlm<< >>drc<< >>egl<< >>ext<< >>fab<< >>fax<< >>fra<< >>frc<< >>frm<< >>fro<< >>frp<< >>fur<< >>gcf<< >>gcr<< >>glg<< >>hat<< >>idb<< >>ist<< >>ita<< >>itk<< >>kea<< >>kmv<< >>lad<< >>lad_Latn<< >>lat<< >>lat_Latn<< >>lij<< >>lld<< >>lmo<< >>lou<< >>mcm<< >>mfe<< >>mol<< >>mwl<< >>mxi<< >>mzs<< >>nap<< >>nrf<< >>oci<< >>osc<< >>osp<< >>pap<< >>pcd<< >>pln<< >>pms<< >>pob<< >>por<< >>pov<< >>pre<< >>pro<< >>qbb<< >>qhr<< >>rcf<< >>rgn<< >>roh<< >>ron<< >>ruo<< >>rup<< >>ruq<< >>scf<< >>scn<< >>sdc<< >>sdn<< >>spa<< >>spq<< >>spx<< >>src<< >>srd<< >>sro<< >>tmg<< >>tvy<< >>vec<< >>vkp<< >>wln<< >>xfa<< >>xum<<
- **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.zip)
- **Resources for more information:**
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- More information about released models for this language pair: [OPUS-MT fas-itc README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fas-itc/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<`
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>lad<< اسلام زیباست.",
">>spa<< ورود به کتابخانه رایگان است."
]
model_name = "pytorch-models/opus-mt-tc-big-fa-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# O Islam é lindo.
# La entrada a la biblioteca es gratuita.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fa-itc")
print(pipe(">>lad<< اسلام زیباست."))
# expected output: O Islam é lindo.
```
## Training
- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fas-fra | tatoeba-test-v2021-08-07 | 0.57949 | 37.5 | 376 | 3377 |
| fas-fra | flores101-devtest | 0.55883 | 28.9 | 1012 | 28343 |
| fas-ita | flores101-devtest | 0.49512 | 19.7 | 1012 | 27306 |
| fas-por | flores101-devtest | 0.54829 | 27.6 | 1012 | 26519 |
| fas-ron | flores101-devtest | 0.48821 | 19.7 | 1012 | 26799 |
| fas-spa | flores101-devtest | 0.47722 | 19.4 | 1012 | 29199 |
## Citation Information
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Sat Aug 13 00:08:53 EEST 2022
* port machine: LM0-400-22516.local
| a2c948bbe7481ad1742ec5e9b3aba7fa |
DmitryPogrebnoy/MedDistilBertBaseRuCased | DmitryPogrebnoy | distilbert | 8 | 12 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['ru'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,245 | false |
# Model DmitryPogrebnoy/MedDistilBertBaseRuCased
# Model Description
This model is fine-tuned version of [DmitryPogrebnoy/distilbert-base-russian-cased](https://huggingface.co/DmitryPogrebnoy/distilbert-base-russian-cased).
The code for the fine-tuned process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_distilbert_base_russian_cased/fine_tune_distilbert_base_russian_cased.py).
The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian.
The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv).
This model was created as part of a master's project to develop a method for correcting typos
in medical histories, using BERT models to rank candidate corrections.
The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
# How to Get Started With the Model
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/MedDistilBertBaseRuCased')
>>> pipeline("У пациента [MASK] боль в грудине.")
[{'score': 0.1733243614435196,
'token': 6880,
'token_str': 'имеется',
'sequence': 'У пациента имеется боль в грудине.'},
{'score': 0.08818087726831436,
'token': 1433,
'token_str': 'есть',
'sequence': 'У пациента есть боль в грудине.'},
{'score': 0.03620537742972374,
'token': 3793,
'token_str': 'особенно',
'sequence': 'У пациента особенно боль в грудине.'},
{'score': 0.03438418731093407,
'token': 5168,
'token_str': 'бол',
'sequence': 'У пациента бол боль в грудине.'},
{'score': 0.032936397939920425,
'token': 6281,
'token_str': 'протекает',
'sequence': 'У пациента протекает боль в грудине.'}]
```
Or you can load the model and tokenizer and do what you need to do:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedDistilBertBaseRuCased")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedDistilBertBaseRuCased")
```
| fd7213be524b0f36697de322b67a056b |
amartyobanerjee/bert-finetuned-ner | amartyobanerjee | bert | 14 | 21 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9314
- Recall: 0.9507
- F1: 0.9410
- Accuracy: 0.9863
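A minimal sketch (not part of the original card) that first inspects the label set of the classification head and then runs the pipeline; CoNLL-2003-style tags are expected but should be confirmed from the printed mapping:
```python
from transformers import AutoConfig, pipeline

model_id = "amartyobanerjee/bert-finetuned-ner"
# Inspect which tags the token-classification head was trained with.
print(AutoConfig.from_pretrained(model_id).id2label)

ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")
print(ner("The European Union signed an agreement with Canada in Brussels."))
```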
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0821 | 1.0 | 1756 | 0.0639 | 0.9108 | 0.9371 | 0.9238 | 0.9834 |
| 0.0366 | 2.0 | 3512 | 0.0585 | 0.9310 | 0.9497 | 0.9403 | 0.9857 |
| 0.019 | 3.0 | 5268 | 0.0622 | 0.9314 | 0.9507 | 0.9410 | 0.9863 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 7db5987e855774fab53bf71050e73fcd |
Helsinki-NLP/opus-mt-pt-tl | Helsinki-NLP | marian | 11 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | ['pt', 'tl'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,000 | false |
### por-tgl
* source group: Portuguese
* target group: Tagalog
* OPUS readme: [por-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md)
* model: transformer-align
* source language(s): por
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.eval.txt)
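For convenience, a minimal usage sketch with the standard `transformers` Marian classes (the Portuguese example sentence is illustrative, not taken from the test set):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-pt-tl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# translate Portuguese to Tagalog
batch = tokenizer(["Eu gosto de ler livros."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```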
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.tgl | 28.4 | 0.565 |
### System Info:
- hf_name: por-tgl
- source_languages: por
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'tl']
- src_constituents: {'por'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: tgl
- short_pair: pt-tl
- chrF2_score: 0.565
- bleu: 28.4
- brevity_penalty: 1.0
- ref_len: 13620.0
- src_name: Portuguese
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: tl
- prefer_old: False
- long_pair: por-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | be1b5b77920f27a3ba1d656692459b2a |
jhaochenz/finetuned_distilgpt2_sst2_negation0.0001_pretrainedTrue_epochs1 | jhaochenz | gpt2 | 14 | 0 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,165 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.0001_pretrainedTrue_epochs1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7111 | 1.0 | 1322 | 3.2798 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| fb01b1fc9a9f222fba58c58144df7f0f |
farleyknight-org-username/vit-base-mnist | farleyknight-org-username | vit | 28 | 1,137 | transformers | 1 | image-classification | true | false | false | apache-2.0 | null | ['mnist'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'vision', 'generated_from_trainer'] | true | true | true | 1,490 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-mnist
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0236
- Accuracy: 0.9949
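A minimal inference sketch with the `transformers` image-classification pipeline (the file name is a placeholder for any PIL-loadable image of a handwritten digit; this assumes the checkpoint ships its matching preprocessor config):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="farleyknight-org-username/vit-base-mnist")
print(classifier("digit.png"))  # placeholder path
# e.g. [{'label': '7', 'score': 0.99}, ...] -- labels are the ten digit classes
```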
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3717 | 1.0 | 6375 | 0.0522 | 0.9893 |
| 0.3453 | 2.0 | 12750 | 0.0370 | 0.9906 |
| 0.3736 | 3.0 | 19125 | 0.0308 | 0.9916 |
| 0.3224 | 4.0 | 25500 | 0.0269 | 0.9939 |
| 0.2846 | 5.0 | 31875 | 0.0236 | 0.9949 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.4.0
- Tokenizers 0.12.1
| 7cb20593448115bc3bb6d2ac8d823bd7 |
vesteinn/fasttext_is_rmh | vesteinn | null | 6 | 0 | null | 0 | null | false | false | false | agpl-3.0 | ['is'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,307 | false | # FastText model trained on Icelandic
This model is trained on the lemmas of the Icelandic Gigaword Corpus, version 20.05. It is trained using the gensim package, version 4.1.0, with default parameters (100 dimensions, window size 5).
This model cannot be loaded directly with the `transformers` library since it uses gensim. Clone the repository and run the following to use it.
```python
import gensim
model = gensim.models.FastText.load("./rmh.w2v.model")
```
## Example output
```python
In [1]: model.wv.most_similar("england")
Out[1]:
[('englands', 0.8778558969497681),
('southland', 0.8573296070098877),
('skotland', 0.846065878868103),
('englaland', 0.8320872187614441),
('hoogland', 0.8299505114555359),
('hoagland', 0.8277317881584167),
('totland', 0.8265103697776794),
('lackland', 0.8234561681747437),
('skarpengland', 0.8227219581604004),
('langland', 0.8222305774688721)]
In [2]: model.wv.most_similar("kanína")
Out[2]:
[('loðkanína', 0.9271067976951599),
('dvergkanína', 0.9106121063232422),
('angórakanína', 0.895512044429779),
('angórukanína', 0.8741581439971924),
('feldkanína', 0.8696010708808899),
('kanínubangsi', 0.8562541604042053),
('holdakanína', 0.8543838858604431),
('villikanína', 0.8525990843772888),
('silkikanína', 0.8515204191207886),
('kaníni', 0.8445548415184021)]
```
| 67b024dad59e6b1b494414fcda681f02 |
JeremiahZ/reproduce-unsup-roberta-base-avg | JeremiahZ | roberta | 23 | 1 | transformers | 0 | null | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reproduce-unsup-roberta-base-avg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
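Judging from the checkpoint name, this reproduces an unsupervised SimCSE-style sentence encoder with average pooling. A sentence-embedding sketch under that assumption is shown below; note that mean pooling is inferred from the `-avg` suffix and is not documented behaviour:
```python
import torch
from transformers import AutoTokenizer, AutoModel
name = "JeremiahZ/reproduce-unsup-roberta-base-avg"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (batch, seq_len, dim)
mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding positions
embeddings = (hidden * mask).sum(1) / mask.sum(1)   # average pooling over tokens
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```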
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| c1e535b2239bf90bf99eaabce8291092 |
abyaugustinek/distilbert-base-uncased-finetuned | abyaugustinek | distilbert | 12 | 3 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,843 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# abyaugustinek/distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3693
- Validation Loss: 1.2106
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.6565
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.0691 | 1.5942 | 0.0 | 0.0 | 0.0 | 0.6565 | 0 |
| 1.4705 | 1.2376 | 0.0 | 0.0 | 0.0 | 0.6565 | 1 |
| 1.3693 | 1.2106 | 0.0 | 0.0 | 0.0 | 0.6565 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.7.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0a31b3f0d07889cc7ab2c9801cd855a7 |
celine98/canine-s-finetuned-sst2 | celine98 | canine | 11 | 4 | transformers | 1 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,451 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-sst2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
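A minimal usage sketch with the `transformers` text-classification pipeline (the example sentence is illustrative; label names come from the checkpoint config and may be the generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="celine98/canine-s-finetuned-sst2")
print(classifier("This movie was a delightful surprise."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]  # for SST-2, LABEL_1 usually corresponds to "positive"
```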
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 |
| 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 |
| 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 |
| 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 |
| 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| a0fcef5435cbc9ee297b80bd3f16e03f |
Helsinki-NLP/opus-mt-fi-en | Helsinki-NLP | marian | 11 | 57,390 | transformers | 3 | translation | true | true | false | apache-2.0 | ['fi', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,486 | false |
### fin-eng
* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-05.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip)
* test set translations: [opus-2020-08-05.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt)
* test set scores: [opus-2020-08-05.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-fineng.fin.eng | 25.3 | 0.536 |
| newstest2015-enfi-fineng.fin.eng | 26.9 | 0.547 |
| newstest2016-enfi-fineng.fin.eng | 29.0 | 0.571 |
| newstest2017-enfi-fineng.fin.eng | 32.3 | 0.594 |
| newstest2018-enfi-fineng.fin.eng | 23.8 | 0.517 |
| newstest2019-fien-fineng.fin.eng | 29.0 | 0.565 |
| newstestB2016-enfi-fineng.fin.eng | 24.5 | 0.527 |
| newstestB2017-enfi-fineng.fin.eng | 27.4 | 0.557 |
| newstestB2017-fien-fineng.fin.eng | 27.4 | 0.557 |
| Tatoeba-test.fin.eng | 53.4 | 0.697 |
### System Info:
- hf_name: fin-eng
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: {'fin'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- short_pair: fi-en
- chrF2_score: 0.6970000000000001
- bleu: 53.4
- brevity_penalty: 0.99
- ref_len: 74651.0
- src_name: Finnish
- tgt_name: English
- train_date: 2020-08-05
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- long_pair: fin-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | fd1c9c87004561b6ee405089b3fe0ce1 |
groar/distilgpt2-finetuned-wikitext2 | groar | gpt2 | 13 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,140 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7852 | 1.0 | 2334 | 3.6895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| c575780dd6439fc19b8441af8fb014c8 |
mshoaibsarwar/finetuning-sentiment-model-samples | mshoaibsarwar | distilbert | 12 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 922 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 363ee81c0eabcf08c48a15de7bbed8be |
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fr'] | false | true | true | 480 | false | # exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
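A minimal transcription sketch with the HuggingSound tool mentioned above (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths, 16 kHz audio
transcriptions = model.transcribe(audio_paths)
```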
| 487a51881b3bef1a55fe6c7e2c8ecb84 |
PaddlePaddle/ernie-m-large | PaddlePaddle | ernie_m | 9 | 0 | paddlenlp | 3 | null | false | false | false | apache-2.0 | ['fr', 'es', 'en', 'de', 'sw', 'ru', 'zh', 'el', 'bg', 'ar', 'vi', 'th', 'hi', 'ur'] | ['xnli', 'mlqa', 'paws-x'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 11,214 | false | [![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/ernie-m-large
## Ernie-M
ERNIE-M, proposed by Baidu, is a new training method that encourages the model to align the representation of multiple languages with monolingual corpora,
to overcome the constraint that the parallel corpus size places on the model performance. The insight is to integrate back-translation into the pre-training
process by generating pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages,
thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and
delivers new state-of-the-art results in various cross-lingual downstream tasks.
We proposed two novel methods to align the representation of multiple languages:
* Cross-Attention Masked Language Modeling (CAMLM): in CAMLM, we learn the multilingual semantic representation by restoring the MASK tokens in the input sentences.
* Back-Translation Masked Language Modeling (BTMLM): we use BTMLM to train our model to generate pseudo-parallel sentences from the monolingual sentences. The generated pairs are then used as the input of the model to further align the cross-lingual semantics, thus enhancing the multilingual representation.
![ernie-m](ernie_m.png)
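A minimal PaddleNLP feature-extraction sketch is given below. The `ErnieMTokenizer`/`ErnieMModel` class names and the `ernie-m-large` checkpoint identifier follow PaddleNLP's usual conventions but are assumptions here; please check the PaddleNLP documentation for the exact API.
```python
import paddle
from paddlenlp.transformers import ErnieMModel, ErnieMTokenizer  # assumed class names
tokenizer = ErnieMTokenizer.from_pretrained("ernie-m-large")
model = ErnieMModel.from_pretrained("ernie-m-large")
inputs = tokenizer("He was a puppeteer")                      # returns plain Python lists
inputs = {k: paddle.to_tensor([v]) for k, v in inputs.items()}
sequence_output, pooled_output = model(**inputs)              # per-token and sentence-level features
```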
## Benchmark
### XNLI
XNLI is a subset of MNLI that has been translated into 14 different languages, including some low-resource languages. The goal of the task is to predict textual entailment (whether sentence A implies, contradicts, or is neutral with respect to sentence B).
| Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | Avg |
| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Cross-lingual Transfer | | | | | | | | | | | | | | | | |
| XLM | 85.0 | 78.7 | 78.9 | 77.8 | 76.6 | 77.4 | 75.3 | 72.5 | 73.1 | 76.1 | 73.2 | 76.5 | 69.6 | 68.4 | 67.3 | 75.1 |
| Unicoder | 85.1 | 79.0 | 79.4 | 77.8 | 77.2 | 77.2 | 76.3 | 72.8 | 73.5 | 76.4 | 73.6 | 76.2 | 69.4 | 69.7 | 66.7 | 75.4 |
| XLM-R | 85.8 | 79.7 | 80.7 | 78.7 | 77.5 | 79.6 | 78.1 | 74.2 | 73.8 | 76.5 | 74.6 | 76.7 | 72.4 | 66.5 | 68.3 | 76.2 |
| INFOXLM | **86.4** | **80.6** | 80.8 | 78.9 | 77.8 | 78.9 | 77.6 | 75.6 | 74.0 | 77.0 | 73.7 | 76.7 | 72.0 | 66.4 | 67.1 | 76.2 |
| **ERNIE-M** | 85.5 | 80.1 | **81.2** | **79.2** | **79.1** | **80.4** | **78.1** | **76.8** | **76.3** | **78.3** | **75.8** | **77.4** | **72.9** | **69.5** | **68.8** | **77.3** |
| XLM-R Large | 89.1 | 84.1 | 85.1 | 83.9 | 82.9 | 84.0 | 81.2 | 79.6 | 79.8 | 80.8 | 78.1 | 80.2 | 76.9 | 73.9 | 73.8 | 80.9 |
| INFOXLM Large | **89.7** | 84.5 | 85.5 | 84.1 | 83.4 | 84.2 | 81.3 | 80.9 | 80.4 | 80.8 | 78.9 | 80.9 | 77.9 | 74.8 | 73.7 | 81.4 |
| VECO Large | 88.2 | 79.2 | 83.1 | 82.9 | 81.2 | 84.2 | 82.8 | 76.2 | 80.3 | 74.3 | 77.0 | 78.4 | 71.3 | **80.4** | **79.1** | 79.9 |
| **ERNIE-M Large** | 89.3 | **85.1** | **85.7** | **84.4** | **83.7** | **84.5** | 82.0 | **81.2** | **81.2** | **81.9** | **79.2** | **81.0** | **78.6** | 76.2 | 75.4 | **82.0** |
| Translate-Train-All | | | | | | | | | | | | | | | | |
| XLM | 85.0 | 80.8 | 81.3 | 80.3 | 79.1 | 80.9 | 78.3 | 75.6 | 77.6 | 78.5 | 76.0 | 79.5 | 72.9 | 72.8 | 68.5 | 77.8 |
| Unicoder | 85.6 | 81.1 | 82.3 | 80.9 | 79.5 | 81.4 | 79.7 | 76.8 | 78.2 | 77.9 | 77.1 | 80.5 | 73.4 | 73.8 | 69.6 | 78.5 |
| XLM-R | 85.4 | 81.4 | 82.2 | 80.3 | 80.4 | 81.3 | 79.7 | 78.6 | 77.3 | 79.7 | 77.9 | 80.2 | 76.1 | 73.1 | 73.0 | 79.1 |
| INFOXLM | 86.1 | 82.0 | 82.8 | 81.8 | 80.9 | 82.0 | 80.2 | 79.0 | 78.8 | 80.5 | 78.3 | 80.5 | 77.4 | 73.0 | 71.6 | 79.7 |
| **ERNIE-M** | **86.2** | **82.5** | **83.8** | **82.6** | **82.4** | **83.4** | **80.2** | **80.6** | **80.5** | **81.1** | **79.2** | **80.5** | **77.7** | **75.0** | **73.3** | **80.6** |
| XLM-R Large | 89.1 | 85.1 | 86.6 | 85.7 | 85.3 | 85.9 | 83.5 | 83.2 | 83.1 | 83.7 | 81.5 | **83.7** | **81.6** | 78.0 | 78.1 | 83.6 |
| VECO Large | 88.9 | 82.4 | 86.0 | 84.7 | 85.3 | 86.2 | **85.8** | 80.1 | 83.0 | 77.2 | 80.9 | 82.8 | 75.3 | **83.1** | **83.0** | 83.0 |
| **ERNIE-M Large** | **89.5** | **86.5** | **86.9** | **86.1** | **86.0** | **86.8** | 84.1 | **83.8** | **84.1** | **84.5** | **82.1** | 83.5 | 81.1 | 79.4 | 77.9 | **84.2** |
### Cross-lingual Named Entity Recognition
* datasets: CoNLL
| Model | en | nl | es | de | Avg |
| ------------------------------ | --------- | --------- | --------- | --------- | --------- |
| *Fine-tune on English dataset* | | | | | |
| mBERT | 91.97 | 77.57 | 74.96 | 69.56 | 78.52 |
| XLM-R | 92.25 | **78.08** | 76.53 | **69.60** | 79.11 |
| **ERNIE-M** | **92.78** | 78.01 | **79.37** | 68.08 | **79.56** |
| XLM-R LARGE | 92.92 | 80.80 | 78.64 | 71.40 | 80.94 |
| **ERNIE-M LARGE** | **93.28** | **81.45** | **78.83** | **72.99** | **81.64** |
| *Fine-tune on all dataset* | | | | | |
| XLM-R | 91.08 | 89.09 | 87.28 | 83.17 | 87.66 |
| **ERNIE-M** | **93.04** | **91.73** | **88.33** | **84.20** | **89.32** |
| XLM-R LARGE | 92.00 | 91.60 | **89.52** | 84.60 | 89.43 |
| **ERNIE-M LARGE** | **94.01** | **93.81** | 89.23 | **86.20** | **90.81** |
### Cross-lingual Question Answering
* datasets: MLQA
| Model | en | es | de | ar | hi | vi | zh | Avg |
| ----------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- |
| mBERT | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8 | 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3 | 57.7 / 41.6 |
| XLM | 74.9 / 62.4 | 68.0 / 49.8 | 62.2 / 47.6 | 54.8 / 36.3 | 48.8 / 27.3 | 61.4 / 41.8 | 61.1 / 39.6 | 61.6 / 43.5 |
| XLM-R | 77.1 / 64.6 | 67.4 / 49.6 | 60.9 / 46.7 | 54.9 / 36.6 | 59.4 / 42.9 | 64.5 / 44.7 | 61.8 / 39.3 | 63.7 / 46.3 |
| INFOXLM | 81.3 / 68.2 | 69.9 / 51.9 | 64.2 / 49.6 | 60.1 / 40.9 | 65.0 / 47.5 | 70.0 / 48.6 | 64.7 / **41.2** | 67.9 / 49.7 |
| **ERNIE-M** | **81.6 / 68.5** | **70.9 / 52.6** | **65.8 / 50.7** | **61.8 / 41.9** | **65.4 / 47.5** | **70.0 / 49.2** | **65.6** / 41.0 | **68.7 / 50.2** |
| XLM-R LARGE | 80.6 / 67.8 | 74.1 / 56.0 | 68.5 / 53.6 | 63.1 / 43.5 | 62.9 / 51.6 | 71.3 / 50.9 | 68.0 / 45.4 | 70.7 / 52.7 |
| INFOXLM LARGE | **84.5 / 71.6** | **75.1 / 57.3** | **71.2 / 56.2** | **67.6 / 47.6** | 72.5 / 54.2 | **75.2 / 54.1** | 69.2 / 45.4 | 73.6 / 55.2 |
| **ERNIE-M LARGE** | 84.4 / 71.5 | 74.8 / 56.6 | 70.8 / 55.9 | 67.4 / 47.2 | **72.6 / 54.7** | 75.0 / 53.7 | **71.1 / 47.5** | **73.7 / 55.3** |
### Cross-lingual Paraphrase Identification
* datasets: PAWS-X
| Model | en | de | es | fr | ja | ko | zh | Avg |
| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Cross-lingual Transfer | | | | | | | | |
| mBERT | 94.0 | 85.7 | 87.4 | 87.0 | 73.0 | 69.6 | 77.0 | 81.9 |
| XLM | 94.0 | 85.9 | 88.3 | 87.4 | 69.3 | 64.8 | 76.5 | 80.9 |
| MMTE | 93.1 | 85.1 | 87.2 | 86.9 | 72.0 | 69.2 | 75.9 | 81.3 |
| XLM-R LARGE | 94.7 | 89.7 | 90.1 | 90.4 | 78.7 | 79.0 | 82.3 | 86.4 |
| VECO LARGE | **96.2** | 91.3 | 91.4 | 92.0 | 81.8 | 82.9 | 85.1 | 88.7 |
| **ERNIE-M LARGE** | 96.0 | **91.9** | **91.4** | **92.2** | **83.9** | **84.5** | **86.9** | **89.5** |
| Translate-Train-All | | | | | | | | |
| VECO LARGE | 96.4 | 93.0 | 93.0 | 93.5 | 87.2 | 86.8 | 87.9 | 91.1 |
| **ERNIE-M LARGE** | **96.5** | **93.5** | **93.3** | **93.8** | **87.9** | **88.4** | **89.2** | **91.8** |
### Cross-lingual Sentence Retrieval
* dataset: Tatoeba
| Model | Avg |
| --------------------------------------- | -------- |
| XLM-R LARGE | 75.2 |
| VECO LARGE | 86.9 |
| **ERNIE-M LARGE** | **87.9** |
| **ERNIE-M LARGE (after fine-tuning)** | **93.3** |
## Citation Info
```text
@article{Ouyang2021ERNIEMEM,
title={ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora},
author={Xuan Ouyang and Shuohuan Wang and Chao Pang and Yu Sun and Hao Tian and Hua Wu and Haifeng Wang},
journal={ArXiv},
year={2021},
volume={abs/2012.15674}
}
``` | 57650c4833830b0f2c96e68358655206 |
responsibility-framing/predict-perception-xlmr-focus-victim | responsibility-framing | xlm-roberta | 12 | 21 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,857 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-victim
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Rmse: 0.6301
- Rmse Focus::a Sulla vittima: 0.6301
- Mae: 0.5441
- Mae Focus::a Sulla vittima: 0.5441
- R2: 0.7205
- R2 Focus::a Sulla vittima: 0.7205
- Cos: 0.8261
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.7802
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sulla vittima | Mae | Mae Focus::a Sulla vittima | R2 | R2 Focus::a Sulla vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0607 | 1.0 | 15 | 0.9261 | 1.2017 | 1.2017 | 0.9557 | 0.9557 | -0.0166 | -0.0166 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 1.0107 | 2.0 | 30 | 0.9481 | 1.2159 | 1.2159 | 0.9861 | 0.9861 | -0.0408 | -0.0408 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.9921 | 3.0 | 45 | 0.9068 | 1.1892 | 1.1892 | 0.9548 | 0.9548 | 0.0045 | 0.0045 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.7769 | 4.0 | 60 | 0.5014 | 0.8842 | 0.8842 | 0.7121 | 0.7121 | 0.4496 | 0.4496 | 0.7391 | 0.0 | 0.5 | 0.6232 | nan |
| 0.5763 | 5.0 | 75 | 0.4019 | 0.7917 | 0.7917 | 0.6737 | 0.6737 | 0.5588 | 0.5588 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.4378 | 6.0 | 90 | 0.3594 | 0.7486 | 0.7486 | 0.5957 | 0.5957 | 0.6055 | 0.6055 | 0.7391 | 0.0 | 0.5 | 0.4442 | nan |
| 0.3595 | 7.0 | 105 | 0.3452 | 0.7337 | 0.7337 | 0.6333 | 0.6333 | 0.6210 | 0.6210 | 0.5652 | 0.0 | 0.5 | 0.2649 | nan |
| 0.3192 | 8.0 | 120 | 0.3275 | 0.7147 | 0.7147 | 0.6205 | 0.6205 | 0.6405 | 0.6405 | 0.7391 | 0.0 | 0.5 | 0.6561 | nan |
| 0.2482 | 9.0 | 135 | 0.2978 | 0.6815 | 0.6815 | 0.5754 | 0.5754 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.2416 | 10.0 | 150 | 0.3018 | 0.6860 | 0.6860 | 0.5954 | 0.5954 | 0.6687 | 0.6687 | 0.5652 | 0.0 | 0.5 | 0.2553 | nan |
| 0.2292 | 11.0 | 165 | 0.2764 | 0.6565 | 0.6565 | 0.5522 | 0.5522 | 0.6966 | 0.6966 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1752 | 12.0 | 180 | 0.3070 | 0.6920 | 0.6920 | 0.5680 | 0.5680 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.1956 | 13.0 | 195 | 0.2923 | 0.6752 | 0.6752 | 0.5499 | 0.5499 | 0.6791 | 0.6791 | 0.8261 | 0.0 | 0.5 | 0.7843 | nan |
| 0.1424 | 14.0 | 210 | 0.3163 | 0.7023 | 0.7023 | 0.6060 | 0.6060 | 0.6528 | 0.6528 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.152 | 15.0 | 225 | 0.2436 | 0.6164 | 0.6164 | 0.5127 | 0.5127 | 0.7326 | 0.7326 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1277 | 16.0 | 240 | 0.2471 | 0.6208 | 0.6208 | 0.5367 | 0.5367 | 0.7287 | 0.7287 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1269 | 17.0 | 255 | 0.2573 | 0.6334 | 0.6334 | 0.5329 | 0.5329 | 0.7175 | 0.7175 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1058 | 18.0 | 270 | 0.2538 | 0.6291 | 0.6291 | 0.5530 | 0.5530 | 0.7214 | 0.7214 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.107 | 19.0 | 285 | 0.2568 | 0.6328 | 0.6328 | 0.5464 | 0.5464 | 0.7181 | 0.7181 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1185 | 20.0 | 300 | 0.2452 | 0.6183 | 0.6183 | 0.5317 | 0.5317 | 0.7309 | 0.7309 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.1029 | 21.0 | 315 | 0.2419 | 0.6142 | 0.6142 | 0.5415 | 0.5415 | 0.7344 | 0.7344 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.0908 | 22.0 | 330 | 0.2462 | 0.6196 | 0.6196 | 0.5261 | 0.5261 | 0.7297 | 0.7297 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0901 | 23.0 | 345 | 0.2528 | 0.6279 | 0.6279 | 0.5330 | 0.5330 | 0.7225 | 0.7225 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0979 | 24.0 | 360 | 0.2800 | 0.6607 | 0.6607 | 0.5682 | 0.5682 | 0.6927 | 0.6927 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.0992 | 25.0 | 375 | 0.2502 | 0.6246 | 0.6246 | 0.5517 | 0.5517 | 0.7254 | 0.7254 | 0.6522 | 0.0 | 0.5 | 0.2372 | nan |
| 0.0846 | 26.0 | 390 | 0.2570 | 0.6331 | 0.6331 | 0.5524 | 0.5524 | 0.7178 | 0.7178 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0717 | 27.0 | 405 | 0.2562 | 0.6321 | 0.6321 | 0.5456 | 0.5456 | 0.7187 | 0.7187 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0739 | 28.0 | 420 | 0.2570 | 0.6330 | 0.6330 | 0.5471 | 0.5471 | 0.7179 | 0.7179 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0828 | 29.0 | 435 | 0.2553 | 0.6309 | 0.6309 | 0.5446 | 0.5446 | 0.7198 | 0.7198 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.086 | 30.0 | 450 | 0.2546 | 0.6301 | 0.6301 | 0.5441 | 0.5441 | 0.7205 | 0.7205 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| 3bf33957ecf48466fb7cb71a72141068 |
nielsr/nt5-small-rc1 | nielsr | t5 | 10 | 91 | transformers | 2 | text2text-generation | true | false | true | apache-2.0 | null | ['drop'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 4,140 | false |
# NT5, a T5 model trained to perform numerical reasoning
T5-small model pre-trained on 3 million (partly synthetic) texts and fine-tuned on [DROP](https://allennlp.org/drop.html). It was introduced in the paper [NT5?! Training T5 to Perform Numerical Reasoning](https://arxiv.org/abs/2104.07307) by Yang et al. and first released in [this repository](https://github.com/lesterpjy/numeric-t5). As the original implementation was in TensorFlow 2, I've converted the weights to PyTorch. This model corresponds to RC Experiment 1 (see the paper), their best-performing model.
Disclaimer: The team releasing NT5 did not write a model card for this model so this model card has been written by me.
## Model description
The NT5 model is a T5 model, in other words, an encoder-decoder Transformer. In order to encourage numerical reasoning, the model was further pre-trained on three datasets designed to strengthen skills necessary for numerical reasoning over text (NRoT) and general reading comprehension before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset.
## Intended uses & limitations
You can use the model for numerical reasoning over text.
### How to use
Here is how to use this model:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
context = """Saint Jean de Brébeuf was a French Jesuit missionary who
travelled to New France in 1625. There he worked primarily with the Huron
for the rest of his life, except for a few years in France from 1629 to
1633. He learned their language and culture, writing extensively about
each to aid other missionaries. In 1649, Brébeuf and another missionary
were captured when an Iroquois raid took over a Huron village. Together
with Huron captives, the missionaries were ritually tortured and killed
on March 16, 1649. Brébeuf was beatified in 1925 and among eight Jesuit
missionaries canonized as saints in the Roman Catholic Church in 1930."""
question = ("How many years did Saint Jean de Brébeuf stay in New France "
            "before he went back to France for a few years?")
tokenizer = T5Tokenizer.from_pretrained("nielsr/nt5-small-rc1")
model = T5ForConditionalGeneration.from_pretrained("nielsr/nt5-small-rc1")
# encode context & question
input_text = f"answer_me: {question} context: {context}"
encoded_query = tokenizer(
input_text,
return_tensors='pt',
padding='max_length',
truncation=True,
max_length=512)
# generate answer
generated_answer = model.generate(input_ids=encoded_query["input_ids"],
attention_mask=encoded_query["attention_mask"],
max_length=54)
decoded_answer = tokenizer.decode(generated_answer[0], skip_special_tokens=True)
print("T5 Answer: ", decoded_answer)
# T5 Answer:  4
```
## Evaluation results
This model achieves an F1 score of 0.7031 and exact match of 0.6687 on the development set of DROP.
### BibTeX entry and citation info
```bibtex
@misc{yang2021nt5,
title={NT5?! Training T5 to Perform Numerical Reasoning},
author={Peng-Jian Yang and Ying Ting Chen and Yuechan Chen and Daniel Cer},
year={2021},
eprint={2104.07307},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1903-00161,
author = {Dheeru Dua and
Yizhong Wang and
Pradeep Dasigi and
Gabriel Stanovsky and
Sameer Singh and
Matt Gardner},
title = {{DROP:} {A} Reading Comprehension Benchmark Requiring Discrete Reasoning
Over Paragraphs},
journal = {CoRR},
volume = {abs/1903.00161},
year = {2019},
url = {http://arxiv.org/abs/1903.00161},
archivePrefix = {arXiv},
eprint = {1903.00161},
timestamp = {Wed, 03 Jul 2019 07:17:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1903-00161.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | b540745020a818731eda3ce899c58b69 |
csikasote/xlsr-53-bemba-10hrs | csikasote | wav2vec2 | 13 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,829 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-53-bemba-10hrs
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3190
- Wer: 0.4032
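A minimal inference sketch with the `transformers` automatic-speech-recognition pipeline (the audio path is a placeholder; input should be 16 kHz mono audio):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="csikasote/xlsr-53-bemba-10hrs")
print(asr("bemba_sample.wav"))  # placeholder path
# e.g. {'text': '...'}
```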
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3207 | 1.07 | 400 | 0.3720 | 0.5923 |
| 0.5688 | 2.14 | 800 | 0.3073 | 0.5002 |
| 0.3927 | 3.22 | 1200 | 0.2678 | 0.4521 |
| 0.316 | 4.29 | 1600 | 0.2703 | 0.4261 |
| 0.2531 | 5.36 | 2000 | 0.2663 | 0.4198 |
| 0.2051 | 6.43 | 2400 | 0.2614 | 0.4037 |
| 0.1584 | 7.51 | 2800 | 0.2853 | 0.4046 |
| 0.1343 | 8.58 | 3200 | 0.3072 | 0.4121 |
| 0.1031 | 9.65 | 3600 | 0.3190 | 0.4032 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 7173341e150869e8c8d1b491e411b7ca |
jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s877 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 497 | false | # exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s877
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 084836cb11805feb241ed2f905e45981 |
Go2Heart/BERT_Mod_2 | Go2Heart | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,106 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Mod_2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5659
- eval_accuracy: 0.9037
- eval_runtime: 0.3838
- eval_samples_per_second: 2271.724
- eval_steps_per_second: 143.285
- epoch: 0.01
- step: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
| af593e5ce812e1bd58d08b527cb8f9cf |
jkhan447/sarcasm-detection-Bert-base-uncased-POS | jkhan447 | bert | 13 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-POS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1904
- Accuracy: 0.591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
| a8d79419c610a983fc2f35e52013ffdd |
Helsinki-NLP/opus-mt-sv-xh | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-sv-xh
* source languages: sv
* target languages: xh
* OPUS readme: [sv-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.xh | 26.7 | 0.561 |
| 71bbfcc71ccc84ea466d09e8cc1d3cbf |
Gladiator/microsoft-deberta-v3-large_ner_conll2003 | Gladiator | deberta-v2 | 13 | 378 | transformers | 0 | token-classification | true | false | false | mit | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,742 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-deberta-v3-large_ner_conll2003
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0293
- Precision: 0.9667
- Recall: 0.9724
- F1: 0.9695
- Accuracy: 0.9945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0986 | 1.0 | 878 | 0.0323 | 0.9453 | 0.9596 | 0.9524 | 0.9921 |
| 0.0212 | 2.0 | 1756 | 0.0270 | 0.9571 | 0.9675 | 0.9623 | 0.9932 |
| 0.009 | 3.0 | 2634 | 0.0280 | 0.9638 | 0.9714 | 0.9676 | 0.9940 |
| 0.0035 | 4.0 | 3512 | 0.0290 | 0.9657 | 0.9712 | 0.9685 | 0.9943 |
| 0.0022 | 5.0 | 4390 | 0.0293 | 0.9667 | 0.9724 | 0.9695 | 0.9945 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 39612d0797d739fd477c87d38c79298e |
alexcaillet/ddpm-butterflies-128 | alexcaillet | null | 11 | 3 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,233 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
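Until the snippet above is filled in, a minimal sketch using the standard `diffusers` `DDPMPipeline` for unconditional sampling would look like this:
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("alexcaillet/ddpm-butterflies-128")
# sample one 128x128 butterfly image and save it
image = pipeline().images[0]
image.save("butterfly.png")
```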
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/alexcaillet/ddpm-butterflies-128/tensorboard?#scalars)
| ad7b91bc39cbcef26ac2ed52d9939f9f |
AndrewR/distilgpt2-finetuned-imdb-lm | AndrewR | gpt2 | 18 | 9 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,246 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-imdb-lm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8512
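A minimal generation sketch with the `transformers` text-generation pipeline (the prompt and sampling settings are illustrative choices):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="AndrewR/distilgpt2-finetuned-imdb-lm")
print(generator(
    "This film starts slowly, but",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
))
```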
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9577 | 1.0 | 7315 | 3.8818 |
| 3.8965 | 2.0 | 14630 | 3.8570 |
| 3.8561 | 3.0 | 21945 | 3.8512 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 27288f1adcde48ce3d6ec7c82bdd7b75 |
jonatasgrosman/exp_w2v2t_sv-se_vp-it_s975 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sv-SE'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'sv-SE'] | false | true | true | 475 | false | # exp_w2v2t_sv-se_vp-it_s975
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 967e0f73b0f66078197154f0009b7790 |
IIIT-L/roberta-large-finetuned-code-mixed-DS | IIIT-L | roberta | 11 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,097 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-code-mixed-DS
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1340
- Accuracy: 0.7203
- Precision: 0.6584
- Recall: 0.6548
- F1: 0.6558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9729 | 1.0 | 248 | 0.7491 | 0.6922 | 0.6434 | 0.6625 | 0.6358 |
| 0.7474 | 1.99 | 496 | 0.6947 | 0.7183 | 0.6712 | 0.6915 | 0.6760 |
| 0.5938 | 2.99 | 744 | 0.7370 | 0.7123 | 0.6624 | 0.6839 | 0.6642 |
| 0.4264 | 3.98 | 992 | 0.8820 | 0.7123 | 0.6540 | 0.6636 | 0.6492 |
| 0.2806 | 4.98 | 1240 | 1.2022 | 0.7404 | 0.6807 | 0.6694 | 0.6742 |
| 0.2239 | 5.98 | 1488 | 1.3933 | 0.7223 | 0.6593 | 0.6587 | 0.6568 |
| 0.1585 | 6.97 | 1736 | 1.8543 | 0.7304 | 0.6730 | 0.6763 | 0.6737 |
| 0.1302 | 7.97 | 1984 | 2.0783 | 0.7143 | 0.6495 | 0.6520 | 0.6504 |
| 0.1008 | 8.96 | 2232 | 2.3523 | 0.7183 | 0.6588 | 0.6561 | 0.6552 |
| 0.0793 | 9.96 | 2480 | 2.5260 | 0.7163 | 0.6516 | 0.6566 | 0.6538 |
| 0.0498 | 10.96 | 2728 | 2.6074 | 0.7425 | 0.6902 | 0.6817 | 0.6830 |
| 0.0484 | 11.95 | 2976 | 2.6758 | 0.7284 | 0.6687 | 0.6734 | 0.6709 |
| 0.0409 | 12.95 | 3224 | 2.8658 | 0.7425 | 0.6817 | 0.6756 | 0.6781 |
| 0.0239 | 13.94 | 3472 | 2.9484 | 0.7465 | 0.6980 | 0.6818 | 0.6870 |
| 0.025 | 14.94 | 3720 | 3.0827 | 0.7304 | 0.6778 | 0.6577 | 0.6641 |
| 0.0286 | 15.94 | 3968 | 3.0011 | 0.7183 | 0.6509 | 0.6475 | 0.6491 |
| 0.0264 | 16.93 | 4216 | 3.1581 | 0.7264 | 0.6645 | 0.6563 | 0.6595 |
| 0.009 | 17.93 | 4464 | 3.1200 | 0.7223 | 0.6589 | 0.6561 | 0.6569 |
| 0.012 | 18.92 | 4712 | 3.1364 | 0.7203 | 0.6573 | 0.6503 | 0.6525 |
| 0.017 | 19.92 | 4960 | 3.1340 | 0.7203 | 0.6584 | 0.6548 | 0.6558 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| e45f9cec2d0796c6c4d40061fdf8138a |
Keneston/distilbert-base-uncased-finetuned-emotion | Keneston | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2214
- Accuracy: 0.9275
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8568 | 1.0 | 250 | 0.3328 | 0.9 | 0.8947 |
| 0.2576 | 2.0 | 500 | 0.2214 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| e3cbfe91d1e60b669d5720bbdbed3324 |
Salesforce/codegen-6B-mono | Salesforce | codegen | 10 | 2,517 | transformers | 4 | text-generation | true | false | false | bsd-3-clause | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,933 | false | # CodeGen (CodeGen-Mono 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), in 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 6B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 6B* and further pre-trained on a Python programming language dataset, and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 6B) was first initialized with *CodeGen-Multi 6B*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| 88ac0d512da89b9c091387665c74d7d5 |
dbmdz/bert-base-italian-cased | dbmdz | bert | 8 | 14,147 | transformers | 4 | fill-mask | true | true | true | mit | ['it'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,938 | false |
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
| 544d14c6a93c01548ec7443a691e36d2 |
microsoft/git-large-textvqa | microsoft | git | 10 | 113 | transformers | 1 | visual-question-answering | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision'] | false | true | true | 3,046 | false |
# GIT (GenerativeImage2Text), large-sized, fine-tuned on TextVQA
GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextVQA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).
Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs.
The goal for the model is simply to predict the next text token, given the image tokens and previous text tokens.
The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.
![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg)
This allows the model to be used for tasks like:
- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
## Intended uses & limitations
You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html).
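As a quick, hedged sketch (following the generic GIT visual question answering pattern in 🤗 Transformers; the image URL and question below are placeholders, not taken from the official docs):

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests
import torch

processor = AutoProcessor.from_pretrained("microsoft/git-large-textvqa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-textvqa")

# any RGB image works; this COCO validation image is only a placeholder
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# the question is tokenized, prefixed with [CLS], and the answer is generated as a continuation
question = "how many cats are there?"
question_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([[processor.tokenizer.cls_token_id] + question_ids])

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```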
## Training data
From the paper:
> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions
(CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016),
Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B
data following a similar collection procedure in Hu et al. (2021a).
=> however this is for the model referred to as "GIT" in the paper, which is not open-sourced.
This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs.
Next, the model was fine-tuned on TextVQA.
See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
### Preprocessing
We refer to the original repo regarding details for preprocessing during training.
During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100). | 5ce5f9d4f6ee60632d13d5101a40fe6c |
shkim/distilbert-base-uncased-finetuned-imdb | shkim | distilbert | 9 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
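A hypothetical `TrainingArguments` reconstruction of the settings above (the training script was not released with this card; the Adam betas and epsilon are the 🤗 defaults):

```python
from transformers import TrainingArguments

# hypothetical reconstruction of the listed hyperparameters
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed precision
)
```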
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117 | 1.0 | 157 | 2.4977 |
| 2.5783 | 2.0 | 314 | 2.4241 |
| 2.5375 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 3719492cfadf1c3c1d535a819bb8a14f |
nbroad/openai-detector-base | nbroad | roberta | 10 | 25 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 9,255 | false |
# USE THIS SPACE: https://huggingface.co/spaces/nbroad/openai-detector-base
The following is copied from this repo: https://huggingface.co/roberta-base-openai-detector
# RoBERTa Base OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-author)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
  - [Explore the detector model here](https://huggingface.co/openai-detector)
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models.
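A minimal sketch using the standard 🤗 `text-classification` pipeline (the example sentence is illustrative; the label names come from the model config):

```python
from transformers import pipeline

detector = pipeline("text-classification", model="nbroad/openai-detector-base")
result = detector("The quick brown fox jumps over the lazy dog.")
print(result)  # e.g. [{'label': ..., 'score': ...}] — the label indicates real vs. GPT-2-generated text
```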
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
More information needed. | c04c353f6b1f6a6fe6dc0595e87ef048 |
sandorscog/finetuning-sentiment-model-3000-samples | sandorscog | distilbert | 19 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,092 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0826
- Accuracy: 0.9761
- Precision: 0.9727
- Recall: 0.9654
- F1: 0.9691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| e01df45d07c46394122f98063c891c43 |
manandey/wav2vec2-large-xlsr-breton | manandey | wav2vec2 | 9 | 9 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['br'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,304 | false |
# Wav2Vec2-Large-XLSR-53-Breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Breton using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the preprocessed audio arrays and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 54.04%
## Training
The Common Voice `train` and `validation` datasets were used for training.
| 0335cb6e6008a8030d4294a0446637d6 |
theojolliffe/bart-large-cnn-finetuned-roundup-2-2 | theojolliffe | bart | 18 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,561 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2-2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1521
- Rouge1: 52.6634
- Rouge2: 32.537
- Rougel: 33.3148
- Rougelsum: 50.148
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 1.2139 | 52.546 | 32.4912 | 32.9529 | 49.8241 | 142.0 |
| No log | 2.0 | 334 | 1.1521 | 52.6634 | 32.537 | 33.3148 | 50.148 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| e9bae9aa2f0239a5042a086c6bb853f0 |
ZhiyuanQiu/camembert-base-finetuned-sans-symbole-dd | ZhiyuanQiu | camembert | 12 | 6 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,635 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-sans-symbole-dd
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2642
- Precision: 0.8856
- Recall: 0.9176
- F1: 0.9013
- Accuracy: 0.9364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1961 | 1.0 | 4317 | 0.2216 | 0.8675 | 0.9039 | 0.8853 | 0.9319 |
| 0.161 | 2.0 | 8634 | 0.2243 | 0.8614 | 0.9158 | 0.8878 | 0.9237 |
| 0.1169 | 3.0 | 12951 | 0.2507 | 0.8752 | 0.9154 | 0.8949 | 0.9329 |
| 0.0875 | 4.0 | 17268 | 0.2642 | 0.8856 | 0.9176 | 0.9013 | 0.9364 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| c4eee46fa24f0b720650561a17c47227 |
Neha2608/xlm-roberta-base-finetuned-panx-fr | Neha2608 | xlm-roberta | 10 | 2 | transformers | 0 | null | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,260 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1699
- F1: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
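A hedged usage sketch, assuming the checkpoint exposes a token-classification head with PAN-X-style entity labels (the French example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron est né à Amiens."))
```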
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5975 | 1.0 | 191 | 0.2612 | 0.8237 |
| 0.2798 | 2.0 | 382 | 0.1699 | 0.8725 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| e043ec375c3c7f0f0ccd088b46dd5a64 |
KoichiYasuoka/roberta-base-coptic-upos | KoichiYasuoka | roberta | 9 | 7 | transformers | 0 | token-classification | true | false | false | cc-by-sa-4.0 | ['cop'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coptic', 'token-classification', 'pos', 'dependency-parsing'] | false | true | true | 881 | false |
# roberta-base-coptic-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [roberta-base-coptic](https://huggingface.co/KoichiYasuoka/roberta-base-coptic). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-coptic-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| b106a9afaae02a0d575f8c2976fd0775 |
kyryl0s/gpt2-uk-xxs | kyryl0s | gpt2 | 7 | 4 | transformers | 0 | text-generation | true | false | false | afl-3.0 | ['uk'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 713 | false | ## GPT2 being trained on Ukrainian news.
### General info:
The model is not ready yet but I'm working on it. It also has a relatively small context window, which makes it quite uninteresting.
### Example of usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("kyryl0s/gpt2-uk-xxs")
model = AutoModelForCausalLM.from_pretrained("kyryl0s/gpt2-uk-xxs")
input_ids = tokenizer.encode("Путін — ", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
``` | bb87cf4669b21514e9386dd8a7c8ca47 |
fanzru/distilbart-cnn-6-6-finetuned-xsum-intro-test | fanzru | bart | 13 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['xsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,482 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-6-6-finetuned-xsum-intro-test
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9036
- Rouge1: 32.0474
- Rouge2: 12.3779
- Rougel: 23.5491
- Rougelsum: 24.251
- Gen Len: 60.8594
## Model description
More information needed
## Intended uses & limitations
More information needed
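A minimal usage sketch with the `summarization` pipeline (the input document below is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fanzru/distilbart-cnn-6-6-finetuned-xsum-intro-test")
document = "Replace this placeholder with the article you want to summarize."
print(summarizer(document, max_length=60, min_length=10, do_sample=False))
```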
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9432 | 1.0 | 12753 | 1.9036 | 32.0474 | 12.3779 | 23.5491 | 24.251 | 60.8594 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 4034c96a038de7046e9093d0b8fa7462 |
Maheshnma/distilbert-base-uncased-finetuned-emotion | Maheshnma | distilbert | 12 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
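A minimal inference sketch (the emitted label names depend on the model config's `id2label` mapping; the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Maheshnma/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I'm thrilled about the results!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```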
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8477 | 1.0 | 250 | 0.3204 | 0.9025 | 0.9000 |
| 0.2559 | 2.0 | 500 | 0.2209 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| cf2046a716a400452a1f448987dd0eb9 |
BenTata-86/distilbert-base-turkish-cased-finetuned-emotion | BenTata-86 | distilbert | 18 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | ['turkish-multiclass-dataset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,430 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased-finetuned-emotion
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the turkish-multiclass-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- F1: {'f1': 0.8276613385259164}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------:|
| 0.2578 | 1.0 | 313 | 0.5459 | {'f1': 0.8212239281513611} |
| 0.381 | 2.0 | 626 | 0.4861 | {'f1': 0.8276613385259164} |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| de93228f9c0b285b77bcae3579923059 |
alxdfy/noggles6000 | alxdfy | null | 20 | 3 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,329 | false | ### noggles6000 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by alxdfy
This is the Stable Diffusion model fine-tuned on the noggles6000 concept, taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt(s)`: **nounsbud.jpg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
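For example, a hedged `diffusers` sketch (a CUDA GPU is assumed and the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("alxdfy/noggles6000", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "nounsbud.jpg wearing sunglasses, digital art"  # built around the instance prompt above
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("noggles_sample.png")
```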
Sample pictures of this concept:
nounsbud.jpg
![nounsbud.jpg 0](https://huggingface.co/alxdfy/noggles6000/resolve/main/concept_images/nounsbud.jpg)
| 27be4a03619e80253aba594d4bc7cdc9 |
Helsinki-NLP/opus-mt-ro-fi | Helsinki-NLP | marian | 10 | 32 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
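## Usage

A short usage sketch with 🤗 Transformers (the Romanian example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ro-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Acesta este un test."]  # "This is a test."
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```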
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.fi | 25.2 | 0.521 |
| 0d3704a45957a233988dc7cb6075d826 |
Helsinki-NLP/opus-mt-tc-big-he-en | Helsinki-NLP | marian | 13 | 2,233 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['en', 'he'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 5,253 | false | # opus-mt-tc-big-he-en
Neural machine translation model for translating from Hebrew (he) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): heb
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT heb-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"היא שכחה לכתוב לו.",
"אני רוצה לדעת מיד כשמשהו יקרה."
]
model_name = "Helsinki-NLP/opus-mt-tc-big-he-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# She forgot to write to him.
# I want to know as soon as something happens.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-he-en")
print(pipe("היא שכחה לכתוב לו."))
# expected output: She forgot to write to him.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| heb-eng | tatoeba-test-v2021-08-07 | 0.68565 | 53.8 | 10519 | 77427 |
| heb-eng | flores101-devtest | 0.68116 | 44.1 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:27:12 EEST 2022
* port machine: LM0-400-22516.local
| 25f663296e9d15b58cb1976d6cd14bce |
fathyshalab/all-roberta-large-v1-utility-3-16-5 | fathyshalab | roberta | 11 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,512 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 18deed91f78a092ec3b79c9f2b4a4684 |
google/t5-efficient-base-dl8 | google | t5 | 12 | 30 | transformers | 1 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,248 | false |
# T5-Efficient-BASE-DL8 (Deep-Narrow version)
T5-Efficient-BASE-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-dl8** - is of model type **Base** with the following variations:
- **dl** is **8**
It has **185.17** million parameters and thus requires *ca.* **740.67 MB** of memory in full precision (*fp32*)
or **370.34 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
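Loading the checkpoint works like any other T5 variant; a minimal sketch (being pretrained-only, the model will not produce useful task outputs before fine-tuning, and the example input is a placeholder):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-base-dl8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# pretrained with the span-corruption objective only, so fine-tune before relying on outputs
inputs = tokenizer("summarize: This is only a placeholder input.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```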
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 29e08d618507876daec5e166414f7f4c |
EIStakovskii/french_toxicity_classifier_plus_v2 | EIStakovskii | camembert | 8 | 17 | transformers | 0 | text-classification | true | false | false | other | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,340 | false | ## Description
NB: this version of the model is the improved version of [EIStakovskii/french_toxicity_classifier_plus](https://huggingface.co/EIStakovskii/french_toxicity_classifier_plus).
To see the source code of training and the data please follow [the github link](https://github.com/eistakovskii/NLP_projects/tree/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR).
This model was trained for toxicity labeling.
The model was fine-tuned based off [the CamemBERT language model](https://huggingface.co/camembert-base).
To use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model = 'EIStakovskii/french_toxicity_classifier_plus_v2')
print(classifier("Foutez le camp d'ici!"))
```
## Metrics (at validation):
epoch|step|eval_accuracy|eval_f1|eval_loss
-|-|-|-|-
1.16|1600|0.9015412511332729|0.8968269048071442|0.3014959990978241
## Comparison against Perspective
This model was compared against the Google's [Perspective API](https://developers.perspectiveapi.com/s/?language=en_US) that similarly detects toxicity.
Two models were tested on two datasets: the size of [200 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_fr_200.csv) and [400 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_fr_400.csv).
The first one (arguably harder) was collected from the sentences of the [JigSaw](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data) and [DeTox](https://github.com/hdaSprachtechnologie/detox) datasets.
The second one (easier) was collected from the combination of sources: both from JigSaw and DeTox as well as [Paradetox](https://github.com/s-nlp/multilingual_detox/tree/main/data) translations and sentences extracted from [Reverso Context](https://context.reverso.net/translation/) by keywords.
# french_toxicity_classifier_plus_v2
size|accuracy|f1
-|-|-
200|0.783|0.803
400|0.890|0.879
# Perspective
size|accuracy|f1
-|-|-
200|0.826|0.795
**400|0.632|0.418
**I suspect that Perspective has such a low score in the case of the FR dataset (400) because it refuses to trigger on the words "merde" and "putain" and some more rarer words in French like "cul" and so on. | a9866f52bfdf4bffe12e449fbfd23a24 |
google/t5-efficient-small-nl16 | google | t5 | 12 | 9 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,256 | false |
# T5-Efficient-SMALL-NL16 (Deep-Narrow version)
T5-Efficient-SMALL-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-nl16** - is of model type **Small** with the following variations:
- **nl** is **16**
It has **133.97** million parameters and thus requires *ca.* **535.88 MB** of memory in full precision (*fp32*)
or **267.94 MB** of memory in half precision (*fp16* or *bf16*).
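As a rough sanity check, these memory figures follow directly from the parameter count (a back-of-the-envelope sketch that ignores activations, buffers and optimizer state):
```python
# 4 bytes per parameter in fp32, 2 bytes in fp16/bf16
num_params = 133.97e6
print(f"fp32: {num_params * 4 / 1e6:.2f} MB")       # ~535.88 MB
print(f"fp16/bf16: {num_params * 2 / 1e6:.2f} MB")  # ~267.94 MB
```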
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl* value, then both the encoder and decoder depth correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal Python loading sketch follows these lists):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
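For completeness, a minimal loading sketch with the Hugging Face Transformers API might look as follows; since this is a pretrained-only checkpoint, its outputs are only meaningful after fine-tuning:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-nl16")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-nl16")
```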
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | e8452ae33a83ee155f9fcf34d2304478 |
Tatiana239/lilt-en-funsd | Tatiana239 | lilt | 19 | 1 | transformers | 0 | token-classification | true | false | false | mit | null | ['funsd-layoutlmv3'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,738 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7928
- Answer: {'precision': 0.8716763005780347, 'recall': 0.9228886168910648, 'f1': 0.8965517241379309, 'number': 817}
- Header: {'precision': 0.5648148148148148, 'recall': 0.5126050420168067, 'f1': 0.5374449339207047, 'number': 119}
- Question: {'precision': 0.8945454545454545, 'recall': 0.9136490250696379, 'f1': 0.9039963252181902, 'number': 1077}
- Overall Precision: 0.8678
- Overall Recall: 0.8937
- Overall F1: 0.8806
- Overall Accuracy: 0.7985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
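A minimal sketch (not the original training script) of how these settings might map onto `TrainingArguments`; the `output_dir` is an assumption, and the 200-step evaluation interval is read off the results table below:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",   # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,
    fp16=True,                    # "mixed_precision_training: Native AMP"
    evaluation_strategy="steps",
    eval_steps=200,               # matches the 200-step rows in the results table
)
```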
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4236 | 10.53 | 200 | 0.9583 | {'precision': 0.8623962040332147, 'recall': 0.8898408812729498, 'f1': 0.8759036144578314, 'number': 817} | {'precision': 0.5131578947368421, 'recall': 0.3277310924369748, 'f1': 0.39999999999999997, 'number': 119} | {'precision': 0.8450704225352113, 'recall': 0.947075208913649, 'f1': 0.893169877408056, 'number': 1077} | 0.8401 | 0.8872 | 0.8630 | 0.8016 |
| 0.0421 | 21.05 | 400 | 1.4064 | {'precision': 0.8573113207547169, 'recall': 0.8898408812729498, 'f1': 0.8732732732732732, 'number': 817} | {'precision': 0.4301675977653631, 'recall': 0.6470588235294118, 'f1': 0.5167785234899329, 'number': 119} | {'precision': 0.8667883211678832, 'recall': 0.8820798514391829, 'f1': 0.87436723423838, 'number': 1077} | 0.8262 | 0.8713 | 0.8482 | 0.7733 |
| 0.0121 | 31.58 | 600 | 1.5114 | {'precision': 0.8534090909090909, 'recall': 0.9192166462668299, 'f1': 0.8850913376546846, 'number': 817} | {'precision': 0.5930232558139535, 'recall': 0.42857142857142855, 'f1': 0.4975609756097561, 'number': 119} | {'precision': 0.8824577025823687, 'recall': 0.9201485608170845, 'f1': 0.9009090909090909, 'number': 1077} | 0.8583 | 0.8907 | 0.8742 | 0.8044 |
| 0.0058 | 42.11 | 800 | 1.4988 | {'precision': 0.8361391694725028, 'recall': 0.9118727050183598, 'f1': 0.8723653395784543, 'number': 817} | {'precision': 0.5203252032520326, 'recall': 0.5378151260504201, 'f1': 0.5289256198347108, 'number': 119} | {'precision': 0.8798206278026905, 'recall': 0.9108635097493036, 'f1': 0.8950729927007299, 'number': 1077} | 0.8408 | 0.8892 | 0.8643 | 0.7982 |
| 0.004 | 52.63 | 1000 | 1.5823 | {'precision': 0.8455467869222097, 'recall': 0.9179926560587516, 'f1': 0.880281690140845, 'number': 817} | {'precision': 0.5263157894736842, 'recall': 0.5042016806722689, 'f1': 0.5150214592274679, 'number': 119} | {'precision': 0.867595818815331, 'recall': 0.924791086350975, 'f1': 0.8952808988764045, 'number': 1077} | 0.8404 | 0.8972 | 0.8679 | 0.7996 |
| 0.0028 | 63.16 | 1200 | 1.6518 | {'precision': 0.8492822966507177, 'recall': 0.8690330477356181, 'f1': 0.8590441621294616, 'number': 817} | {'precision': 0.5855855855855856, 'recall': 0.5462184873949579, 'f1': 0.5652173913043478, 'number': 119} | {'precision': 0.88, 'recall': 0.9192200557103064, 'f1': 0.899182561307902, 'number': 1077} | 0.8518 | 0.8768 | 0.8641 | 0.7939 |
| 0.0013 | 73.68 | 1400 | 1.8819 | {'precision': 0.8378672470076169, 'recall': 0.9424724602203183, 'f1': 0.8870967741935485, 'number': 817} | {'precision': 0.6794871794871795, 'recall': 0.44537815126050423, 'f1': 0.5380710659898478, 'number': 119} | {'precision': 0.9006622516556292, 'recall': 0.8839368616527391, 'f1': 0.8922211808809747, 'number': 1077} | 0.8642 | 0.8818 | 0.8729 | 0.7931 |
| 0.0013 | 84.21 | 1600 | 1.8234 | {'precision': 0.8519362186788155, 'recall': 0.9155446756425949, 'f1': 0.8825958702064898, 'number': 817} | {'precision': 0.5585585585585585, 'recall': 0.5210084033613446, 'f1': 0.5391304347826087, 'number': 119} | {'precision': 0.9120982986767486, 'recall': 0.8960074280408542, 'f1': 0.9039812646370023, 'number': 1077} | 0.8671 | 0.8818 | 0.8744 | 0.7996 |
| 0.0008 | 94.74 | 1800 | 1.7898 | {'precision': 0.844170403587444, 'recall': 0.9216646266829865, 'f1': 0.8812170860152135, 'number': 817} | {'precision': 0.5294117647058824, 'recall': 0.5294117647058824, 'f1': 0.5294117647058824, 'number': 119} | {'precision': 0.8756613756613757, 'recall': 0.9220055710306406, 'f1': 0.898236092265943, 'number': 1077} | 0.8434 | 0.8987 | 0.8701 | 0.7901 |
| 0.0004 | 105.26 | 2000 | 1.8115 | {'precision': 0.8396436525612472, 'recall': 0.9228886168910648, 'f1': 0.8793002915451895, 'number': 817} | {'precision': 0.6063829787234043, 'recall': 0.4789915966386555, 'f1': 0.5352112676056338, 'number': 119} | {'precision': 0.8909090909090909, 'recall': 0.9099350046425255, 'f1': 0.90032154340836, 'number': 1077} | 0.8561 | 0.8897 | 0.8726 | 0.7939 |
| 0.0004 | 115.79 | 2200 | 1.7928 | {'precision': 0.8716763005780347, 'recall': 0.9228886168910648, 'f1': 0.8965517241379309, 'number': 817} | {'precision': 0.5648148148148148, 'recall': 0.5126050420168067, 'f1': 0.5374449339207047, 'number': 119} | {'precision': 0.8945454545454545, 'recall': 0.9136490250696379, 'f1': 0.9039963252181902, 'number': 1077} | 0.8678 | 0.8937 | 0.8806 | 0.7985 |
| 0.0003 | 126.32 | 2400 | 1.8271 | {'precision': 0.863013698630137, 'recall': 0.9253365973072215, 'f1': 0.8930891907855877, 'number': 817} | {'precision': 0.6105263157894737, 'recall': 0.48739495798319327, 'f1': 0.5420560747663552, 'number': 119} | {'precision': 0.8935395814376706, 'recall': 0.9117920148560817, 'f1': 0.9025735294117648, 'number': 1077} | 0.8676 | 0.8922 | 0.8797 | 0.7983 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu102
- Datasets 2.8.0
- Tokenizers 0.13.1
| 8987b3598f1177cf589d772c724c72e4 |
sd-concepts-library/gim | sd-concepts-library | null | 13 | 0 | null | 2 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,469 | false | 
### Gim on Stable Diffusion
This is the `<grimes-album-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
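A hedged usage sketch with `diffusers` (requires a version that provides `load_textual_inversion`; the base Stable Diffusion checkpoint below is an assumption):
```python
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion model (assumption) and attach the learned concept embedding.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/gim")

image = pipe("an album cover in the style of <grimes-album-style>").images[0]
image.save("grimes_album_style.png")
```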
Here is the new concept you will be able to use as a `style`:
![<grimes-album-style> 0](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/4.jpeg)
![<grimes-album-style> 1](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/0.jpeg)
![<grimes-album-style> 2](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/6.jpeg)
![<grimes-album-style> 3](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/3.jpeg)
![<grimes-album-style> 4](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/7.jpeg)
![<grimes-album-style> 5](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/2.jpeg)
![<grimes-album-style> 6](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/1.jpeg)
![<grimes-album-style> 7](https://huggingface.co/sd-concepts-library/gim/resolve/main/concept_images/5.jpeg)
| 0198028516448a7757201ce05edd9773 |
inovex/multi2convai-logistics-pl-bert | inovex | bert | 8 | 3 | transformers | 2 | text-classification | true | false | false | mit | ['pl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification'] | false | true | true | 866 | false |
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
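Continuing from the snippet above, a hedged inference sketch might look like this (the example sentence is illustrative; the intent labels come from the model's `id2label` config):
```python
import torch

inputs = tokenizer("Gdzie jest moja paczka?", return_tensors="pt")  # "Where is my package?"
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```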
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] | 9305e7b18da1d982697a9e79494f15dd |
mallikrao2/new_asr_model | mallikrao2 | wav2vec2 | 17 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,557 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_asr_model
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on a dataset that is not named in the training metadata.
It achieves the following results on the evaluation set:
- Loss: 0.0553
- Wer: 0.1515
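A minimal inference sketch (the audio file name is illustrative; input audio should be 16 kHz mono, matching the wav2vec2 base checkpoint):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mallikrao2/new_asr_model")
print(asr("sample_16khz.wav")["text"])  # hypothetical 16 kHz audio file
```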
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.1498 | 3.88 | 500 | 1.9949 | 0.9938 |
| 0.4835 | 7.75 | 1000 | 0.0690 | 0.1562 |
| 0.1202 | 11.63 | 1500 | 0.0555 | 0.1513 |
| 0.0842 | 15.5 | 2000 | 0.0564 | 0.1516 |
| 0.0637 | 19.38 | 2500 | 0.0559 | 0.1521 |
| 0.0647 | 23.26 | 3000 | 0.0553 | 0.1515 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.14.0
- Tokenizers 0.10.3
| 2dd228b49c266ba2d7ab23d208a32218 |
asapp/sew-small-100k | asapp | sew | 5 | 5 | transformers | 0 | feature-extraction | true | false | false | apache-2.0 | ['en'] | ['librispeech_asr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['speech'] | false | true | true | 1,696 | false |
# SEW-small
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
This is the base model, pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
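For feature extraction with this pretrained-only checkpoint, a hedged sketch might look like this (random samples stand in for real 16 kHz speech):
```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-small-100k")
model = SEWModel.from_pretrained("asapp/sew-small-100k")

waveform = torch.randn(16000)  # placeholder for one second of real 16 kHz mono audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```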
| f387400c6cfe971bda36360b08362f7f |
FluxML/densenet169 | FluxML | null | 3 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 523 | false |
DenseNet169 model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef).
To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/#Adding-packages). Then execute:
```julia
using Metalhead
model = DenseNet(169; pretrain = true)
``` | 5537452051cc3cc30c53c852079f4798 |
Hatman/bert-finetuned-ner | Hatman | bert | 16 | 8 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,188 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0814
## Model description
bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a bert-large-NER version is also available.
# How to Use
You can use this model with the Transformers NER pipeline.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Hatman/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
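If you prefer word-level entities over word-piece tokens, recent Transformers versions let you pass an aggregation strategy to the same pipeline:
```python
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
```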
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0181 | 1.0 | 1756 | 0.1301 |
| 0.0166 | 2.0 | 3512 | 0.0762 |
| 0.0064 | 3.0 | 5268 | 0.0814 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 2382d68b18f8fa87e40f06ece9f32062 |
google/multiberts-seed_3-step_0k | google | bert | 8 | 14 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_0k'] | false | true | true | 3,509 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 3, Step 0k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #3, captured at step 0k (max: 2000k, i.e., 2M steps).
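A hedged sketch for loading several intermediate checkpoints of this seed, assuming sibling repositories follow the same `google/multiberts-seed_3-step_<step>` naming pattern (the step values below are illustrative; adjust them to the checkpoints actually published):
```python
import torch
from transformers import BertModel

for step in ["0k", "1000k", "2000k"]:
    model = BertModel.from_pretrained(f"google/multiberts-seed_3-step_{step}")
    w = model.encoder.layer[0].attention.self.query.weight
    print(step, torch.linalg.norm(w).item())  # how a given weight matrix evolves over pre-training
```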
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_0k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_0k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| a6f3cd09c532a5313fc070820ef4fd38 |