repo_id | file_path | content | __index_level_0__ |
---|---|---|---|
mavonic_private_repos | mavonic_private_repos/transformers/README_ko.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<b>ํ๊ตญ์ด</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers provides thousands of pretrained models to perform tasks such as classification, information extraction, question answering, summarization, translation, and text generation in over 100 languages. Our goal is to make state-of-the-art NLP easy for everyone to use.
๐ค Transformers provides APIs to quickly download these pretrained models, use them on a given text, fine-tune them on your own data, and share them with the community on our [model hub](https://huggingface.co/models). At the same time, each Python module defining a model architecture is fully standalone and can easily be modified for quick research experiments.
๐ค Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), and [TensorFlow](https://www.tensorflow.org/) — with seamless integration between them. You can simply train a model with one of these libraries and load it with another for inference.
## Online demos
You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
Here are a few examples:
- [Masked word completion with BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Text generation with GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [Natural Language Inference with RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question answering with DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
**[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repository's text generation capabilities.
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Quick tour
To immediately use a model on a given text, we provide the `pipeline` API. A pipeline groups together a pretrained model with the preprocessing that was applied during that model's training. Here is a quick example of using a pipeline to classify positive versus negative texts:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
The second line of code downloads and caches the pretrained model used by the pipeline, and the third evaluates it on the given text. Here, the model judged the text to be positive with 99.97% confidence.
Many NLP tasks can be performed out of the box with `pipeline`. For example, given a question and some context, the answer can easily be extracted:
``` python
>>> from transformers import pipeline
# Allocate a pipeline for question-answering
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
... 'question': 'What is the name of the repository ?',
... 'context': 'Pipeline has been included in the huggingface/transformers repository'
... })
{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
```
In addition to the answer, the pretrained model used here returned its confidence score, along with the start and end positions of the answer within the tokenized sentence. You can check out [this tutorial](https://huggingface.co/docs/transformers/task_summary) to see the variety of tasks supported by the `pipeline` API.
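The same one-liner pattern applies to other tasks as well; for instance (an illustrative example, not part of the original text):
```python
>>> from transformers import pipeline

# Named entity recognition with the default pretrained model for the task
>>> ner = pipeline('ner')
>>> ner('My name is Sarah and I live in London')
```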
You can download and use a pretrained model for your task with just three lines of code. Here is the PyTorch version:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
And here is the equivalent code for TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
The tokenizer is responsible for all the preprocessing the pretrained model expects and can be called on a single string (as in the examples above) or a list. It returns a dictionary that you can use in downstream code, or pass directly to your model using the `**` argument-unpacking operator.
The model itself is a regular [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) that you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
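As a quick, illustrative sketch of the `Trainer` path (the GLUE MRPC dataset, checkpoint, and hyperparameters below are assumptions for demonstration and are not part of the original text; the ๐ค Datasets library is required):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed setup: fine-tune bert-base-uncased on the GLUE MRPC paraphrase task
dataset = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

def tokenize(batch):
    # Tokenize both sentences of each pair; padding/truncation keep tensor shapes uniform
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mrpc-finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()  # runs the standard fine-tuning loop
```
Because the model is an ordinary `nn.Module`, the same checkpoint could instead be dropped into a hand-written PyTorch training loop.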
## Why should I use transformers?
1. Easy-to-use state-of-the-art models:
    - High performance on NLU and NLG tasks.
    - Low barrier to entry for educators and practitioners.
    - Only three classes to learn before you can start using it.
    - A single API for all of our pretrained models.
1. Lower compute costs, smaller carbon footprint:
    - Researchers can share trained models instead of retraining them again and again.
    - Practitioners can save training time and cost.
    - Dozens of architectures, over 2,000 pretrained models, models trained in more than 100 languages, and more.
1. Choose the right framework for every part of a model's lifetime:
    - Train state-of-the-art models in 3 lines of code.
    - Freely move a model between TF2.0 and PyTorch frameworks.
    - Pick whichever framework suits each stage — training, evaluation, production.
1. Customize a model or an example to your needs:
    - We provide examples for each architecture to reproduce the results published by the original authors.
    - Model internals are exposed as consistently as possible.
    - Model files can be used independently of the library for quick experiments.
## Why shouldn't I use transformers?
- This library is not a modular toolbox of building blocks for neural nets. We deliberately kept the level of abstraction in the model-file code low so that researchers can use each model directly without digging through multiple files.
- The training API is not built to work on every model; it is optimized for the models provided by the library. For generic machine learning loops, please use another library.
- To show as many use cases as possible, we have prepared scripts in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples). These scripts may not work out of the box on your specific problem, and you may need to modify some code to fit your needs.
## Installation
### With pip
This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
Install ๐ค Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
First, create a virtual environment with the version of Python you're going to use and activate it.
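For example, with the standard library `venv` module (the environment name `.env` is just an illustration):
```bash
# Create and activate a virtual environment
python -m venv .env
source .env/bin/activate  # on Windows: .env\Scripts\activate
```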
Then, you will need to install at least one of Flax, PyTorch, or TensorFlow. Refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally), and/or the [Flax installation page](https://github.com/google/flax#quick-install) for the installation command specific to your platform.
When at least one of those backends has been installed, ๐ค Transformers can be installed with pip as follows:
```bash
pip install transformers
```
If you'd like to play with the examples, need the bleeding-edge code, or can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
### With conda
๐ค Transformers can be installed with conda as follows:
```shell script
conda install conda-forge::transformers
```
> **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
Follow the installation pages of Flax, PyTorch, or TensorFlow to see how to install them with conda.
## Model architectures
**[All the model checkpoints](https://huggingface.co/models)** provided by ๐ค Transformers are seamlessly integrated with the huggingface.co [model hub](https://huggingface.co). [Individuals](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations) can upload models directly to the hub.
Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers provides the following architectures: see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them.
To check whether each model has an implementation in Flax, PyTorch, or TensorFlow, or uses a tokenizer backed by the ๐ค Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
## Learn more
| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
| [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by ๐ค Transformers |
| [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
| [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by ๐ค Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
| [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
| [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
| [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to ๐ค Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
## Citation
If you'd like to cite the ๐ค Transformers library, please cite this [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/):
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/CONTRIBUTING.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Contribute to ๐ค Transformers
Everyone is welcome to contribute, and we value everybody's contribution. Code
contributions are not the only way to help the community. Answering questions, helping
others, and improving the documentation are also immensely valuable.
It also helps us if you spread the word! Reference the library in blog posts
about the awesome projects it made possible, shout out on Twitter every time it has
helped you, or simply โญ๏ธ the repository to say thank you.
However you choose to contribute, please be mindful and respect our
[code of conduct](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md).
**This guide was heavily inspired by the awesome [scikit-learn guide to contributing](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md).**
## Ways to contribute
There are several ways you can contribute to ๐ค Transformers:
* Fix outstanding issues with the existing code.
* Submit issues related to bugs or desired new features.
* Implement new models.
* Contribute to the examples or to the documentation.
If you don't know where to start, there is a special [Good First
Issue](https://github.com/huggingface/transformers/contribute) listing. It will give you a list of
open issues that are beginner-friendly and help you start contributing to open-source. The best way to do that is to open a Pull Request and link it to the issue that you'd like to work on. We try to give priority to opened PRs as we can easily track the progress of the fix, and if the contributor does not have time anymore, someone else can take the PR over.
For something slightly more challenging, you can also take a look at the [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) list. In general though, if you feel like you know what you're doing, go for it and we'll help you get there! ๐
> All contributions are equally valuable to the community. ๐ฅฐ
## Fixing outstanding issues
If you notice an issue with the existing code and have a fix in mind, feel free to [start contributing](#create-a-pull-request) and open a Pull Request!
## Submitting a bug-related issue or feature request
Do your best to follow these guidelines when submitting a bug-related issue or a feature
request. It will make it easier for us to come back to you quickly and with good
feedback.
### Did you find a bug?
The ๐ค Transformers library is robust and reliable thanks to users who report the problems they encounter.
Before you report an issue, we would really appreciate it if you could **make sure the bug was not
already reported** (use the search bar on GitHub under Issues). Your issue should also be related to bugs in the library itself, and not your code. If you're unsure whether the bug is in your code or the library, please ask in the [forum](https://discuss.huggingface.co/) first. This helps us respond quicker to fixing issues related to the library versus general questions.
Once you've confirmed the bug hasn't already been reported, please include the following information in your issue so we can quickly resolve it:
* Your **OS type and version** and **Python**, **PyTorch** and
**TensorFlow** versions when applicable.
* A short, self-contained, code snippet that allows us to reproduce the bug in
less than 30s.
* The *full* traceback if an exception is raised.
* Attach any other additional information, like screenshots, you think may help.
To get the OS and software versions automatically, run the following command:
```bash
transformers-cli env
```
You can also run the same command from the root of the repository:
```bash
python src/transformers/commands/transformers_cli.py env
```
### Do you want a new feature?
If there is a new feature you'd like to see in ๐ค Transformers, please open an issue and describe:
1. What is the *motivation* behind this feature? Is it related to a problem or frustration with the library? Is it a feature related to something you need for a project? Is it something you worked on and think it could benefit the community?
Whatever it is, we'd love to hear about it!
2. Describe your requested feature in as much detail as possible. The more you can tell us about it, the better we'll be able to help you.
3. Provide a *code snippet* that demonstrates the feature's usage.
4. If the feature is related to a paper, please include a link.
If your issue is well written we're already 80% of the way there by the time you create it.
We have added [templates](https://github.com/huggingface/transformers/tree/main/templates) to help you get started with your issue.
## Do you want to implement a new model?
New models are constantly released and if you want to implement a new model, please provide the following information:
* A short description of the model and a link to the paper.
* Link to the implementation if it is open-sourced.
* Link to the model weights if they are available.
If you are willing to contribute the model yourself, let us know so we can help you add it to ๐ค Transformers!
We have a technical guide for [how to add a model to ๐ค Transformers](https://huggingface.co/docs/transformers/add_new_model).
## Do you want to add documentation?
We're always looking for improvements to the documentation that make it more clear and accurate. Please let us know how the documentation can be improved such as typos and any content that is missing, unclear or inaccurate. We'll be happy to make the changes or help you make a contribution if you're interested!
For more details about how to generate, build, and write the documentation, take a look at the documentation [README](https://github.com/huggingface/transformers/tree/main/docs).
## Create a Pull Request
Before writing any code, we strongly advise you to search through the existing PRs or
issues to make sure nobody is already working on the same thing. If you are
unsure, it is always a good idea to open an issue to get some feedback.
You will need basic `git` proficiency to contribute to
๐ค Transformers. While `git` is not the easiest tool to use, it has the greatest
manual. Type `git --help` in a shell and enjoy! If you prefer books, [Pro
Git](https://git-scm.com/book/en/v2) is a very good reference.
You'll need **[Python 3.8](https://github.com/huggingface/transformers/blob/main/setup.py#L426)** or above to contribute to ๐ค Transformers. Follow the steps below to start contributing:
1. Fork the [repository](https://github.com/huggingface/transformers) by
clicking on the **[Fork](https://github.com/huggingface/transformers/fork)** button on the repository's page. This creates a copy of the code
under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
git clone [email protected]:<your Github handle>/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```
3. Create a new branch to hold your development changes:
```bash
git checkout -b a-descriptive-name-for-my-changes
```
๐จ **Do not** work on the `main` branch!
4. Set up a development environment by running the following command in a virtual environment:
```bash
pip install -e ".[dev]"
```
If ๐ค Transformers was already installed in the virtual environment, remove
it with `pip uninstall transformers` before reinstalling it in editable
mode with the `-e` flag.
Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
failure with this command. If that's the case make sure to install the Deep Learning framework you are working with
(PyTorch, TensorFlow and/or Flax) then do:
```bash
pip install -e ".[quality]"
```
which should be enough for most use cases.
5. Develop the features in your branch.
As you work on your code, you should make sure the test suite
passes. Run the tests impacted by your changes like this:
```bash
pytest tests/<TEST_TO_RUN>.py
```
For more information about tests, check out the
[Testing](https://huggingface.co/docs/transformers/testing) guide.
๐ค Transformers relies on `black` and `ruff` to format its source code
consistently. After you make changes, apply automatic style corrections and code verifications
that can't be automated in one go with:
```bash
make fixup
```
This target is also optimized to only work with files modified by the PR you're working on.
If you prefer to run the checks one after the other, the following command applies the
style corrections:
```bash
make style
```
๐ค Transformers also uses `ruff` and a few custom scripts to check for coding mistakes. Quality
controls are run by the CI, but you can run the same checks with:
```bash
make quality
```
Finally, we have a lot of scripts to make sure we don't forget to update
some files when adding a new model. You can run these scripts with:
```bash
make repo-consistency
```
To learn more about those checks and how to fix any issues with them, check out the
[Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.
If you're modifying documents under the `docs/source` directory, make sure the documentation can still be built. This check will also run in the CI when you open a pull request. To run a local check
make sure you install the documentation builder:
```bash
pip install ".[docs]"
```
Run the following command from the root of the repository:
```bash
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
```
This will build the documentation in the `~/tmp/test-build` folder where you can inspect the generated
Markdown files with your favorite editor. You can also preview the docs on GitHub when you open a pull request.
Once you're happy with your changes, add the changed files with `git add` and
record your changes locally with `git commit`:
```bash
git add modified_file.py
git commit
```
Please remember to write [good commit
messages](https://chris.beams.io/posts/git-commit/) to clearly communicate the changes you made!
To keep your copy of the code up to date with the original
repository, rebase your branch on `upstream/branch` *before* you open a pull request or if requested by a maintainer:
```bash
git fetch upstream
git rebase upstream/main
```
Push your changes to your branch:
```bash
git push -u origin a-descriptive-name-for-my-changes
```
If you've already opened a pull request, you'll need to force push with the `--force` flag. Otherwise, if the pull request hasn't been opened yet, you can just push your changes normally.
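For example, assuming the branch name used earlier in this guide:
```bash
# After a rebase, overwrite the remote copy of your branch with the rewritten history
git push --force origin a-descriptive-name-for-my-changes
```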
6. Now you can go to your fork of the repository on GitHub and click on **Pull Request** to open a pull request. Make sure you tick off all the boxes on our [checklist](#pull-request-checklist) below. When you're ready, you can send your changes to the project maintainers for review.
7. It's ok if maintainers request changes, it happens to our core contributors
too! So everyone can see the changes in the pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.
### Pull request checklist
โ The pull request title should summarize your contribution.<br>
โ If your pull request addresses an issue, please mention the issue number in the pull
request description to make sure they are linked (and people viewing the issue know you
are working on it).<br>
โ To indicate a work in progress please prefix the title with `[WIP]`. These are
useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.<br>
โ Make sure existing tests pass.<br>
โ If adding a new feature, also add tests for it.<br>
- If you are adding a new model, make sure you use
`ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests.
- If you are adding new `@slow` tests, make sure they pass using
`RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`.
- If you are adding a new tokenizer, write tests and make sure
`RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes.
- CircleCI does not run the slow tests, but GitHub Actions does every night!<br>
โ All public methods must have informative docstrings (see
[`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py)
for an example).<br>
โ Due to the rapidly growing repository, don't add any images, videos and other
non-text files that'll significantly weigh down the repository. Instead, use a Hub
repository such as [`hf-internal-testing`](https://huggingface.co/hf-internal-testing)
to host these files and reference them by URL. We recommend placing documentation
related images in the following repository:
[huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
You can open a PR on this dataset repository and ask a Hugging Face member to merge it.
For more information about the checks run on a pull request, take a look at our [Checks on a Pull Request](https://huggingface.co/docs/transformers/pr_checks) guide.
### Tests
An extensive test suite is included to test the library behavior and several examples. Library tests can be found in
the [tests](https://github.com/huggingface/transformers/tree/main/tests) folder and examples tests in the
[examples](https://github.com/huggingface/transformers/tree/main/examples) folder.
We like `pytest` and `pytest-xdist` because they're faster. From the root of the
repository, specify a *path to a subfolder or a test file* to run the test:
```bash
python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
```
Similarly, for the `examples` directory, specify a *path to a subfolder or test file* to run the test. For example, the following command tests the text classification subfolder in the PyTorch `examples` directory:
```bash
pip install -r examples/xxx/requirements.txt # only needed the first time
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```
In fact, this is actually how our `make test` and `make test-examples` commands are implemented (not including the `pip install`)!
You can also specify a smaller set of tests in order to test only the feature
you're working on.
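For example (the test path and keyword filter below are illustrative):
```bash
# Run a single test module, or narrow it further with pytest's -k keyword filter
python -m pytest tests/models/my_new_model/test_modeling_my_new_model.py
python -m pytest tests/models/my_new_model/test_modeling_my_new_model.py -k "forward"
```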
By default, slow tests are skipped but you can set the `RUN_SLOW` environment variable to
`yes` to run them. This will download many gigabytes of models so make sure you
have enough disk space, a good internet connection or a lot of patience!
<Tip warning={true}>
Remember to specify a *path to a subfolder or a test file* to run the test. Otherwise, you'll run all the tests in the `tests` or `examples` folder, which will take a very long time!
</Tip>
```bash
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model
RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification
```
Like the slow tests, there are other environment variables available which are not enabled by default during testing:
- `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers.
- `RUN_PT_FLAX_CROSS_TESTS`: Enables tests for PyTorch + Flax integration.
- `RUN_PT_TF_CROSS_TESTS`: Enables tests for TensorFlow + PyTorch integration.
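For instance, to opt in to one of them for a single run (the test path is illustrative):
```bash
# Enable the custom tokenizer tests for this invocation only
RUN_CUSTOM_TOKENIZERS=yes python -m pytest -s -v ./tests/models/my_new_model
```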
More environment variables and additional information can be found in the [testing_utils.py](src/transformers/testing_utils.py).
๐ค Transformers uses `pytest` as a test runner only. It doesn't use any
`pytest`-specific features in the test suite itself.
This means `unittest` is fully supported. Here's how to run tests with
`unittest`:
```bash
python -m unittest discover -s tests -t . -v
python -m unittest discover -s examples -t examples -v
```
### Style guide
For documentation strings, ๐ค Transformers follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html).
Check our [documentation writing guide](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification)
for more information.
### Develop on Windows
On Windows (unless you're working in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) (WSL)), you need to configure git to transform Windows `CRLF` line endings to Linux `LF` line endings:
```bash
git config core.autocrlf input
```
One way to run the `make` command on Windows is with MSYS2:
1. [Download MSYS2](https://www.msys2.org/), and we assume it's installed in `C:\msys64`.
2. Open the command line `C:\msys64\msys2.exe` (it should be available from the **Start** menu).
3. Run in the shell: `pacman -Syu` and install `make` with `pacman -S make`.
4. Add `C:\msys64\usr\bin` to your PATH environment variable.
You can now use `make` from any terminal (PowerShell, cmd.exe, etc.)! ๐
### Sync a forked repository with upstream main (the Hugging Face repository)
When updating the main branch of a forked repository, please follow these steps to avoid pinging the upstream repository which adds reference notes to each upstream PR, and sends unnecessary notifications to the developers involved in these PRs.
1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main.
2. If a PR is absolutely necessary, use the following steps after checking out your branch:
```bash
git checkout -b your-branch-for-syncing
git pull --squash --no-commit upstream main
git commit -m '<your message without GitHub references>'
git push --set-upstream origin your-branch-for-syncing
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/LICENSE | Copyright 2018- The Hugging Face team. All rights reserved.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/CODE_OF_CONDUCT.md |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[email protected].
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_hd.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!---
A useful guide for English-Hindi translation of Hugging Face documentation
- Add space around English words and numbers when they appear between Hindi characters. E.g., เคเฅเคฒ เคฎเคฟเคฒเคพเคเคฐ 100 เคธเฅ เคเคงเคฟเค เคญเคพเคทเคพเคเค; เคเฅเคฐเคพเคเคธเคซเฅเคฐเฅเคฎเคฐ เคฒเคพเคเคฌเฅเคฐเฅเคฐเฅ เคเคพ เคเคชเคฏเฅเค เคเคฐเคคเคพ เคนเฅเฅค
- เคตเคฐเฅเคเคพเคเคพเคฐ เคเคฆเฅเคงเคฐเคฃเฅเค เคเคพ เคชเฅเคฐเคฏเฅเค เคเคฐเฅเค, เคเฅเคธเฅ, "เคเคฆเฅเคงเคฐเคฃ"
Dictionary
Hugging Face: เคเคฒเฅ เคฒเคเคพเค เคเฅเคนเคฐเคพ
token: เคถเคฌเฅเคฆ (เคเคฐ เคฎเฅเคฒ เคเคเคเฅเคฐเฅเคเฅ เคเฅ เคเฅเคทเฅเค เค เคฎเฅเค เคเคฟเคนเฅเคจเคฟเคค เคเคฐเฅเค)
tokenize: เคเฅเคเคจเคจเคพเคเคเคผ เคเคฐเฅเค (เคเคฐ เคฎเฅเคฒ เคเคเคเฅเคฐเฅเคเคผเฅ เคเฅ เคเคฟเคนเฅเคจเคฟเคค เคเคฐเคจเฅ เคเฅ เคฒเคฟเค เคเฅเคทเฅเค เค เคเคพ เคเคชเคฏเฅเค เคเคฐเฅเค)
tokenizer: Tokenizer (เคฎเฅเคฒ เคเคเคเฅเคฐเฅเคเฅ เคฎเฅเค เคเฅเคทเฅเค เค เคเฅ เคธเคพเคฅ)
transformer: transformer
pipeline: เคธเคฎเคจเฅเคเฅเคฐเคฎ
API: API (เคเคจเฅเคตเคพเคฆ เคเฅ เคฌเคฟเคจเคพ)
inference: เคตเคฟเคเคพเคฐ
Trainer: เคชเฅเคฐเคถเคฟเคเฅเคทเคเฅค เคเคเฅเคทเคพ เคเฅ เคจเคพเคฎ เคเฅ เคฐเฅเคช เคฎเฅเค เคชเฅเคฐเคธเฅเคคเฅเคค เคเคฟเค เคเคพเคจเฅ เคชเคฐ เคเคจเฅเคตเคพเคฆเคฟเคค เคจเคนเฅเค เคเคฟเคฏเคพ เคเคฏเคพเฅค
pretrained/pretrain: เคชเฅเคฐเฅเคต เคชเฅเคฐเคถเคฟเคเฅเคทเคฃ
finetune: เคซเคผเคพเคเคจ เคเฅเคฏเฅเคจเคฟเคเค
community: เคธเคฎเฅเคฆเคพเคฏ
example: เคเคฌ เคตเคฟเคถเคฟเคทเฅเค เคเฅเคฆเคพเคฎ example เคเฅเคเคฒเฅเค เคเคฐเคคเฅ เคธเคฎเคฏ "เคเฅเคธ เคเฅเคธ" เคเฅ เคฐเฅเคช เคฎเฅเค เคเคจเฅเคตเคพเคฆเคฟเคค
Python data structures (e.g., list, set, dict): เคฎเฅเคฒ เคเคเคเฅเคฐเฅเคเฅ เคเฅ เคเคฟเคนเฅเคจเคฟเคค เคเคฐเคจเฅ เคเฅ เคฒเคฟเค เคธเฅเคเคฟเคฏเฅเค, เคธเฅเคเฅเค, เคถเคฌเฅเคฆเคเฅเคถเฅเค เคฎเฅเค เคเคจเฅเคตเคพเคฆ เคเคฐเฅเค เคเคฐ เคเฅเคทเฅเค เค เคเคพ เคเคชเคฏเฅเค เคเคฐเฅเค
NLP/Natural Language Processing: เคฆเฅเคตเคพเคฐเคพ NLP เคเคจเฅเคตเคพเคฆ เคเฅ เคฌเคฟเคจเคพ เคชเฅเคฐเคเค เคนเฅเคคเฅ เคนเฅเค Natural Language Processing เคชเฅเคฐเคธเฅเคคเฅเคค เคเคฟเค เคเคพเคจเฅ เคชเคฐ เคชเฅเคฐเคพเคเฅเคคเคฟเค เคญเคพเคทเคพ เคธเคเคธเคพเคงเคจ เคฎเฅเค เคเคจเฅเคตเคพเคฆ เคเคฐเฅเค
checkpoint: เคเคพเคเค เคฌเคฟเคเคฆเฅ
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<b>เคนเคฟเคจเฅเคฆเฅ</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>State-of-the-art Machine Learning for Jax, PyTorch and TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers 100 เคธเฅ เค
เคงเคฟเค เคญเคพเคทเคพเคเค เคฎเฅเค เคชเคพเค เคตเคฐเฅเคเฅเคเคฐเคฃ, เคธเฅเคเคจเคพ เคจเคฟเคทเฅเคเคฐเฅเคทเคฃ, เคชเฅเคฐเคถเฅเคจ เคเคคเฅเคคเคฐ, เคธเคพเคฐเคพเคเคถเฅเคเคฐเคฃ, เค
เคจเฅเคตเคพเคฆ, เคชเคพเค เคจเคฟเคฐเฅเคฎเคพเคฃ เคเคพ เคธเคฎเคฐเฅเคฅเคจ เคเคฐเคจเฅ เคเฅ เคฒเคฟเค เคนเคเคพเคฐเฅเค เคชเฅเคฐเฅเคต-เคชเฅเคฐเคถเคฟเคเฅเคทเคฟเคค เคฎเฅเคกเคฒ เคชเฅเคฐเคฆเคพเคจ เคเคฐเคคเคพ เคนเฅเฅค เคเคธเคเคพ เคเคฆเฅเคฆเฅเคถเฅเคฏ เคธเคฌเคธเฅ เคเคจเฅเคจเคค เคเคจเคเคฒเคชเฅ เคคเคเคจเฅเค เคเฅ เคธเคญเฅ เคเฅ เคฒเคฟเค เคธเฅเคฒเคญ เคฌเคจเคพเคจเคพ เคนเฅเฅค
๐ค Transformers provides APIs to quickly download and use a pretrained model on a given text, fine-tune it on your own dataset, and then share it with the community on the [model hub](https://huggingface.co/models). At the same time, each Python module defining an architecture is fully standalone, which makes it easy to modify and quick to use for research experiments.
๐ค Transformers is backed by the three most popular deep learning libraries, [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them. You can train your models with one framework and then load them for inference with another.
## Online demos
You can test most of our models directly on their pages on the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning and an inference API](https://huggingface.co/pricing).
Here are a few examples:
- [Masked word completion with BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Text generation with GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [Natural Language Inference with RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question answering with DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
**[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official text generation demo of this repo.
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Quick tour
We provide the `pipeline` API for using models right away. Pipelines group together a pretrained model with the preprocessing that was used during its training. Here is a quick example of using a pipeline to classify positive versus negative sentiment:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
Many NLP tasks have an out-of-the-box pretrained pipeline ready to go. For example, we can easily extract the answer to a question given some context:
``` python
>>> from transformers import pipeline
# Allocate a pipeline for question answering
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
... 'question': 'What is the name of the repository ?',
... 'context': 'Pipeline has been included in the huggingface/transformers repository'
... })
{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
```
In addition to the answer, the pretrained model used here also returns its confidence score, along with the start and end positions of the answer in the tokenized text. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
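As a small, hedged illustration of another supported task, a pipeline can also be pinned to an explicit checkpoint instead of the task default (here we reuse `google-t5/t5-base`, the model linked in the demos above; any translation checkpoint from the hub would work the same way):
```python
>>> from transformers import pipeline

# Allocate an English-to-French translation pipeline backed by an explicit checkpoint
>>> translator = pipeline("translation_en_to_fr", model="google-t5/t5-base")
>>> translator("My name is Wolfgang and I live in Berlin")  # returns [{'translation_text': ...}]
```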
Downloading and using any pretrained model on your own task is just as simple, and only needs three lines of code. Here is the PyTorch version:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
And here is the equivalent TensorFlow code:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
The tokenizer is responsible for all the preprocessing the pretrained model expects and can be called directly on a single string (as in the examples above) or a list. It returns a dictionary that you can use in downstream code or simply pass directly to your model using the `**` argument unpacking operator.
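Continuing the PyTorch example above, a minimal sketch of what that dictionary holds (the exact keys depend on the tokenizer; these are the ones a BERT-style tokenizer typically returns):
```python
>>> list(inputs.keys())
['input_ids', 'token_type_ids', 'attention_mask']
```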
The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune it on a new dataset.
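To make the `Trainer` mention concrete, here is a minimal, hedged sketch of fine-tuning on a small text-classification set. It assumes the `datasets` library is installed; the checkpoint, the `imdb` dataset and the hyperparameters are illustrative choices, not the tutorial's exact recipe:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)

# Load and tokenize a small subset of IMDB reviews
dataset = load_dataset("imdb", split="train").shuffle(seed=42).select(range(1000))
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# Run a short fine-tuning pass with the Trainer API
args = TrainingArguments(output_dir="bert-imdb-demo", per_device_train_batch_size=8, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```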
## Why should I use transformers?
1. Easy-to-use state-of-the-art models:
   - High performance on NLU and NLG tasks.
   - Low barrier to entry for educators and practitioners.
   - Few user-facing abstractions, with just three classes to learn.
   - A unified API for using all of our pretrained models.
1. Lower compute costs and a smaller carbon footprint:
   - Researchers can share trained models instead of always retraining from scratch.
   - Engineers can reduce compute time and production costs.
   - Dozens of model architectures, over 2,000 pretrained models, support for more than 100 languages.
1. Covers every part of the model lifecycle:
   - Train state-of-the-art models in just 3 lines of code.
   - Move a model between different deep learning frameworks at will.
   - Seamlessly pick the most suitable framework for training, evaluation and production.
1. Easily customize a model or an example to your needs:
   - We provide use cases for every architecture to reproduce the results published by its original authors.
   - Model internals stay transparent and consistent.
   - Model files can be used on their own, which makes them easy to modify and convenient for quick experiments.
## When should I not use transformers?
- This library is not a modular toolbox of neural network building blocks. The code in the model files is deliberately kept minimal, without additional abstraction or encapsulation, so that researchers can iterate quickly without getting lost in abstraction layers and file hopping.
- The `Trainer` API is not meant to work with just any model; it is only optimized for the models in this library. If you are looking for a generic training loop for machine learning, look elsewhere.
- Despite our best efforts, the scripts in the [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. They may not work out of the box on your specific problem, and you may need to change a few lines of code to adapt them.
## Installation
### With pip
This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ and TensorFlow 2.6+.
You should install ๐ค Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you are not yet familiar with Python virtual environments, please read this [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
First, create a virtual environment with the version of Python you plan to use and activate it.
Then, you will need to install at least one of Flax, PyTorch or TensorFlow. To install these frameworks on your platform, see the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) or the [Flax installation page](https://github.com/google/flax#quick-install).
Once one of those backends has been installed, ๐ค Transformers can be installed with pip as follows:
```bash
pip install transformers
```
If you'd like to play with the examples, or need the bleeding-edge code before an official release, you have to [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
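As a hedged convenience note, installing from source usually amounts to pointing pip at the GitHub repository (shown for illustration; see the linked installation docs for the authoritative instructions):
```bash
pip install git+https://github.com/huggingface/transformers
```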
### With conda
๐ค Transformers can be installed using conda as follows:
```shell script
conda install conda-forge::transformers
```
> **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
To install Flax, PyTorch or TensorFlow with conda, follow the instructions on their respective installation pages.
## Model architectures
**[All the model checkpoints](https://huggingface.co/models)** provided by ๐ค Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co/models), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers currently provides the following architectures: see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them.
To check whether each model already has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
These implementations have been tested on several datasets (see the example scripts) and should perform comparably to the original implementations. You can find more details on this behavior in the [examples documentation](https://huggingface.co/docs/transformers/examples).
## Learn more
| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
| [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by ๐ค Transformers |
| [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
| [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by ๐ค Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
| [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
| [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
| [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to ๐ค Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
## Citation
We have officially published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library; if you use the Transformers library, please cite it:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
# Awesome projects built with Transformers
This page lists awesome projects built on top of Transformers. Transformers is more than a toolkit to use pretrained
models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable
developers, researchers, students, professors, engineers, and anyone else to build their dream projects.
In this list, we showcase incredibly impactful and novel projects that have pushed the field forward. We celebrate
100 of these projects as we reach the milestone of 100k stars as a community; but we're very open to pull requests
adding other projects to the list. If you believe a project should be here and it's not, then please, open a PR
to add it.
## [gpt4all](https://github.com/nomic-ai/gpt4all)
[gpt4all](https://github.com/nomic-ai/gpt4all) is an ecosystem of open-source chatbots trained on massive collections of clean assistant data including code, stories and dialogue. It offers open-source, large language models such as LLaMA and GPT-J trained in an assistant-style.
Keywords: Open-source, LLaMa, GPT-J, instruction, assistant
## [recommenders](https://github.com/microsoft/recommenders)
This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. It goes over several aspects required to build efficient recommendation systems: data preparation, modeling, evaluation, model selection & optimization, as well as operationalization
Keywords: Recommender systems, AzureML
## [IOPaint](https://github.com/Sanster/IOPaint)
Image inpainting tool powered by Stable Diffusion. Remove any unwanted object, defect, people from your pictures or erase and replace anything on your pictures.
Keywords: inpainting, SD, Stable Diffusion
## [flair](https://github.com/flairNLP/flair)
FLAIR is a powerful PyTorch NLP framework, covering several important tasks: NER, sentiment analysis, part-of-speech tagging, and text and document embeddings, among other things.
Keywords: NLP, text embedding, document embedding, biomedical, NER, PoS, sentiment-analysis
## [mindsdb](https://github.com/mindsdb/mindsdb)
MindsDB is a low-code ML platform, which automates and integrates several ML frameworks into the data stack as "AI Tables" to streamline the integration of AI into applications, making it accessible to developers of all skill levels.
Keywords: Database, low-code, AI table
## [langchain](https://github.com/hwchase17/langchain)
[langchain](https://github.com/hwchase17/langchain) is aimed at assisting in the development of apps merging both LLMs and other sources of knowledge. The library allows chaining calls to applications, creating a sequence across many tools.
Keywords: LLMs, Large Language Models, Agents, Chains
## [LlamaIndex](https://github.com/jerryjliu/llama_index)
[LlamaIndex](https://github.com/jerryjliu/llama_index) is a project that provides a central interface to connect your LLMs with external data. It provides various kinds of indices and retrieval mechanisms to perform different LLM tasks and obtain knowledge-augmented results.
Keywords: LLMs, Large Language Models, Data Retrieval, Indices, Knowledge Augmentation
## [ParlAI](https://github.com/facebookresearch/ParlAI)
[ParlAI](https://github.com/facebookresearch/ParlAI) is a python framework for sharing, training and testing dialogue models, from open-domain chitchat, to task-oriented dialogue, to visual question answering. It provides more than 100 datasets under the same API, a large zoo of pretrained models, a set of agents, and has several integrations.
Keywords: Dialogue, Chatbots, VQA, Datasets, Agents
## [sentence-transformers](https://github.com/UKPLab/sentence-transformers)
This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa etc. and achieve state-of-the-art performance on various tasks. Text is embedded in a vector space such that similar text is close together and can efficiently be found using cosine similarity.
Keywords: Dense vector representations, Text embeddings, Sentence embeddings
## [ludwig](https://github.com/ludwig-ai/ludwig)
Ludwig is a declarative machine learning framework that makes it easy to define machine learning pipelines using a simple and flexible data-driven configuration system. Ludwig is targeted at a wide variety of AI tasks. It provides a data-driven configuration system, training, prediction, and evaluation scripts, as well as a programmatic API.
Keywords: Declarative, Data-driven, ML Framework
## [InvokeAI](https://github.com/invoke-ai/InvokeAI)
[InvokeAI](https://github.com/invoke-ai/InvokeAI) is an engine for Stable Diffusion models, aimed at professionals, artists, and enthusiasts. It leverages the latest AI-driven technologies through CLI as well as a WebUI.
Keywords: Stable-Diffusion, WebUI, CLI
## [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
[PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) is an easy-to-use and powerful NLP library particularly targeted at the Chinese languages. It has support for multiple pre-trained model zoos, and supports a wide-range of NLP tasks from research to industrial applications.
Keywords: NLP, Chinese, Research, Industry
## [stanza](https://github.com/stanfordnlp/stanza)
The Stanford NLP Group's official Python NLP library. It contains support for running various accurate natural language processing tools on 60+ languages and for accessing the Java Stanford CoreNLP software from Python.
Keywords: NLP, Multilingual, CoreNLP
## [DeepPavlov](https://github.com/deeppavlov/DeepPavlov)
[DeepPavlov](https://github.com/deeppavlov/DeepPavlov) is an open-source conversational AI library. It is designed for the development of production ready chat-bots and complex conversational systems, as well as research in the area of NLP and, particularly, of dialog systems.
Keywords: Conversational, Chatbot, Dialog
## [alpaca-lora](https://github.com/tloen/alpaca-lora)
Alpaca-lora contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). The repository provides training (fine-tuning) as well as generation scripts.
Keywords: LoRA, Parameter-efficient fine-tuning
## [imagen-pytorch](https://github.com/lucidrains/imagen-pytorch)
An open-source Implementation of Imagen, Google's closed-source Text-to-Image Neural Network that beats DALL-E2. As of release, it is the new SOTA for text-to-image synthesis.
Keywords: Imagen, Text-to-image
## [adapters](https://github.com/adapter-hub/adapters)
[adapters](https://github.com/adapter-hub/adapters) is an extension of HuggingFace's Transformers library, integrating adapters into state-of-the-art language models by incorporating AdapterHub, a central repository for pre-trained adapter modules. It is a drop-in replacement for transformers, which is regularly updated to stay up-to-date with the developments of transformers.
Keywords: Adapters, LoRA, Parameter-efficient fine-tuning, Hub
## [NeMo](https://github.com/NVIDIA/NeMo)
NVIDIA [NeMo](https://github.com/NVIDIA/NeMo) is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), text-to-speech synthesis (TTS), large language models (LLMs), and natural language processing (NLP). The primary objective of [NeMo](https://github.com/NVIDIA/NeMo) is to help researchers from industry and academia reuse prior work (code and pretrained models) and make it easier to create new [conversational AI models](https://developer.nvidia.com/conversational-ai#started).
Keywords: Conversational, ASR, TTS, LLMs, NLP
## [Runhouse](https://github.com/run-house/runhouse)
[Runhouse](https://github.com/run-house/runhouse) allows you to send code and data to any of your compute or data infrastructure, all in Python, and continue to interact with them normally from your existing code and environment. Runhouse developers mention:
> Think of it as an expansion pack to your Python interpreter that lets it take detours to remote machines or manipulate remote data.
Keywords: MLOps, Infrastructure, Data storage, Modeling
## [MONAI](https://github.com/Project-MONAI/MONAI)
[MONAI](https://github.com/Project-MONAI/MONAI) is a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of PyTorch Ecosystem. Its ambitions are:
- developing a community of academic, industrial and clinical researchers collaborating on a common foundation;
- creating state-of-the-art, end-to-end training workflows for healthcare imaging;
- providing researchers with the optimized and standardized way to create and evaluate deep learning models.
Keywords: Healthcare imaging, Training, Evaluation
## [simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers)
Simple Transformers lets you quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize, train, and evaluate a model. It supports a wide variety of NLP tasks.
Keywords: Framework, simplicity, NLP
## [JARVIS](https://github.com/microsoft/JARVIS)
[JARVIS](https://github.com/microsoft/JARVIS) is a system attempting to merge LLMs such as GPT-4 with the rest of the open-source ML community: leveraging up to 60 downstream models in order to perform tasks identified by the LLM.
Keywords: LLM, Agents, HF Hub
## [transformers.js](https://xenova.github.io/transformers.js/)
[transformers.js](https://xenova.github.io/transformers.js/) is a JavaScript library targeted at running models from transformers directly within the browser.
Keywords: Transformers, JavaScript, browser
## [bumblebee](https://github.com/elixir-nx/bumblebee)
Bumblebee provides pre-trained Neural Network models on top of Axon, a neural networks library for the Elixir language. It includes integration with ๐ค Models, allowing anyone to download and perform Machine Learning tasks with few lines of code.
Keywords: Elixir, Axon
## [argilla](https://github.com/argilla-io/argilla)
Argilla is an open-source platform providing advanced NLP labeling, monitoring, and workspaces. It is compatible with many open source ecosystems such as Hugging Face, Stanza, FLAIR, and others.
Keywords: NLP, Labeling, Monitoring, Workspaces
## [haystack](https://github.com/deepset-ai/haystack)
Haystack is an open source NLP framework to interact with your data using Transformer models and LLMs. It offers production-ready tools to quickly build complex decision making, question answering, semantic search, text generation applications, and more.
Keywords: NLP, Framework, LLM
## [spaCy](https://github.com/explosion/spaCy)
[spaCy](https://github.com/explosion/spaCy) is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. It offers support for transformers models through its third party package, spacy-transformers.
Keywords: NLP, Framework
## [speechbrain](https://github.com/speechbrain/speechbrain)
SpeechBrain is an open-source and all-in-one conversational AI toolkit based on PyTorch.
The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, speech separation, language identification, multi-microphone signal processing, and many others.
Keywords: Conversational, Speech
## [skorch](https://github.com/skorch-dev/skorch)
Skorch is a scikit-learn compatible neural network library that wraps PyTorch. It has support for models within transformers, and tokenizers from tokenizers.
Keywords: Scikit-Learn, PyTorch
## [bertviz](https://github.com/jessevig/bertviz)
BertViz is an interactive tool for visualizing attention in Transformer language models such as BERT, GPT2, or T5. It can be run inside a Jupyter or Colab notebook through a simple Python API that supports most Huggingface models.
Keywords: Visualization, Transformers
## [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax)
[mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) is a haiku library using the xmap/pjit operators in JAX for model parallelism of transformers. This library is designed for scalability up to approximately 40B parameters on TPUv3s. It was the library used to train the GPT-J model.
Keywords: Haiku, Model parallelism, LLM, TPU
## [deepchem](https://github.com/deepchem/deepchem)
DeepChem aims to provide a high quality open-source toolchain that democratizes the use of deep-learning in drug discovery, materials science, quantum chemistry, and biology.
Keywords: Drug discovery, Materials Science, Quantum Chemistry, Biology
## [OpenNRE](https://github.com/thunlp/OpenNRE)
An Open-Source Package for Neural Relation Extraction (NRE). It is targeted at a wide range of users, from newcomers to relation extraction, to developers, researchers, or students.
Keywords: Neural Relation Extraction, Framework
## [pycorrector](https://github.com/shibing624/pycorrector)
PyCorrector is a Chinese text error correction tool. It uses a language model to detect errors, and pinyin and shape features to correct Chinese text errors. It can be used for Chinese pinyin and stroke input methods.
Keywords: Chinese, Error correction tool, Language model, Pinyin
## [nlpaug](https://github.com/makcedward/nlpaug)
This python library helps you with augmenting nlp for machine learning projects. It is a lightweight library featuring synthetic data generation for improving model performance, support for audio and text, and compatibility with several ecosystems (scikit-learn, pytorch, tensorflow).
Keywords: Data augmentation, Synthetic data generation, Audio, NLP
## [dream-textures](https://github.com/carson-katri/dream-textures)
[dream-textures](https://github.com/carson-katri/dream-textures) is a library targeted at bringing stable-diffusion support within Blender. It supports several use-cases, such as image generation, texture projection, inpainting/outpainting, ControlNet, and upscaling.
Keywords: Stable-Diffusion, Blender
## [seldon-core](https://github.com/SeldonIO/seldon-core)
Seldon core converts your ML models (Tensorflow, Pytorch, H2o, etc.) or language wrappers (Python, Java, etc.) into production REST/GRPC microservices.
Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box including Advanced Metrics, Request Logging, Explainers, Outlier Detectors, A/B Tests, Canaries and more.
Keywords: Microservices, Modeling, Language wrappers
## [open_model_zoo](https://github.com/openvinotoolkit/open_model_zoo)
This repository includes optimized deep learning models and a set of demos to expedite development of high-performance deep learning inference applications. Use these free pre-trained models instead of training your own models to speed-up the development and production deployment process.
Keywords: Optimized models, Demos
## [ml-stable-diffusion](https://github.com/apple/ml-stable-diffusion)
ML-Stable-Diffusion is a repository by Apple bringing Stable Diffusion support to Core ML, on Apple Silicon devices. It supports stable diffusion checkpoints hosted on the Hugging Face Hub.
Keywords: Stable Diffusion, Apple Silicon, Core ML
## [stable-dreamfusion](https://github.com/ashawkey/stable-dreamfusion)
Stable-Dreamfusion is a pytorch implementation of the text-to-3D model Dreamfusion, powered by the Stable Diffusion text-to-2D model.
Keywords: Text-to-3D, Stable Diffusion
## [txtai](https://github.com/neuml/txtai)
[txtai](https://github.com/neuml/txtai) is an open-source platform for semantic search and workflows powered by language models. txtai builds embeddings databases, which are a union of vector indexes and relational databases enabling similarity search with SQL. Semantic workflows connect language models together into unified applications.
Keywords: Semantic search, LLM
## [djl](https://github.com/deepjavalibrary/djl)
Deep Java Library (DJL) is an open-source, high-level, engine-agnostic Java framework for deep learning. DJL is designed to be easy to get started with and simple to use for developers. DJL provides a native Java development experience and functions like any other regular Java library. DJL offers [a Java binding](https://github.com/deepjavalibrary/djl/tree/master/extensions/tokenizers) for HuggingFace Tokenizers and easy conversion toolkit for HuggingFace model to deploy in Java.
Keywords: Java, Framework
## [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/)
This project provides a unified framework to test generative language models on a large number of different evaluation tasks. It has support for more than 200 tasks, and supports different ecosystems: HF Transformers, GPT-NeoX, DeepSpeed, as well as the OpenAI API.
Keywords: LLM, Evaluation, Few-shot
## [gpt-neox](https://github.com/EleutherAI/gpt-neox)
This repository records EleutherAI's library for training large-scale language models on GPUs. The framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. It is focused on training multi-billion-parameter models.
Keywords: Training, LLM, Megatron, DeepSpeed
## [muzic](https://github.com/microsoft/muzic)
Muzic is a research project on AI music that empowers music understanding and generation with deep learning and artificial intelligence. Muzic was created by researchers from Microsoft Research Asia.
Keywords: Music understanding, Music generation
## [dalle-flow](https://github.com/jina-ai/dalle-flow)
DALLยทE Flow is an interactive workflow for generating high-definition images from a text prompt. It leverages DALLยทE-Mega, GLID-3 XL, and Stable Diffusion to generate image candidates, and then calls CLIP-as-service to rank the candidates w.r.t. the prompt.
The preferred candidate is fed to GLID-3 XL for diffusion, which often enriches the texture and background. Finally, the candidate is upscaled to 1024x1024 via SwinIR.
Keywords: High-definition image generation, Stable Diffusion, DALL-E Mega, GLID-3 XL, CLIP, SwinIR
## [lightseq](https://github.com/bytedance/lightseq)
LightSeq is a high performance training and inference library for sequence processing and generation implemented in CUDA. It enables highly efficient computation of modern NLP and CV models such as BERT, GPT, Transformer, etc. It is therefore best useful for machine translation, text generation, image classification, and other sequence related tasks.
Keywords: Training, Inference, Sequence Processing, Sequence Generation
## [LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR)
The goal of this project is to create a learning based system that takes an image of a math formula and returns corresponding LaTeX code.
Keywords: OCR, LaTeX, Math formula
## [open_clip](https://github.com/mlfoundations/open_clip)
OpenCLIP is an open source implementation of OpenAI's CLIP.
The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift.
The starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset.
Specifically, a ResNet-50 model trained with this codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet.
Keywords: CLIP, Open-source, Contrastive, Image-text
## [dalle-playground](https://github.com/saharmor/dalle-playground)
A playground to generate images from any text prompt using Stable Diffusion and Dall-E mini.
Keywords: WebUI, Stable Diffusion, Dall-E mini
## [FedML](https://github.com/FedML-AI/FedML)
[FedML](https://github.com/FedML-AI/FedML) is a federated learning and analytics library enabling secure and collaborative machine learning on decentralized data anywhere at any scale.
It supports large-scale cross-silo federated learning, and cross-device federated learning on smartphones/IoTs, and research simulation.
Keywords: Federated Learning, Analytics, Collaborative ML, Decentralized
## [gpt-code-clippy](https://github.com/CodedotAl/gpt-code-clippy)
GPT-Code-Clippy (GPT-CC) is an open source version of GitHub Copilot, a language model -- based on GPT-3, called GPT-Codex -- that is fine-tuned on publicly available code from GitHub.
Keywords: LLM, Code
## [TextAttack](https://github.com/QData/TextAttack)
[TextAttack](https://github.com/QData/TextAttack) ๐ is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
Keywords: Adversarial attacks, Data augmentation, NLP
## [OpenPrompt](https://github.com/thunlp/OpenPrompt)
Prompt-learning is a paradigm to adapt pre-trained language models (PLMs) to downstream NLP tasks, which modify the input text with a textual template and directly uses PLMs to conduct pre-trained tasks. This library provides a standard, flexible and extensible framework to deploy the prompt-learning pipeline. [OpenPrompt](https://github.com/thunlp/OpenPrompt) supports loading PLMs directly from https://github.com/huggingface/transformers.
## [text-generation-webui](https://github.com/oobabooga/text-generation-webui/)
[text-generation-webui](https://github.com/oobabooga/text-generation-webui/) is a Gradio Web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Keywords: LLM, WebUI
## [libra](https://github.com/Palashio/libra)
An ergonomic machine learning [libra](https://github.com/Palashio/libra)ry for non-technical users. It focuses on ergonomics and on ensuring that training a model is as simple as it can be.
Keywords: Ergonomic, Non-technical
## [alibi](https://github.com/SeldonIO/alibi)
Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
Keywords: Model inspection, Model interpretation, Black-box, White-box
## [tortoise-tts](https://github.com/neonbjb/tortoise-tts)
Tortoise is a text-to-speech program built with the following priorities: strong multi-voice capabilities, and highly realistic prosody and intonation.
Keywords: Text-to-speech
## [flower](https://github.com/adap/flower)
Flower (flwr) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles: customizability, extendability, framework agnosticity, and ease-of-use.
Keywords: Federated learning systems, Customizable, Extendable, Framework-agnostic, Simplicity
## [fast-bert](https://github.com/utterworks/fast-bert)
Fast-Bert is a deep learning library that allows developers and data scientists to train and deploy BERT and XLNet based models for natural language processing tasks beginning with Text Classification. It is aimed at simplicity.
Keywords: Deployment, BERT, XLNet
## [towhee](https://github.com/towhee-io/towhee)
Towhee makes it easy to build neural data processing pipelines for AI applications. We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks. Users can use Towhee's Pythonic API to build a prototype of their pipeline and automatically optimize it for production-ready environments.
Keywords: Data processing pipeline, Optimization
## [alibi-detect](https://github.com/SeldonIO/alibi-detect)
Alibi Detect is an open source Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection.
Keywords: Adversarial, Outlier, Drift detection
## [FARM](https://github.com/deepset-ai/FARM)
[FARM](https://github.com/deepset-ai/FARM) makes Transfer Learning with BERT & Co simple, fast and enterprise-ready. It's built upon transformers and provides additional features to simplify the life of developers: Parallelized preprocessing, highly modular design, multi-task learning, experiment tracking, easy debugging and close integration with AWS SageMaker.
Keywords: Transfer Learning, Modular design, Multi-task learning, Experiment tracking
## [aitextgen](https://github.com/minimaxir/aitextgen)
A robust Python tool for text-based AI training and generation using OpenAI's GPT-2 and EleutherAI's GPT Neo/GPT-3 architecture.
[aitextgen](https://github.com/minimaxir/aitextgen) is a Python package that leverages PyTorch, Hugging Face Transformers and pytorch-lightning with specific optimizations for text generation using GPT-2, plus many added features.
Keywords: Training, Generation
## [diffgram](https://github.com/diffgram/diffgram)
Diffgram aims to integrate human supervision into platforms. We support your team programmatically changing the UI (Schema, layout, etc.) like in Streamlit. This means that you can collect and annotate timely data from users. In other words, we are the platform behind your platform, an integrated part of your application, to ship new & better AI products faster.
Keywords: Human supervision, Platform
## [ecco](https://github.com/jalammar/ecco)
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTA, T5, and T0).
Keywords: Model explainability
## [s3prl](https://github.com/s3prl/s3prl)
[s3prl](https://github.com/s3prl/s3prl) stands for Self-Supervised Speech Pre-training and Representation Learning. Self-supervised speech pre-trained models are called upstream in this toolkit, and are utilized in various downstream tasks.
Keywords: Speech, Training
## [ru-dalle](https://github.com/ai-forever/ru-dalle)
RuDALL-E aims to be similar to DALL-E, targeted at the Russian language.
Keywords: DALL-E, Russian
## [DeepKE](https://github.com/zjunlp/DeepKE)
[DeepKE](https://github.com/zjunlp/DeepKE) is a knowledge extraction toolkit for knowledge graph construction, supporting cnSchema, low-resource, document-level and multimodal scenarios for entity, relation and attribute extraction.
Keywords: Knowledge Extraction, Knowledge Graphs
## [Nebuly](https://github.com/nebuly-ai/nebuly)
Nebuly is the next-generation platform to monitor and optimize your AI costs in one place. The platform connects to all your AI cost sources (compute, API providers, AI software licenses, etc) and centralizes them in one place to give you full visibility on a model basis. The platform also provides optimization recommendations and a co-pilot model that can guide during the optimization process. The platform builds on top of the open-source tools allowing you to optimize the different steps of your AI stack to squeeze out the best possible cost performances.
Keywords: Optimization, Performance, Monitoring
## [imaginAIry](https://github.com/brycedrennan/imaginAIry)
Offers a CLI and a Python API to generate images with Stable Diffusion. It has support for many tools, like image structure control (controlnet), instruction-based image edits (InstructPix2Pix), prompt-based masking (clipseg), among others.
Keywords: Stable Diffusion, CLI, Python API
## [sparseml](https://github.com/neuralmagic/sparseml)
SparseML is an open-source model optimization toolkit that enables you to create inference-optimized sparse models using pruning, quantization, and distillation algorithms. Models optimized with SparseML can then be exported to ONNX and deployed with DeepSparse for GPU-class performance on CPU hardware.
Keywords: Model optimization, Pruning, Quantization, Distillation
## [opacus](https://github.com/pytorch/opacus)
Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance, and allows the client to online track the privacy budget expended at any given moment.
Keywords: Differential privacy
## [LAVIS](https://github.com/salesforce/LAVIS)
[LAVIS](https://github.com/salesforce/LAVIS) is a Python deep learning library for LAnguage-and-VISion intelligence research and applications. This library aims to provide engineers and researchers with a one-stop solution to rapidly develop models for their specific multimodal scenarios, and benchmark them across standard and customized datasets. It features a unified interface design to access these models and datasets.
Keywords: Multimodal, NLP, Vision
## [buzz](https://github.com/chidiwilliams/buzz)
Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
Keywords: Audio transcription, Translation
## [rust-bert](https://github.com/guillaume-be/rust-bert)
Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's Transformers library, using the tch-rs crate and pre-processing from rust-tokenizers. Supports multi-threaded tokenization and GPU inference. This repository exposes the model base architecture, task-specific heads and ready-to-use pipelines.
Keywords: Rust, BERT, Inference
## [EasyNLP](https://github.com/alibaba/EasyNLP)
[EasyNLP](https://github.com/alibaba/EasyNLP) is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of NLP algorithms for various NLP applications. [EasyNLP](https://github.com/alibaba/EasyNLP) integrates knowledge distillation and few-shot learning for landing large pre-trained models, together with various popular multi-modality pre-trained models. It provides a unified framework of model training, inference, and deployment for real-world applications.
Keywords: NLP, Knowledge distillation, Few-shot learning, Multi-modality, Training, Inference, Deployment
## [TurboTransformers](https://github.com/Tencent/TurboTransformers)
A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU.
Keywords: Optimization, Performance
## [hivemind](https://github.com/learning-at-home/hivemind)
Hivemind is a PyTorch library for decentralized deep learning across the Internet. Its intended usage is training one large model on hundreds of computers from different universities, companies, and volunteers.
Keywords: Decentralized training
## [docquery](https://github.com/impira/docquery)
DocQuery is a library and command-line tool that makes it easy to analyze semi-structured and unstructured documents (PDFs, scanned images, etc.) using large language models (LLMs). You simply point DocQuery at one or more documents and specify a question you want to ask. DocQuery is created by the team at Impira.
Keywords: Semi-structured documents, Unstructured documents, LLM, Document Question Answering
## [CodeGeeX](https://github.com/THUDM/CodeGeeX)
[CodeGeeX](https://github.com/THUDM/CodeGeeX) is a large-scale multilingual code generation model with 13 billion parameters, pre-trained on a large code corpus of more than 20 programming languages. It has several unique features:
- Multilingual code generation
- Crosslingual code translation
- Is a customizable programming assistant
Keywords: Code Generation Model
## [ktrain](https://github.com/amaiya/ktrain)
[ktrain](https://github.com/amaiya/ktrain) is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, [ktrain](https://github.com/amaiya/ktrain) is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners.
Keywords: Keras wrapper, Model building, Training, Deployment
## [FastDeploy](https://github.com/PaddlePaddle/FastDeploy)
[FastDeploy](https://github.com/PaddlePaddle/FastDeploy) is an easy-to-use and high-performance AI model deployment toolkit for Cloud, Mobile and Edge, with an out-of-the-box, unified experience and end-to-end optimization for over 160+ text, vision, speech and cross-modal AI models. It covers image classification, object detection, OCR, face detection, matting, pp-tracking, NLP, stable diffusion, TTS and other tasks to meet developers' industrial deployment needs for multi-scenario, multi-hardware and multi-platform use.
Keywords: Model deployment, Cloud, Mobile, Edge
## [underthesea](https://github.com/undertheseanlp/underthesea)
[underthesea](https://github.com/undertheseanlp/underthesea) is a Vietnamese NLP toolkit. Underthesea is a suite of open-source Python modules, data sets and tutorials supporting research and development in Vietnamese Natural Language Processing. It provides an extremely easy-to-use API to quickly apply pretrained NLP models to your Vietnamese text, for tasks such as word segmentation, part-of-speech tagging (PoS), named entity recognition (NER), text classification and dependency parsing.
Keywords: Vietnamese, NLP
## [hasktorch](https://github.com/hasktorch/hasktorch)
Hasktorch is a library for tensors and neural networks in Haskell. It is an independent open source community project which leverages the core C++ libraries shared by PyTorch.
Keywords: Haskell, Neural Networks
## [donut](https://github.com/clovaai/donut)
Donut, or Document understanding transformer, is a new method of document understanding that utilizes an OCR-free end-to-end Transformer model.
Donut does not require off-the-shelf OCR engines/APIs, yet it shows state-of-the-art performances on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing).
Keywords: Document Understanding
## [transformers-interpret](https://github.com/cdpierse/transformers-interpret)
Transformers Interpret is a model explainability tool designed to work exclusively with the transformers package.
In line with the philosophy of the Transformers package, Transformers Interpret allows any transformers model to be explained in just two lines. Explainers are available for both text and computer vision models. Visualizations are also available in notebooks and as savable PNG and HTML files.
Keywords: Model interpretation, Visualization
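A sketch of the "two lines" pattern mentioned above, following the project's documented text-classification example (the explainer class name is taken from the transformers-interpret docs; the checkpoint is just a common sentiment model):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer  # name per the project's docs

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The two lines: build an explainer, then call it on a piece of text.
explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("I love the new explainability tooling.")
print(word_attributions)  # per-token attribution scores for the predicted class
```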
## [mlrun](https://github.com/mlrun/mlrun)
MLRun is an open MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources. With MLRun, you can choose any IDE on your local machine or on the cloud. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous improvements.
Keywords: MLOps
## [FederatedScope](https://github.com/alibaba/FederatedScope)
[FederatedScope](https://github.com/alibaba/FederatedScope) is a comprehensive federated learning platform that provides convenient usage and flexible customization for various federated learning tasks in both academia and industry. Based on an event-driven architecture, [FederatedScope](https://github.com/alibaba/FederatedScope) integrates rich collections of functionalities to satisfy the burgeoning demands from federated learning, and aims to build up an easy-to-use platform for promoting learning safely and effectively.
Keywords: Federated learning, Event-driven
## [pythainlp](https://github.com/PyThaiNLP/pythainlp)
PyThaiNLP is a Python package for text processing and linguistic analysis, similar to NLTK, with a focus on the Thai language.
Keywords: Thai, NLP, NLTK
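A small sketch of typical PyThaiNLP usage in its NLTK-like style (function names follow the project's documentation; treat them as assumptions if your version differs):

```python
# Sketch of PyThaiNLP's NLTK-style API; see the project docs for the full toolkit.
from pythainlp.tokenize import word_tokenize
from pythainlp.tag import pos_tag

text = "ผมรักภาษาไทย"          # "I love the Thai language"
tokens = word_tokenize(text)    # Thai word segmentation
print(tokens)
print(pos_tag(tokens))          # part-of-speech tags for the segmented tokens
```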
## [FlagAI](https://github.com/FlagAI-Open/FlagAI)
[FlagAI](https://github.com/FlagAI-Open/FlagAI) (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale models. Our goal is to support training, fine-tuning, and deployment of large-scale models on various downstream tasks with multi-modality.
Keywords: Large models, Training, Fine-tuning, Deployment, Multi-modal
## [pyserini](https://github.com/castorini/pyserini)
[pyserini](https://github.com/castorini/pyserini) is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with the group's Anserini IR toolkit. Retrieval using dense representations is provided via integration with Facebook's Faiss library.
Keywords: IR, Information Retrieval, Dense, Sparse
## [baal](https://github.com/baal-org/baal)
[baal](https://github.com/baal-org/baal) is an active learning library that supports both industrial applications and research use cases. [baal](https://github.com/baal-org/baal) currently supports Monte-Carlo Dropout, MCDropConnect, deep ensembles, and semi-supervised learning.
Keywords: Active Learning, Research, Labeling
## [cleanlab](https://github.com/cleanlab/cleanlab)
[cleanlab](https://github.com/cleanlab/cleanlab) is the standard data-centric AI package for data quality and machine learning with messy, real-world data and labels. For text, image, tabular, audio (among others) datasets, you can use cleanlab to automatically: detect data issues (outliers, label errors, near duplicates, etc), train robust ML models, infer consensus + annotator-quality for multi-annotator data, suggest data to (re)label next (active learning).
Keywords: Data-Centric AI, Data Quality, Noisy Labels, Outlier Detection, Active Learning
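To make the label-error detection above concrete, here is a toy sketch with cleanlab's `find_label_issues` (the API name follows cleanlab 2.x documentation; the arrays are made up):

```python
import numpy as np
from cleanlab.filter import find_label_issues  # documented cleanlab 2.x entry point

# Toy data: out-of-sample predicted probabilities from any classifier, plus noisy labels.
pred_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05], [0.1, 0.9]])
labels = np.array([0, 1, 1, 1])  # the third label disagrees strongly with the model

issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issue_indices)  # indices of the examples most likely to be mislabeled
```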
## [BentoML](https://github.com/bentoml/BentoML)
[BentoML](https://github.com/bentoml) is the unified framework for building, shipping, and scaling production-ready AI applications incorporating traditional ML, pre-trained AI models, and Generative and Large Language Models.
All Hugging Face models and pipelines can be seamlessly integrated into BentoML applications, enabling the running of models on the most suitable hardware and independent scaling based on usage.
Keywords: BentoML, Framework, Deployment, AI Applications
## [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory)
[LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory) offers a user-friendly fine-tuning framework that incorporates PEFT. The repository includes training (fine-tuning) and inference examples for LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, and other LLMs. A ChatGLM version is also available in [ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning).
Keywords: PEFT, fine-tuning, LLaMA-2, ChatGLM, Qwen
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_pt-br.md | <!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<b>ะ ortuguรชs</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>Aprendizado de mรกquina de รบltima geraรงรฃo para JAX, PyTorch e TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
A biblioteca ๐ค Transformers oferece milhares de modelos prรฉ-treinados para executar tarefas em diferentes modalidades, como texto, visรฃo e รกudio.
Esses modelos podem ser aplicados a:
* ๐ Texto, para tarefas como classificaรงรฃo de texto, extraรงรฃo de informaรงรตes, resposta a perguntas, sumarizaรงรฃo, traduรงรฃo, geraรงรฃo de texto, em mais de 100 idiomas.
* ๐ผ๏ธ Imagens, para tarefas como classificaรงรฃo de imagens, detecรงรฃo de objetos e segmentaรงรฃo.
* ๐ฃ๏ธ รudio, para tarefas como reconhecimento de fala e classificaรงรฃo de รกudio.
Os modelos Transformer tambรฉm podem executar tarefas em diversas modalidades combinadas, como responder a perguntas em tabelas, reconhecimento รณptico de caracteres, extraรงรฃo de informaรงรตes de documentos digitalizados, classificaรงรฃo de vรญdeo e resposta a perguntas visuais.
A biblioteca ๐ค Transformers oferece APIs para baixar e usar rapidamente esses modelos prรฉ-treinados em um texto especรญfico, ajustรก-los em seus prรณprios conjuntos de dados e, em seguida, compartilhรก-los com a comunidade em nosso [model hub](https://huggingface.co/models). Ao mesmo tempo, cada mรณdulo Python que define uma arquitetura รฉ totalmente independente e pode ser modificado para permitir experimentos de pesquisa rรกpidos.
A biblioteca ๐ค Transformers รฉ respaldada pelas trรชs bibliotecas de aprendizado profundo mais populares โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) e [TensorFlow](https://www.tensorflow.org/) โ com uma integraรงรฃo perfeita entre elas. ร simples treinar seus modelos com uma delas antes de carregรก-los para inferรชncia com a outra
## Demonstraรงรฃo Online
Vocรช pode testar a maioria de nossos modelos diretamente em suas pรกginas a partir do [model hub](https://huggingface.co/models). Tambรฉm oferecemos [hospedagem de modelos privados, versionamento e uma API de inferรชncia](https://huggingface.co/pricing)
para modelos pรบblicos e privados.
Aqui estรฃo alguns exemplos:
Em Processamento de Linguagem Natural:
- [Completar palavra mascarada com BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Reconhecimento de Entidades Nomeadas com Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Geraรงรฃo de texto com GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C)
- [Inferรชncia de Linguagem Natural com RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Sumarizaรงรฃo com BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Resposta a perguntas com DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Traduรงรฃo com T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
Em Visรฃo Computacional:
- [Classificaรงรฃo de Imagens com ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Detecรงรฃo de Objetos com DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Segmentaรงรฃo Semรขntica com SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Segmentaรงรฃo Panรณptica com MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [Estimativa de Profundidade com DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
- [Classificaรงรฃo de Vรญdeo com VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Segmentaรงรฃo Universal com OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
Em รudio:
- [Reconhecimento Automรกtico de Fala com Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Detecรงรฃo de Palavras-Chave com Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Classificaรงรฃo de รudio com Transformer de Espectrograma de รudio](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
Em Tarefas Multimodais:
- [Respostas de Perguntas em Tabelas com TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Respostas de Perguntas Visuais com ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Classificaรงรฃo de Imagens sem Anotaรงรฃo com CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- [Respostas de Perguntas em Documentos com LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Classificaรงรฃo de Vรญdeo sem Anotaรงรฃo com X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
## 100 Projetos Usando Transformers
Transformers รฉ mais do que um conjunto de ferramentas para usar modelos prรฉ-treinados: รฉ uma comunidade de projetos construรญdos ao seu redor e o Hugging Face Hub. Queremos que o Transformers permita que desenvolvedores, pesquisadores, estudantes, professores, engenheiros e qualquer outra pessoa construa seus projetos dos sonhos.
Para celebrar as 100.000 estrelas do Transformers, decidimos destacar a comunidade e criamos a pรกgina [awesome-transformers](./awesome-transformers.md), que lista 100 projetos incrรญveis construรญdos nas proximidades dos Transformers.
Se vocรช possui ou utiliza um projeto que acredita que deveria fazer parte da lista, abra um PR para adicionรก-lo!
## Se vocรช estรก procurando suporte personalizado da equipe Hugging Face
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Tour Rรกpido
Para usar imediatamente um modelo em uma entrada especรญfica (texto, imagem, รกudio, ...), oferecemos a API `pipeline`. Os pipelines agrupam um modelo prรฉ-treinado com o prรฉ-processamento que foi usado durante o treinamento desse modelo. Aqui estรก como usar rapidamente um pipeline para classificar textos como positivos ou negativos:
```python
>>> from transformers import pipeline
# Carregue o pipeline de classificaรงรฃo de texto
>>> classifier = pipeline("sentiment-analysis")
# Classifique o texto como positivo ou negativo
>>> classifier("Estamos muito felizes em apresentar o pipeline no repositรณrio dos transformers.")
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
A segunda linha de cรณdigo baixa e armazena em cache o modelo prรฉ-treinado usado pelo pipeline, enquanto a terceira linha o avalia no texto fornecido. Neste exemplo, a resposta รฉ "positiva" com uma confianรงa de 99,97%.
Muitas tarefas tรชm um `pipeline` prรฉ-treinado pronto para uso, nรฃo apenas em PNL, mas tambรฉm em visรฃo computacional e processamento de รกudio. Por exemplo, podemos facilmente extrair objetos detectados em uma imagem:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Aqui obtemos uma lista de objetos detectados na imagem, com uma caixa envolvendo o objeto e uma pontuaรงรฃo de confianรงa. Aqui estรก a imagem original ร esquerda, com as previsรตes exibidas ร direita:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Vocรช pode aprender mais sobre as tarefas suportadas pela API `pipeline` em [este tutorial](https://huggingface.co/docs/transformers/task_summary).
Alรฉm do `pipeline`, para baixar e usar qualquer um dos modelos prรฉ-treinados em sua tarefa especรญfica, tudo o que รฉ necessรกrio sรฃo trรชs linhas de cรณdigo. Aqui estรก a versรฃo em PyTorch:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
E aqui estรก o cรณdigo equivalente para TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
O tokenizador รฉ responsรกvel por todo o prรฉ-processamento que o modelo prรฉ-treinado espera, e pode ser chamado diretamente em uma รบnica string (como nos exemplos acima) ou em uma lista. Ele produzirรก um dicionรกrio que vocรช pode usar no cรณdigo subsequente ou simplesmente passar diretamente para o seu modelo usando o operador de descompactaรงรฃo de argumentos **.
O modelo em si รฉ um [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ou um [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(dependendo do seu back-end) que vocรช pode usar como de costume. [Este tutorial](https://huggingface.co/docs/transformers/training) explica como integrar esse modelo em um ciclo de treinamento clรกssico do PyTorch ou TensorFlow, ou como usar nossa API `Trainer` para ajuste fino rรกpido em um novo conjunto de dados.
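Como referência, segue um esboço mínimo de ajuste fino com a API `Trainer` (assumindo que a biblioteca `datasets` esteja instalada; o conjunto de dados, o modelo e os hiperparâmetros abaixo são apenas ilustrativos):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Modelo e dados apenas ilustrativos: classificação binária de sentimentos no IMDB.
checkpoint = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# Hiperparâmetros mínimos; ajuste-os ao seu caso de uso real.
args = TrainingArguments(output_dir="saida", num_train_epochs=1, per_device_train_batch_size=8)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```

Para um passo a passo completo, consulte o tutorial de treinamento citado acima.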
## Por que devo usar transformers?
1. Modelos state-of-the-art fรกceis de usar:
- Alto desempenho em compreensรฃo e geraรงรฃo de linguagem natural, visรฃo computacional e tarefas de รกudio.
- Barreira de entrada baixa para educadores e profissionais.
- Poucas abstraรงรตes visรญveis para o usuรกrio, com apenas trรชs classes para aprender.
- Uma API unificada para usar todos os nossos modelos prรฉ-treinados.
1. Menores custos de computaรงรฃo, menor pegada de carbono:
- Pesquisadores podem compartilhar modelos treinados em vez de treinar sempre do zero.
- Profissionais podem reduzir o tempo de computaรงรฃo e os custos de produรงรฃo.
- Dezenas de arquiteturas com mais de 60.000 modelos prรฉ-treinados em todas as modalidades.
1. Escolha o framework certo para cada parte da vida de um modelo:
- Treine modelos state-of-the-art em 3 linhas de cรณdigo.
- Mova um รบnico modelo entre frameworks TF2.0/PyTorch/JAX ร vontade.
- Escolha o framework certo de forma contรญnua para treinamento, avaliaรงรฃo e produรงรฃo.
1. Personalize facilmente um modelo ou um exemplo para atender ร s suas necessidades:
- Fornecemos exemplos para cada arquitetura para reproduzir os resultados publicados pelos autores originais.
- Os detalhes internos do modelo sรฃo expostos de maneira consistente.
- Os arquivos do modelo podem ser usados de forma independente da biblioteca para experimentos rรกpidos.
## Por que nรฃo devo usar transformers?
- Esta biblioteca nรฃo รฉ uma caixa de ferramentas modular para construir redes neurais. O cรณdigo nos arquivos do modelo nรฃo รฉ refatorado com abstraรงรตes adicionais de propรณsito, para que os pesquisadores possam iterar rapidamente em cada um dos modelos sem se aprofundar em abstraรงรตes/arquivos adicionais.
- A API de treinamento nรฃo รฉ projetada para funcionar com qualquer modelo, mas รฉ otimizada para funcionar com os modelos fornecidos pela biblioteca. Para loops de aprendizado de mรกquina genรฉricos, vocรช deve usar outra biblioteca (possivelmente, [Accelerate](https://huggingface.co/docs/accelerate)).
- Embora nos esforcemos para apresentar o maior nรบmero possรญvel de casos de uso, os scripts em nossa [pasta de exemplos](https://github.com/huggingface/transformers/tree/main/examples) sรฃo apenas isso: exemplos. ร esperado que eles nรฃo funcionem prontos para uso em seu problema especรญfico e que seja necessรกrio modificar algumas linhas de cรณdigo para adaptรก-los ร s suas necessidades.
## Instalação

### Com pip
Este repositรณrio รฉ testado no Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ e TensorFlow 2.6+.
Vocรช deve instalar o ๐ค Transformers em um [ambiente virtual](https://docs.python.org/3/library/venv.html). Se vocรช nรฃo estรก familiarizado com ambientes virtuais em Python, confira o [guia do usuรกrio](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
Primeiro, crie um ambiente virtual com a versรฃo do Python que vocรช vai usar e ative-o.
Em seguida, vocรช precisarรก instalar pelo menos um dos back-ends Flax, PyTorch ou TensorFlow.
Consulte a [pรกgina de instalaรงรฃo do TensorFlow](https://www.tensorflow.org/install/), a [pรกgina de instalaรงรฃo do PyTorch](https://pytorch.org/get-started/locally/#start-locally) e/ou [Flax](https://github.com/google/flax#quick-install) e [Jax](https://github.com/google/jax#installation) pรกginas de instalaรงรฃo para obter o comando de instalaรงรฃo especรญfico para a sua plataforma.
Quando um desses back-ends estiver instalado, o ๐ค Transformers pode ser instalado usando pip da seguinte forma:
```bash
pip install transformers
```
Se vocรช deseja experimentar com os exemplos ou precisa da versรฃo mais recente do cรณdigo e nรฃo pode esperar por um novo lanรงamento, vocรช deve instalar a [biblioteca a partir do cรณdigo-fonte](https://huggingface.co/docs/transformers/installation#installing-from-source).
### Com conda
O ๐ค Transformers pode ser instalado com conda da seguinte forma:
```bash
conda install conda-forge::transformers
```
> **_NOTA:_** Instalar `transformers` pelo canal `huggingface` estรก obsoleto.
Siga as páginas de instalação do Flax, PyTorch ou TensorFlow para ver como instalá-los com conda.
> **_NOTA:_** No Windows, vocรช pode ser solicitado a ativar o Modo de Desenvolvedor para aproveitar o cache. Se isso nรฃo for uma opรงรฃo para vocรช, por favor nos avise [neste problema](https://github.com/huggingface/huggingface_hub/issues/1062).
## Arquiteturas de Modelos
**[Todos os pontos de verificaรงรฃo de modelo](https://huggingface.co/models)** fornecidos pelo ๐ค Transformers sรฃo integrados de forma transparente do [model hub](https://huggingface.co/models) do huggingface.co, onde sรฃo carregados diretamente por [usuรกrios](https://huggingface.co/users) e [organizaรงรตes](https://huggingface.co/organizations).
Nรบmero atual de pontos de verificaรงรฃo: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers atualmente fornece as seguintes arquiteturas: veja [aqui](https://huggingface.co/docs/transformers/model_summary) para um resumo de alto nรญvel de cada uma delas.
Para verificar se cada modelo tem uma implementaรงรฃo em Flax, PyTorch ou TensorFlow, ou possui um tokenizador associado com a biblioteca ๐ค Tokenizers, consulte [esta tabela](https://huggingface.co/docs/transformers/index#supported-frameworks).
Essas implementaรงรตes foram testadas em vรกrios conjuntos de dados (veja os scripts de exemplo) e devem corresponder ao desempenho das implementaรงรตes originais. Vocรช pode encontrar mais detalhes sobre o desempenho na seรงรฃo de Exemplos da [documentaรงรฃo](https://github.com/huggingface/transformers/tree/main/examples).
## Saiba mais
| Seรงรฃo | Descriรงรฃo |
|-|-|
| [Documentaรงรฃo](https://huggingface.co/docs/transformers/) | Documentaรงรฃo completa da API e tutoriais |
| [Resumo de Tarefas](https://huggingface.co/docs/transformers/task_summary) | Tarefas suportadas pelo ๐ค Transformers |
| [Tutorial de Prรฉ-processamento](https://huggingface.co/docs/transformers/preprocessing) | Usando a classe `Tokenizer` para preparar dados para os modelos |
| [Treinamento e Ajuste Fino](https://huggingface.co/docs/transformers/training) | Usando os modelos fornecidos pelo ๐ค Transformers em um loop de treinamento PyTorch/TensorFlow e a API `Trainer` |
| [Tour Rรกpido: Scripts de Ajuste Fino/Utilizaรงรฃo](https://github.com/huggingface/transformers/tree/main/examples) | Scripts de exemplo para ajuste fino de modelos em uma ampla gama de tarefas |
| [Compartilhamento e Envio de Modelos](https://huggingface.co/docs/transformers/model_sharing) | Envie e compartilhe seus modelos ajustados com a comunidade |
## Citaรงรฃo
Agora temos um [artigo](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que vocรช pode citar para a biblioteca ๐ค Transformers:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/SECURITY.md | # Security Policy
## Hugging Face Hub, remote artefacts, and remote code
Transformers is open-source software that is tightly coupled to the Hugging Face Hub. While you have the ability to use it
offline with pre-downloaded model weights, it provides a very simple way to download, use, and manage models locally.
When downloading artefacts that have been uploaded by others on any platform, you expose yourself to risks. Please
read below for the security recommendations in order to keep your runtime and local environment safe.
### Remote artefacts
Models uploaded on the Hugging Face Hub come in different formats. We heavily recommend uploading and downloading
models in the [`safetensors`](https://github.com/huggingface/safetensors) format (which is the default prioritized
by the transformers library), as it was developed specifically to prevent arbitrary code execution on your system.
To avoid loading models from unsafe formats (e.g. [pickle](https://docs.python.org/3/library/pickle.html)), you should use the `use_safetensors` parameter. If doing so, in the event that no .safetensors file is present, transformers will error when loading the model.
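For example, a minimal way to opt in to this behaviour when loading a checkpoint (the model id below is just an example):

```python
from transformers import AutoModel

# Refuse any non-safetensors weights: loading errors out if no .safetensors file exists in the repo.
model = AutoModel.from_pretrained("google-bert/bert-base-uncased", use_safetensors=True)
```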
### Remote code
#### Modeling
Transformers supports many model architectures, but is also the bridge between your Python runtime and models that
are stored in model repositories on the Hugging Face Hub.
Models that rely on custom modeling code hosted on the Hub require the `trust_remote_code=True` parameter to be set when using them; please **always** verify
the content of the modeling files when using this argument. We recommend setting a revision in order to ensure you
protect yourself from updates on the repository.
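As a sketch combining both recommendations (the repository id below is hypothetical, and the revision should be a commit SHA you have actually reviewed):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/model-with-custom-code",  # hypothetical repository id
    trust_remote_code=True,             # only after reviewing the repo's modeling files
    revision="0123456789abcdef0123456789abcdef01234567",  # pin a reviewed commit SHA
)
```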
#### Tools
Through the `Agent` framework, remote tools can be downloaded to be used by the Agent. You are expected to specify these tools
yourself, but please keep in mind that their code will be run on your machine if the Agent chooses to run them.
Please inspect the code of the tools before passing them to the Agent to protect your runtime and local setup.
## Reporting a Vulnerability
๐ค Please feel free to submit vulnerability reports to our private bug bounty program at https://hackerone.com/hugging_face. You'll need to request access to the program by emailing [email protected].
Note that you'll need to be invited to our program, so send us a quick email at [email protected] if you've found a vulnerability.
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/conftest.py | # Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tests directory-specific settings - this file is run automatically
# by pytest before any tests are run
import doctest
import sys
import warnings
from os.path import abspath, dirname, join
import _pytest
import pytest
from transformers.testing_utils import HfDoctestModule, HfDocTestParser
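# Substrings matched against pytest node ids in `pytest_collection_modifyitems` below;
# any matching test is marked `not_device_test` because it always runs on CPU.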
NOT_DEVICE_TESTS = {
"test_tokenization",
"test_processor",
"test_processing",
"test_beam_constraints",
"test_configuration_utils",
"test_data_collator",
"test_trainer_callback",
"test_trainer_utils",
"test_feature_extraction",
"test_image_processing",
"test_image_processor",
"test_image_transforms",
"test_optimization",
"test_retrieval",
"test_config",
"test_from_pretrained_no_checkpoint",
"test_keep_in_fp32_modules",
"test_gradient_checkpointing_backward_compatibility",
"test_gradient_checkpointing_enable_disable",
"test_save_load_fast_init_from_base",
"test_fast_init_context_manager",
"test_fast_init_tied_embeddings",
"test_save_load_fast_init_to_base",
"test_torch_save_load",
"test_initialization",
"test_forward_signature",
"test_model_common_attributes",
"test_model_main_input_name",
"test_correct_missing_keys",
"test_tie_model_weights",
"test_can_use_safetensors",
"test_load_save_without_tied_weights",
"test_tied_weights_keys",
"test_model_weights_reload_no_missing_tied_weights",
"test_pt_tf_model_equivalence",
"test_mismatched_shapes_have_properly_initialized_weights",
"test_matched_shapes_have_loaded_weights_when_some_mismatched_shapes_exist",
"test_model_is_small",
"test_tf_from_pt_safetensors",
"test_flax_from_pt_safetensors",
"ModelTest::test_pipeline_", # None of the pipeline tests from PipelineTesterMixin (of which XxxModelTest inherits from) are running on device
"ModelTester::test_pipeline_",
"/repo_utils/",
"/utils/",
"/tools/",
}
# allow having multiple repository checkouts and not needing to remember to rerun
# `pip install -e '.[dev]'` when switching between checkouts and running tests.
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)
# silence FutureWarning warnings in tests since often we can't act on them until
# they become normal warnings - i.e. the tests still need to test the current functionality
warnings.simplefilter(action="ignore", category=FutureWarning)
def pytest_configure(config):
config.addinivalue_line(
"markers", "is_pt_tf_cross_test: mark test to run only when PT and TF interactions are tested"
)
config.addinivalue_line(
"markers", "is_pt_flax_cross_test: mark test to run only when PT and FLAX interactions are tested"
)
config.addinivalue_line("markers", "is_pipeline_test: mark test to run only when pipelines are tested")
config.addinivalue_line("markers", "is_staging_test: mark test to run only in the staging environment")
config.addinivalue_line("markers", "accelerate_tests: mark test that require accelerate")
config.addinivalue_line("markers", "tool_tests: mark the tool tests that are run on their specific schedule")
config.addinivalue_line("markers", "not_device_test: mark the tests always running on cpu")
def pytest_collection_modifyitems(items):
for item in items:
if any(test_name in item.nodeid for test_name in NOT_DEVICE_TESTS):
item.add_marker(pytest.mark.not_device_test)
def pytest_addoption(parser):
from transformers.testing_utils import pytest_addoption_shared
pytest_addoption_shared(parser)
def pytest_terminal_summary(terminalreporter):
from transformers.testing_utils import pytest_terminal_summary_main
make_reports = terminalreporter.config.getoption("--make-reports")
if make_reports:
pytest_terminal_summary_main(terminalreporter, id=make_reports)
def pytest_sessionfinish(session, exitstatus):
    # If no tests are collected, pytest exits with code 5, which makes the CI fail.
if exitstatus == 5:
session.exitstatus = 0
# Doctest custom flag to ignore output.
IGNORE_RESULT = doctest.register_optionflag("IGNORE_RESULT")
OutputChecker = doctest.OutputChecker
class CustomOutputChecker(OutputChecker):
def check_output(self, want, got, optionflags):
if IGNORE_RESULT & optionflags:
return True
return OutputChecker.check_output(self, want, got, optionflags)
doctest.OutputChecker = CustomOutputChecker
_pytest.doctest.DoctestModule = HfDoctestModule
doctest.DocTestParser = HfDocTestParser
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/.coveragerc | [run]
source=transformers
omit =
    # skip conversion scripts from testing for now
*/convert_*
*/__main__.py
[report]
exclude_lines =
pragma: no cover
raise
except
register_parameter | 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_vi.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<b>Tiแบฟng viแปt</b> |
</p>
</h4>
<h3 align="center">
<p>Cรดng nghแป Hแปc mรกy tiรชn tiแบฟn cho JAX, PyTorch vร TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers cung cแบฅp hร ng ngร n mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc ฤแป thแปฑc hiแปn cรกc nhiแปm vแปฅ trรชn cรกc modalities khรกc nhau nhฦฐ vฤn bแบฃn, hรฌnh แบฃnh vร รขm thanh.
Cรกc mรด hรฌnh nร y cรณ thแป ฤฦฐแปฃc รกp dแปฅng vร o:
* ๐ Vฤn bแบฃn, cho cรกc nhiแปm vแปฅ nhฦฐ phรขn loแบกi vฤn bแบฃn, trรญch xuแบฅt thรดng tin, trแบฃ lแปi cรขu hแปi, tรณm tแบฏt, dแปch thuแบญt vร sinh vฤn bแบฃn, trong hฦกn 100 ngรดn ngแปฏ.
* ๐ผ๏ธ Hรฌnh แบฃnh, cho cรกc nhiแปm vแปฅ nhฦฐ phรขn loแบกi hรฌnh แบฃnh, nhแบญn diแปn ฤแปi tฦฐแปฃng vร phรขn ฤoแบกn.
* ๐ฃ๏ธ รm thanh, cho cรกc nhiแปm vแปฅ nhฦฐ nhแบญn dแบกng giแปng nรณi vร phรขn loแบกi รขm thanh.
Cรกc mรด hรฌnh Transformer cลฉng cรณ thแป thแปฑc hiแปn cรกc nhiแปm vแปฅ trรชn **nhiแปu modalities kแบฟt hแปฃp**, nhฦฐ trแบฃ lแปi cรขu hแปi vแป bแบฃng, nhแบญn dแบกng kรฝ tแปฑ quang hแปc, trรญch xuแบฅt thรดng tin tแปซ tร i liแปu quรฉt, phรขn loแบกi video vร trแบฃ lแปi cรขu hแปi hรฌnh แบฃnh.
๐ค Transformers cung cแบฅp cรกc API ฤแป tแบฃi xuแปng vร sแปญ dแปฅng nhanh chรณng cรกc mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc ฤรณ trรชn vฤn bแบฃn cแปฅ thแป, ฤiแปu chแปnh chรบng trรชn tแบญp dแปฏ liแปu cแปงa riรชng bแบกn vร sau ฤรณ chia sแบป chรบng vแปi cแปng ฤแปng trรชn [model hub](https://huggingface.co/models) cแปงa chรบng tรดi. ฤแปng thแปi, mแปi module python xรกc ฤแปnh mแปt kiแบฟn trรบc lร hoร n toร n ฤแปc lแบญp vร cรณ thแป ฤฦฐแปฃc sแปญa ฤแปi ฤแป cho phรฉp thแปฑc hiแปn nhanh cรกc thรญ nghiแปm nghiรชn cแปฉu.
๐ค Transformers ฤฦฐแปฃc hแป trแปฃ bแปi ba thฦฐ viแปn hแปc sรขu phแป biแบฟn nhแบฅt โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) vร [TensorFlow](https://www.tensorflow.org/) โ vแปi tรญch hแปฃp mฦฐแปฃt mร giแปฏa chรบng. Viแปc huแบฅn luyแปn mรด hรฌnh cแปงa bแบกn vแปi mแปt thฦฐ viแปn trฦฐแปc khi tแบฃi chรบng ฤแป sแปญ dแปฅng trong suy luแบญn vแปi thฦฐ viแปn khรกc lร rแบฅt dแป
dร ng.
## Cรกc demo trแปฑc tuyแบฟn
Bแบกn cรณ thแป kiแปm tra hแบงu hแบฟt cรกc mรด hรฌnh cแปงa chรบng tรดi trแปฑc tiแบฟp trรชn trang cแปงa chรบng tแปซ [model hub](https://huggingface.co/models). Chรบng tรดi cลฉng cung cแบฅp [dแปch vแปฅ lฦฐu trแปฏ mรด hรฌnh riรชng tฦฐ, phiรชn bแบฃn vร API suy luแบญn](https://huggingface.co/pricing) cho cรกc mรด hรฌnh cรดng khai vร riรชng tฦฐ.
Dฦฐแปi ฤรขy lร mแปt sแป vรญ dแปฅ:
Trong Xแปญ lรฝ Ngรดn ngแปฏ Tแปฑ nhiรชn:
- [Hoร n thร nh tแปซ vแปฅng vแป tแปซ vแปi BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Nhแบญn dแบกng thแปฑc thแป ฤแบทt tรชn vแปi Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Tแบกo vฤn bแบฃn tแปฑ nhiรชn vแปi Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [Suy luแบญn Ngรดn ngแปฏ Tแปฑ nhiรชn vแปi RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Tรณm tแบฏt vฤn bแบฃn vแปi BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Trแบฃ lแปi cรขu hแปi vแปi DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Dแปch vฤn bแบฃn vแปi T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
Trong Thแป giรกc Mรกy tรญnh:
- [Phรขn loแบกi hรฌnh แบฃnh vแปi ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Phรกt hiแปn ฤแปi tฦฐแปฃng vแปi DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Phรขn ฤoแบกn ngแปฏ nghฤฉa vแปi SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Phรขn ฤoแบกn toร n diแปn vแปi Mask2Former](https://huggingface.co/facebook/mask2former-swin-large-coco-panoptic)
- [ฦฏแปc lฦฐแปฃng ฤแป sรขu vแปi Depth Anything](https://huggingface.co/docs/transformers/main/model_doc/depth_anything)
- [Phรขn loแบกi video vแปi VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Phรขn ฤoแบกn toร n cแบงu vแปi OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
Trong รขm thanh:
- [Nhแบญn dแบกng giแปng nรณi tแปฑ ฤแปng vแปi Whisper](https://huggingface.co/openai/whisper-large-v3)
- [Phรกt hiแปn tแปซ khรณa vแปi Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Phรขn loแบกi รขm thanh vแปi Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
Trong cรกc nhiแปm vแปฅ ฤa phฦฐฦกng thแปฉc:
- [Trแบฃ lแปi cรขu hแปi vแป bแบฃng vแปi TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Trแบฃ lแปi cรขu hแปi hรฌnh แบฃnh vแปi ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Mรด tแบฃ hรฌnh แบฃnh vแปi LLaVa](https://huggingface.co/llava-hf/llava-1.5-7b-hf)
- [Phรขn loแบกi hรฌnh แบฃnh khรดng cแบงn nhรฃn vแปi SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384)
- [Trแบฃ lแปi cรขu hแปi vฤn bแบฃn tร i liแปu vแปi LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Phรขn loแบกi video khรดng cแบงn nhรฃn vแปi X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
- [Phรกt hiแปn ฤแปi tฦฐแปฃng khรดng cแบงn nhรฃn vแปi OWLv2](https://huggingface.co/docs/transformers/en/model_doc/owlv2)
- [Phรขn ฤoแบกn hรฌnh แบฃnh khรดng cแบงn nhรฃn vแปi CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)
- [Tแบกo mแบทt nแบก tแปฑ ฤแปng vแปi SAM](https://huggingface.co/docs/transformers/model_doc/sam)
## 100 dแปฑ รกn sแปญ dแปฅng Transformers
Transformers khรดng chแป lร mแปt bแป cรดng cแปฅ ฤแป sแปญ dแปฅng cรกc mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc: ฤรณ lร mแปt cแปng ฤแปng cรกc dแปฑ รกn xรขy dแปฑng xung quanh nรณ vร Hugging Face Hub. Chรบng tรดi muแปn Transformers giรบp cรกc nhร phรกt triแปn, nhร nghiรชn cแปฉu, sinh viรชn, giรกo sฦฐ, kแปน sฦฐ vร bแบฅt kแปณ ai khรกc xรขy dแปฑng nhแปฏng dแปฑ รกn mฦก ฦฐแปc cแปงa hแป.
ฤแป kแปท niแปm 100.000 sao cแปงa transformers, chรบng tรดi ฤรฃ quyแบฟt ฤแปnh tแบญp trung vร o cแปng ฤแปng vร tแบกo ra trang [awesome-transformers](./awesome-transformers.md) liแปt kรช 100 dแปฑ รกn tuyแปt vแปi ฤฦฐแปฃc xรขy dแปฑng xung quanh transformers.
Nแบฟu bแบกn sแป hแปฏu hoแบทc sแปญ dแปฅng mแปt dแปฑ รกn mร bแบกn tin rแบฑng nรชn ฤฦฐแปฃc thรชm vร o danh sรกch, vui lรฒng mแป mแปt PR ฤแป thรชm nรณ!
## Nแบฟu bแบกn ฤang tรฌm kiแบฟm hแป trแปฃ tรนy chแปnh tแปซ ฤแปi ngลฉ Hugging Face
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Hร nh trรฌnh nhanh
ฤแป ngay lแบญp tแปฉc sแปญ dแปฅng mแปt mรด hรฌnh trรชn mแปt ฤแบงu vร o cแปฅ thแป (vฤn bแบฃn, hรฌnh แบฃnh, รขm thanh, ...), chรบng tรดi cung cแบฅp API `pipeline`. Pipelines nhรณm mแปt mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc vแปi quรก trรฌnh tiแปn xแปญ lรฝ ฤรฃ ฤฦฐแปฃc sแปญ dแปฅng trong quรก trรฌnh huแบฅn luyแปn cแปงa mรด hรฌnh ฤรณ. Dฦฐแปi ฤรขy lร cรกch sแปญ dแปฅng nhanh mแปt pipeline ฤแป phรขn loแบกi vฤn bแบฃn tรญch cแปฑc so vแปi tiรชu cแปฑc:
```python
>>> from transformers import pipeline
# Cแบฅp phรกt mแปt pipeline cho phรขn tรญch cแบฃm xรบc
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
Dรฒng code thแปฉ hai tแบฃi xuแปng vร lฦฐu trแปฏ bแป mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn ฤฦฐแปฃc sแปญ dแปฅng bแปi pipeline, trong khi dรฒng thแปฉ ba ฤรกnh giรก nรณ trรชn vฤn bแบฃn ฤรฃ cho. แป ฤรขy, cรขu trแบฃ lแปi lร "tรญch cแปฑc" vแปi ฤแป tin cแบญy lร 99,97%.
Nhiแปu nhiแปm vแปฅ cรณ sแบตn mแปt `pipeline` ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc, trong NLP nhฦฐng cลฉng trong thแป giรกc mรกy tรญnh vร giแปng nรณi. Vรญ dแปฅ, chรบng ta cรณ thแป dแป
dร ng trรญch xuแบฅt cรกc ฤแปi tฦฐแปฃng ฤฦฐแปฃc phรกt hiแปn trong mแปt hรฌnh แบฃnh:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Tải xuống một hình ảnh với những con mèo dễ thương
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Cแบฅp phรกt mแปt pipeline cho phรกt hiแปn ฤแปi tฦฐแปฃng
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
แป ฤรขy, chรบng ta nhแบญn ฤฦฐแปฃc mแปt danh sรกch cรกc ฤแปi tฦฐแปฃng ฤฦฐแปฃc phรกt hiแปn trong hรฌnh แบฃnh, vแปi mแปt hแปp bao quanh ฤแปi tฦฐแปฃng vร mแปt ฤiแปm ฤรกnh giรก ฤแป tin cแบญy. ฤรขy lร hรฌnh แบฃnh gแปc แป bรชn trรกi, vแปi cรกc dแปฑ ฤoรกn hiแปn thแป แป bรชn phแบฃi:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Bแบกn cรณ thแป tรฌm hiแปu thรชm vแป cรกc nhiแปm vแปฅ ฤฦฐแปฃc hแป trแปฃ bแปi API `pipeline` trong [hฦฐแปng dแบซn nร y](https://huggingface.co/docs/transformers/task_summary).
Ngoร i `pipeline`, ฤแป tแบฃi xuแปng vร sแปญ dแปฅng bแบฅt kแปณ mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc nร o cho nhiแปm vแปฅ cแปฅ thแป cแปงa bแบกn, chแป cแบงn ba dรฒng code. ฤรขy lร phiรชn bแบฃn PyTorch:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
Vร ฤรขy lร mรฃ tฦฐฦกng ฤฦฐฦกng cho TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
Tokenizer lร thร nh phแบงn chแปu trรกch nhiแปm cho viแปc tiแปn xแปญ lรฝ mร mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc mong ฤแปฃi vร cรณ thแป ฤฦฐแปฃc gแปi trแปฑc tiแบฟp trรชn mแปt chuแปi ฤฦกn (nhฦฐ trong cรกc vรญ dแปฅ trรชn) hoแบทc mแปt danh sรกch. Nรณ sแบฝ xuแบฅt ra mแปt tแปซ ฤiแปn mร bแบกn cรณ thแป sแปญ dแปฅng trong mรฃ phแปฅ thuแปc hoแบทc ฤฦกn giแบฃn lร truyแปn trแปฑc tiแบฟp cho mรด hรฌnh cแปงa bแบกn bแบฑng cรกch sแปญ dแปฅng toรกn tแปญ ** ฤแป giแบฃi nรฉn ฤแปi sแป.
Chรญnh mรด hรฌnh lร mแปt [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) thรดng thฦฐแปng hoแบทc mแปt [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (tรนy thuแปc vร o backend cแปงa bแบกn) mร bแบกn cรณ thแป sแปญ dแปฅng nhฦฐ bรฌnh thฦฐแปng. [Hฦฐแปng dแบซn nร y](https://huggingface.co/docs/transformers/training) giแบฃi thรญch cรกch tรญch hแปฃp mแปt mรด hรฌnh nhฦฐ vแบญy vร o mแปt vรฒng lแบทp huแบฅn luyแปn cแป ฤiแปn PyTorch hoแบทc TensorFlow, hoแบทc cรกch sแปญ dแปฅng API `Trainer` cแปงa chรบng tรดi ฤแป tinh chแปnh nhanh chรณng trรชn mแปt bแป dแปฏ liแปu mแปi.
## Tแบกi sao tรดi nรชn sแปญ dแปฅng transformers?
1. Cรกc mรด hรฌnh tiรชn tiแบฟn dแป
sแปญ dแปฅng:
- Hiแปu suแบฅt cao trong viแปc hiแปu vร tแบกo ra ngรดn ngแปฏ tแปฑ nhiรชn, thแป giรกc mรกy tรญnh vร รขm thanh.
- Ngฦฐแปกng vร o thแบฅp cho giแบฃng viรชn vร ngฦฐแปi thแปฑc hร nh.
- รt trแปซu tฦฐแปฃng dร nh cho ngฦฐแปi dรนng vแปi chแป ba lแปp hแปc.
- Mแปt API thแปng nhแบฅt ฤแป sแปญ dแปฅng tแบฅt cแบฃ cรกc mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc cแปงa chรบng tรดi.
2. Giแบฃm chi phรญ tรญnh toรกn, lร m giแบฃm lฦฐแปฃng khรญ thแบฃi carbon:
- Cรกc nhร nghiรชn cแปฉu cรณ thแป chia sแบป cรกc mรด hรฌnh ฤรฃ ฤฦฐแปฃc huแบฅn luyแปn thay vรฌ luรดn luรดn huแบฅn luyแปn lแบกi.
- Ngฦฐแปi thแปฑc hร nh cรณ thแป giแบฃm thแปi gian tรญnh toรกn vร chi phรญ sแบฃn xuแบฅt.
- Hร ng chแปฅc kiแบฟn trรบc vแปi hฦกn 400.000 mรด hรฌnh ฤฦฐแปฃc huแบฅn luyแปn trฦฐแปc trรชn tแบฅt cแบฃ cรกc phฦฐฦกng phรกp.
3. Lแปฑa chแปn framework phรน hแปฃp cho mแปi giai ฤoแบกn cแปงa mรด hรฌnh:
- Huแบฅn luyแปn cรกc mรด hรฌnh tiรชn tiแบฟn chแป trong 3 dรฒng code.
- Di chuyแปn mแปt mรด hรฌnh duy nhแบฅt giแปฏa cรกc framework TF2.0/PyTorch/JAX theo รฝ muแปn.
- Dแป
dร ng chแปn framework phรน hแปฃp cho huแบฅn luyแปn, ฤรกnh giรก vร sแบฃn xuแบฅt.
4. Dแป
dร ng tรนy chแปnh mแปt mรด hรฌnh hoแบทc mแปt vรญ dแปฅ theo nhu cแบงu cแปงa bแบกn:
- Chรบng tรดi cung cแบฅp cรกc vรญ dแปฅ cho mแปi kiแบฟn trรบc ฤแป tรกi tแบกo kแบฟt quแบฃ ฤฦฐแปฃc cรดng bแป bแปi cรกc tรกc giแบฃ gแปc.
- Cรกc thร nh phแบงn nแปi tแบกi cแปงa mรด hรฌnh ฤฦฐแปฃc tiแบฟt lแป mแปt cรกch nhแบฅt quรกn nhแบฅt cรณ thแป.
- Cรกc tแปp mรด hรฌnh cรณ thแป ฤฦฐแปฃc sแปญ dแปฅng ฤแปc lแบญp vแปi thฦฐ viแปn ฤแป thแปฑc hiแปn cรกc thแปญ nghiแปm nhanh chรณng.
## Tแบกi sao tรดi khรดng nรชn sแปญ dแปฅng transformers?
- Thฦฐ viแปn nร y khรดng phแบฃi lร mแปt bแป cรดng cแปฅ modul cho cรกc khแปi xรขy dแปฑng mแบกng neural. Mรฃ trong cรกc tแปp mรด hรฌnh khรดng ฤฦฐแปฃc tรกi cแบฅu trรบc vแปi cรกc trแปซu tฦฐแปฃng bแป sung mแปt cรกch cแป รฝ, ฤแป cรกc nhร nghiรชn cแปฉu cรณ thแป lแบทp nhanh trรชn tแปซng mรด hรฌnh mร khรดng cแบงn ฤร o sรขu vร o cรกc trแปซu tฦฐแปฃng/tแปp bแป sung.
- API huแบฅn luyแปn khรดng ฤฦฐแปฃc thiแบฟt kแบฟ ฤแป hoแบกt ฤแปng trรชn bแบฅt kแปณ mรด hรฌnh nร o, mร ฤฦฐแปฃc tแปi ฦฐu hรณa ฤแป hoแบกt ฤแปng vแปi cรกc mรด hรฌnh ฤฦฐแปฃc cung cแบฅp bแปi thฦฐ viแปn. ฤแปi vแปi vรฒng lแบทp hแปc mรกy chung, bแบกn nรชn sแปญ dแปฅng mแปt thฦฐ viแปn khรกc (cรณ thแป lร [Accelerate](https://huggingface.co/docs/accelerate)).
- Mแบทc dรน chรบng tรดi cแป gแบฏng trรฌnh bร y cร ng nhiแปu trฦฐแปng hแปฃp sแปญ dแปฅng cร ng tแปt, nhฦฐng cรกc tแบญp lแปnh trong thฦฐ mแปฅc [examples](https://github.com/huggingface/transformers/tree/main/examples) chแป lร vรญ dแปฅ. Dแปฑ kiแบฟn rแบฑng chรบng sแบฝ khรดng hoแบกt ฤแปng ngay tแปฉc khแบฏc trรชn vแบฅn ฤแป cแปฅ thแป cแปงa bแบกn vร bแบกn sแบฝ phแบฃi thay ฤแปi mแปt sแป dรฒng mรฃ ฤแป thรญch nghi vแปi nhu cแบงu cแปงa bแบกn.
## Cร i ฤแบทt
### Sแปญ dแปฅng pip
Thฦฐ viแปn nร y ฤฦฐแปฃc kiแปm tra trรชn Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ vร TensorFlow 2.6+.
Bแบกn nรชn cร i ฤแบทt ๐ค Transformers trong mแปt [mรดi trฦฐแปng แบฃo Python](https://docs.python.org/3/library/venv.html). Nแบฟu bแบกn chฦฐa quen vแปi mรดi trฦฐแปng แบฃo Python, hรฃy xem [hฦฐแปng dแบซn sแปญ dแปฅng](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
Trฦฐแปc tiรชn, tแบกo mแปt mรดi trฦฐแปng แบฃo vแปi phiรชn bแบฃn Python bแบกn sแบฝ sแปญ dแปฅng vร kรญch hoแบกt nรณ.
Sau ฤรณ, bแบกn sแบฝ cแบงn cร i ฤแบทt รญt nhแบฅt mแปt trong sแป cรกc framework Flax, PyTorch hoแบทc TensorFlow.
Vui lรฒng tham khแบฃo [trang cร i ฤแบทt TensorFlow](https://www.tensorflow.org/install/), [trang cร i ฤแบทt PyTorch](https://pytorch.org/get-started/locally/#start-locally) vร /hoแบทc [Flax](https://github.com/google/flax#quick-install) vร [Jax](https://github.com/google/jax#installation) ฤแป biแบฟt lแปnh cร i ฤแบทt cแปฅ thแป cho nแปn tแบฃng cแปงa bแบกn.
Khi ฤรฃ cร i ฤแบทt mแปt trong cรกc backend ฤรณ, ๐ค Transformers cรณ thแป ฤฦฐแปฃc cร i ฤแบทt bแบฑng pip nhฦฐ sau:
```bash
pip install transformers
```
Nแบฟu bแบกn muแปn thแปฑc hiแปn cรกc vรญ dแปฅ hoแบทc cแบงn phiรชn bแบฃn mแปi nhแบฅt cแปงa mรฃ vร khรดng thแป chแป ฤแปฃi cho mแปt phiรชn bแบฃn mแปi, bแบกn phแบฃi [cร i ฤแบทt thฦฐ viแปn tแปซ nguแปn](https://huggingface.co/docs/transformers/installation#installing-from-source).
### Vแปi conda
๐ค Transformers cรณ thแป ฤฦฐแปฃc cร i ฤแบทt bแบฑng conda nhฦฐ sau:
```shell script
conda install conda-forge::transformers
```
> **_GHI CHร:_** Cร i ฤแบทt `transformers` tแปซ kรชnh `huggingface` ฤรฃ bแป lแปi thแปi.
Hรฃy lร m theo trang cร i ฤแบทt cแปงa Flax, PyTorch hoแบทc TensorFlow ฤแป xem cรกch cร i ฤแบทt chรบng bแบฑng conda.
> **_GHI CHร:_** Trรชn Windows, bแบกn cรณ thแป ฤฦฐแปฃc yรชu cแบงu kรญch hoแบกt Chแบฟ ฤแป phรกt triแปn ฤแป tแบญn dแปฅng viแปc lฦฐu cache. Nแบฟu ฤiแปu nร y khรดng phแบฃi lร mแปt lแปฑa chแปn cho bแบกn, hรฃy cho chรบng tรดi biแบฟt trong [vแบฅn ฤแป nร y](https://github.com/huggingface/huggingface_hub/issues/1062).
## Kiแบฟn trรบc mรด hรฌnh
**[Tแบฅt cแบฃ cรกc ฤiแปm kiแปm tra mรด hรฌnh](https://huggingface.co/models)** ฤฦฐแปฃc cung cแบฅp bแปi ๐ค Transformers ฤฦฐแปฃc tรญch hแปฃp mแปt cรกch mฦฐแปฃt mร tแปซ trung tรขm mรด hรฌnh huggingface.co [model hub](https://huggingface.co/models), nฦกi chรบng ฤฦฐแปฃc tแบฃi lรชn trแปฑc tiแบฟp bแปi [ngฦฐแปi dรนng](https://huggingface.co/users) vร [tแป chแปฉc](https://huggingface.co/organizations).
Sแป lฦฐแปฃng ฤiแปm kiแปm tra hiแปn tแบกi: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers hiแปn ฤang cung cแบฅp cรกc kiแบฟn trรบc sau ฤรขy: xem [แป ฤรขy](https://huggingface.co/docs/transformers/model_summary) ฤแป cรณ mแปt tรณm tแบฏt tแปng quan vแป mแปi kiแบฟn trรบc.
ฤแป kiแปm tra xem mแปi mรด hรฌnh cรณ mแปt phiรชn bแบฃn thแปฑc hiแปn trong Flax, PyTorch hoแบทc TensorFlow, hoแบทc cรณ mแปt tokenizer liรชn quan ฤฦฐแปฃc hแป trแปฃ bแปi thฦฐ viแปn ๐ค Tokenizers, vui lรฒng tham khแบฃo [bแบฃng nร y](https://huggingface.co/docs/transformers/index#supported-frameworks).
Nhแปฏng phiรชn bแบฃn nร y ฤรฃ ฤฦฐแปฃc kiแปm tra trรชn mแปt sแป tแบญp dแปฏ liแปu (xem cรกc tแบญp lแปnh vรญ dแปฅ) vร nรชn tฦฐฦกng ฤฦฐฦกng vแปi hiแปu suแบฅt cแปงa cรกc phiรชn bแบฃn gแปc. Bแบกn cรณ thแป tรฌm thแบฅy thรชm thรดng tin vแป hiแปu suแบฅt trong phแบงn Vรญ dแปฅ cแปงa [tร i liแปu](https://github.com/huggingface/transformers/tree/main/examples).
## Tรฌm hiแปu thรชm
| Phแบงn | Mรด tแบฃ |
|-|-|
| [Tร i liแปu](https://huggingface.co/docs/transformers/) | Toร n bแป tร i liแปu API vร hฦฐแปng dแบซn |
| [Tรณm tแบฏt nhiแปm vแปฅ](https://huggingface.co/docs/transformers/task_summary) | Cรกc nhiแปm vแปฅ ฤฦฐแปฃc hแป trแปฃ bแปi ๐ค Transformers |
| [Hฦฐแปng dแบซn tiแปn xแปญ lรฝ](https://huggingface.co/docs/transformers/preprocessing) | Sแปญ dแปฅng lแปp `Tokenizer` ฤแป chuแบฉn bแป dแปฏ liแปu cho cรกc mรด hรฌnh |
| [Huแบฅn luyแปn vร ฤiแปu chแปnh](https://huggingface.co/docs/transformers/training) | Sแปญ dแปฅng cรกc mรด hรฌnh ฤฦฐแปฃc cung cแบฅp bแปi ๐ค Transformers trong vรฒng lแบทp huแบฅn luyแปn PyTorch/TensorFlow vร API `Trainer` |
| [Hฦฐแปng dแบซn nhanh: ฤiแปu chแปnh/sแปญ dแปฅng cรกc kแปch bแบฃn](https://github.com/huggingface/transformers/tree/main/examples) | Cรกc kแปch bแบฃn vรญ dแปฅ ฤแป ฤiแปu chแปnh mรด hรฌnh trรชn nhiแปu nhiแปm vแปฅ khรกc nhau |
| [Chia sแบป vร tแบฃi lรชn mรด hรฌnh](https://huggingface.co/docs/transformers/model_sharing) | Tแบฃi lรชn vร chia sแบป cรกc mรด hรฌnh ฤรฃ ฤiแปu chแปnh cแปงa bแบกn vแปi cแปng ฤแปng |
## Trรญch dแบซn
Bรขy giแป chรบng ta cรณ mแปt [bร i bรกo](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) mร bแบกn cรณ thแป trรญch dแบซn cho thฦฐ viแปn ๐ค Transformers:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_ja.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!---
A useful guide for English-Traditional Japanese translation of Hugging Face documentation
- Use square quotes, e.g.,ใๅผ็จใ
Dictionary
API: API(็ฟป่จณใใชใ)
add: ่ฟฝๅ
checkpoint: ใใงใใฏใใคใณใ
code: ใณใผใ
community: ใณใใฅใใใฃ
confidence: ไฟก้ ผๅบฆ
dataset: ใใผใฟใปใใ
documentation: ใใญใฅใกใณใ
example: ไพ
finetune: ๅพฎ่ชฟๆด
Hugging Face: Hugging Face(็ฟป่จณใใชใ)
implementation: ๅฎ่ฃ
inference: ๆจ่ซ
library: ใฉใคใใฉใช
module: ใขใธใฅใผใซ
NLP/Natural Language Processing: NLPใจ่กจ็คบใใใๅ ดๅใฏ็ฟป่จณใใใใNatural Language Processingใจ่กจ็คบใใใๅ ดๅใฏ็ฟป่จณใใใ
online demos: ใชใณใฉใคใณใใข
pipeline: pipeline(็ฟป่จณใใชใ)
pretrained/pretrain: ๅญฆ็ฟๆธใฟ
Python data structures (e.g., list, set, dict): ใชในใใใปใใใใใฃใฏใทใงใใชใจ่จณใใใๆฌๅผงๅ
ใฏๅๆ่ฑ่ช
repository: repository(็ฟป่จณใใชใ)
summary: ๆฆ่ฆ
token-: token-(็ฟป่จณใใชใ)
Trainer: Trainer(็ฟป่จณใใชใ)
transformer: transformer(็ฟป่จณใใชใ)
tutorial: ใใฅใผใใชใขใซ
user: ใฆใผใถ
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<b>ๆฅๆฌ่ช</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>JAXใPyTorchใTensorFlowใฎใใใฎๆๅ
็ซฏๆฉๆขฐๅญฆ็ฟ</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐คTransformersใฏใใใญในใใ่ฆ่ฆใ้ณๅฃฐใชใฉใฎ็ฐใชใใขใใชใใฃใซๅฏพใใฆใฟในใฏใๅฎ่กใใใใใซใไบๅใซๅญฆ็ฟใใใๆฐๅใฎใขใใซใๆไพใใพใใ
ใใใใฎใขใใซใฏๆฌกใฎใใใชๅ ดๅใซ้ฉ็จใงใใพใ:
* ๐ ใใญในใใฏใใใญในใใฎๅ้กใๆ
ๅ ฑๆฝๅบใ่ณชๅๅฟ็ญใ่ฆ็ดใ็ฟป่จณใใใญในใ็ๆใชใฉใฎใฟในใฏใฎใใใซใ100ไปฅไธใฎ่จ่ชใซๅฏพๅฟใใฆใใพใใ
* ๐ผ๏ธ ็ปๅๅ้กใ็ฉไฝๆคๅบใใปใฐใกใณใใผใทใงใณใชใฉใฎใฟในใฏใฎใใใฎ็ปๅใ
* ๐ฃ๏ธ ้ณๅฃฐใฏใ้ณๅฃฐ่ช่ญใ้ณๅฃฐๅ้กใชใฉใฎใฟในใฏใซไฝฟ็จใใพใใ
ใใฉใณในใใฉใผใใผใขใใซใฏใใใผใใซ่ณชๅๅฟ็ญใๅ
ๅญฆๆๅญ่ช่ญใในใญใฃใณๆๆธใใใฎๆ
ๅ ฑๆฝๅบใใใใชๅ้กใ่ฆ่ฆ็่ณชๅๅฟ็ญใชใฉใ**่คๆฐใฎใขใใชใใฃใ็ตใฟๅใใใ**ใฟในใฏใๅฎ่กๅฏ่ฝใงใใ
๐คTransformersใฏใไธใใใใใใญในใใซๅฏพใใฆใใใใฎไบๅๅญฆ็ฟใใใใขใใซใ็ด ๆฉใใใฆใณใญใผใใใฆไฝฟ็จใใใใชใ่ช่บซใฎใใผใฟใปใใใงใใใใๅพฎ่ชฟๆดใใ็งใใกใฎ[model hub](https://huggingface.co/models)ใงใณใใฅใใใฃใจๅ
ฑๆใใใใใฎAPIใๆไพใใพใใๅๆใซใใขใผใญใใฏใใฃใๅฎ็พฉใใๅPythonใขใธใฅใผใซใฏๅฎๅ
จใซในใฟใณใใขใญใณใงใใใ่ฟ
้ใช็ ็ฉถๅฎ้จใๅฏ่ฝใซใใใใใซๅคๆดใใใใจใใงใใพใใ
๐คTransformersใฏ[Jax](https://jax.readthedocs.io/en/latest/)ใ[PyTorch](https://pytorch.org/)ใ[TensorFlow](https://www.tensorflow.org/)ใจใใ3ๅคงใใฃใผใใฉใผใใณใฐใฉใคใใฉใชใผใซๆฏใใใใใใใใใฎใฉใคใใฉใชใใทใผใ ใฌในใซ็ตฑๅใใฆใใพใใ็ๆนใงใขใใซใๅญฆ็ฟใใฆใใใใใ็ๆนใงๆจ่ซ็จใซใญใผใใใใฎใฏ็ฐกๅใชใใจใงใใ
## ใชใณใฉใคใณใใข
[model hub](https://huggingface.co/models)ใใใใปใจใใฉใฎใขใใซใฎใใผใธใง็ดๆฅใในใใใใใจใใงใใพใใใพใใใใใชใใฏใขใใซใใใฉใคใใผใใขใใซใซๅฏพใใฆใ[ใใฉใคใใผใใขใใซใฎใในใใฃใณใฐใใใผใธใงใใณใฐใๆจ่ซAPI](https://huggingface.co/pricing)ใๆไพใใฆใใพใใ
ไปฅไธใฏใใฎไธไพใงใ:
่ช็ถ่จ่ชๅฆ็ใซใฆ:
- [BERTใซใใใในใฏใใฏใผใ่ฃๅฎ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Electraใซใใๅๅๅฎไฝ่ช่ญ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [GPT-2ใซใใใใญในใ็ๆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [RoBERTaใซใใ่ช็ถ่จ่ชๆจ่ซ](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [BARTใซใใ่ฆ็ด](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [DistilBERTใซใใ่ณชๅๅฟ็ญ](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [T5ใซใใ็ฟป่จณ](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
ใณใณใใฅใผใฟใใธใงใณใซใฆ:
- [ViTใซใใ็ปๅๅ้ก](https://huggingface.co/google/vit-base-patch16-224)
- [DETRใซใใ็ฉไฝๆคๅบ](https://huggingface.co/facebook/detr-resnet-50)
- [SegFormerใซใใใปใใณใใฃใใฏใปใฐใกใณใใผใทใงใณ](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [DETRใซใใใใใใใฃใใฏใปใฐใกใณใใผใทใงใณ](https://huggingface.co/facebook/detr-resnet-50-panoptic)
ใชใผใใฃใชใซใฆ:
- [Wav2Vec2ใซใใ่ชๅ้ณๅฃฐ่ช่ญ](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Wav2Vec2ใซใใใญใผใฏใผใๆค็ดข](https://huggingface.co/superb/wav2vec2-base-superb-ks)
ใใซใใขใผใใซใชใฟในใฏใซใฆ:
- [ViLTใซใใ่ฆ่ฆ็่ณชๅๅฟ็ญ](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
Hugging Faceใใผใ ใซใใฃใฆไฝใใใ **[ใใฉใณในใใฉใผใใผใไฝฟใฃใๆธใ่พผใฟ](https://transformer.huggingface.co)** ใฏใใใฎใชใใธใใชใฎใใญในใ็ๆๆฉ่ฝใฎๅ
ฌๅผใใขใงใใใ
## Hugging Faceใใผใ ใซใใใซในใฟใ ใปใตใใผใใใๅธๆใฎๅ ดๅ
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## ใฏใคใใฏใใขใผ
ไธใใใใๅ
ฅๅ๏ผใใญในใใ็ปๅใ้ณๅฃฐใ...๏ผใซๅฏพใใฆใใใซใขใใซใไฝฟใใใใซใๆใ
ใฏ`pipeline`ใจใใAPIใๆไพใใฆใใใพใใpipelineใฏใๅญฆ็ฟๆธใฟใฎใขใใซใจใใใฎใขใใซใฎๅญฆ็ฟๆใซไฝฟ็จใใใๅๅฆ็ใใฐใซใผใๅใใใใฎใงใใไปฅไธใฏใ่ฏๅฎ็ใชใใญในใใจๅฆๅฎ็ใชใใญในใใๅ้กใใใใใซpipelineใไฝฟ็จใใๆนๆณใงใ:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
2่ก็ฎใฎใณใผใใงใฏใpipelineใงไฝฟ็จใใใไบๅๅญฆ็ฟๆธใฟใขใใซใใใฆใณใญใผใใใฆใญใฃใใทใฅใใ3่ก็ฎใงใฏไธใใใใใใญในใใซๅฏพใใฆใใฎใขใใซใ่ฉไพกใใพใใใใใงใฏใ็ญใใฏ99.97%ใฎไฟก้ ผๅบฆใงใใใธใใฃใใใงใใ
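ใชใใไปฅไธใฎใใใซ `model` ๅผๆฐใงไฝฟใใใงใใฏใใคใณใใๆ็คบ็ใซๆๅฎใใใใจใใงใใพใ๏ผใใใฏใใใพใงไธไพใงใๆๅฎใใฆใใใขใใซๅใฏHubไธใฎไปปๆใฎ้ฉๅใชใใงใใฏใใคใณใใซ็ฝฎใๆใใใใพใ๏ผ:

```python
>>> from transformers import pipeline

# ไฝฟ็จใใใใงใใฏใใคใณใใๆ็คบ็ใซๆๅฎใใไพ (ใขใใซๅใฏใใใพใงไธไพ)
>>> classifier = pipeline("sentiment-analysis", model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier(["We are very happy.", "We hope you don't hate it."])
# ๅๅ
ฅๅใซๅฏพใใฆ 'label' ใจ 'score' ใๆใค่พๆธใฎใชในใใ่ฟใใใพใ
```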
่ช็ถ่จ่ชๅฆ็ใ ใใงใชใใใณใณใใฅใผใฟใใธใงใณใ้ณๅฃฐๅฆ็ใซใใใฆใใๅคใใฎใฟในใฏใซใฏใใใใใ่จ็ทดใใใ`pipeline`ใ็จๆใใใฆใใใไพใใฐใ็ปๅใใๆคๅบใใใ็ฉไฝใ็ฐกๅใซๆฝๅบใใใใจใใงใใ:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
ใใใงใฏใ็ปๅใใๆคๅบใใใใชใใธใงใฏใใฎใชในใใๅพใใใใชใใธใงใฏใใๅฒใใใใฏในใจไฟก้ ผๅบฆในใณใขใ่กจ็คบใใใพใใๅทฆๅดใๅ
็ปๅใๅณๅดใไบๆธฌ็ตๆใ่กจ็คบใใใใฎใงใ:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
[ใใฎใใฅใผใใชใขใซ](https://huggingface.co/docs/transformers/task_summary)ใงใฏใ`pipeline`APIใงใตใใผใใใใฆใใใฟในใฏใซใคใใฆ่ฉณใใ่ชฌๆใใฆใใพใใ
`pipeline`ใซๅ ใใฆใไธใใใใใฟในใฏใซๅญฆ็ฟๆธใฟใฎใขใใซใใใฆใณใญใผใใใฆไฝฟ็จใใใใใซๅฟ
่ฆใชใฎใฏใ3่กใฎใณใผใใ ใใงใใไปฅไธใฏPyTorchใฎใใผใธใงใณใงใ:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
ใใใฆใใกใใฏTensorFlowใจๅ็ญใฎใณใผใใจใชใใพใ:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
ใใผใฏใใคใถใฏๅญฆ็ฟๆธใฟใขใใซใๆๅพ
ใใใในใฆใฎๅๅฆ็ใๆ
ๅฝใใๅไธใฎๆๅญๅ (ไธ่จใฎไพใฎใใใซ) ใพใใฏใชในใใซๅฏพใใฆ็ดๆฅๅผใณๅบใใใจใใงใใพใใใใใฏไธๆตใฎใณใผใใงไฝฟ็จใงใใ่พๆธใๅบๅใใพใใใพใใๅ็ดใซ ** ๅผๆฐๅฑ้ๆผ็ฎๅญใไฝฟ็จใใฆใขใใซใซ็ดๆฅๆธกใใใจใใงใใพใใ
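ไธไพใจใใฆใใใผใฏใใคใถใๆๅญๅใฎใชในใใซๅฏพใใฆๅผใณๅบใใใใฎ็ตๆใ `**` ๆผ็ฎๅญใงใใฎใพใพใขใใซใซๆธกใๆๅฐ้ใฎใณใผใใฏๆฌกใฎใใใซใชใใพใ(PyTorchใฎๅ ดๅ):

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")

# ใชในใใๆธกใๅ ดๅใฏใใใใฃใณใฐใๆๅนใซใใฆๅใ้ทใใฎใใณใฝใซใซๆใใพใ
>>> batch = tokenizer(["Hello world!", "Hello transformers!"], padding=True, return_tensors="pt")
>>> sorted(batch.keys())
['attention_mask', 'input_ids', 'token_type_ids']
# ่พๆธใ ** ใงๅฑ้ใใฆใใขใใซใซ็ดๆฅๆธกใใพใ
>>> outputs = model(**batch)
```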
ใขใใซ่ชไฝใฏ้ๅธธใฎ[Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ใพใใฏ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (ใใใฏใจใณใใซใใฃใฆ็ฐใชใ)ใงใ้ๅธธ้ใไฝฟ็จใใใใจใๅฏ่ฝใงใใ[ใใฎใใฅใผใใชใขใซ](https://huggingface.co/docs/transformers/training)ใงใฏใใใฎใใใชใขใใซใๅพๆฅใฎPyTorchใTensorFlowใฎๅญฆ็ฟใซใผใใซ็ตฑๅใใๆนๆณใใ็งใใกใฎ`Trainer`APIใไฝฟใฃใฆๆฐใใใใผใฟใปใใใง็ด ๆฉใๅพฎ่ชฟๆดใ่กใๆนๆณใซใคใใฆ่ชฌๆใใพใใ
## ใชใtransformersใไฝฟใๅฟ
่ฆใใใใฎใงใใใใ๏ผ
1. ไฝฟใใใใๆๆฐใขใใซ:
- ่ช็ถ่จ่ช็่งฃใป็ๆใใณใณใใฅใผใฟใใธใงใณใใชใผใใฃใชใฎๅใฟในใฏใง้ซใใใใฉใผใใณในใ็บๆฎใใพใใ
- ๆ่ฒ่
ใๅฎๅ่
ใซใจใฃใฆใฎไฝใๅๅ
ฅ้ๅฃใ
- ๅญฆ็ฟใใใฏใฉในใฏ3ใคใ ใใงใใฆใผใถใ็ด้ขใใๆฝ่ฑกๅใฏใปใจใใฉใใใพใใใ
- ๅญฆ็ฟๆธใฟใขใใซใๅฉ็จใใใใใฎ็ตฑไธใใใAPIใ
1. ไฝใ่จ็ฎใณในใใๅฐใชใใซใผใใณใใใใใชใณใ:
- ็ ็ฉถ่
ใฏใๅธธใซๅใใฌใผใใณใฐใ่กใใฎใงใฏใชใใใใฌใผใใณใฐใใใใขใใซใๅ
ฑๆใใใใจใใงใใพใใ
- ๅฎๅๅฎถใฏใ่จ็ฎๆ้ใ็็ฃใณในใใๅๆธใใใใจใใงใใพใใ
- ใในใฆใฎใขใใชใใฃใซใใใฆใ60,000ไปฅไธใฎไบๅๅญฆ็ฟๆธใฟใขใใซใๆใคๆฐๅคใใฎใขใผใญใใฏใใฃใๆไพใใพใใ
1. ใขใใซใฎใฉใคใใฟใคใ ใฎใใใใ้จๅใง้ฉๅใชใใฌใผใ ใฏใผใฏใ้ธๆๅฏ่ฝ:
- 3่กใฎใณใผใใงๆๅ
็ซฏใฎใขใใซใใใฌใผใใณใฐใ
- TF2.0/PyTorch/JAXใใฌใผใ ใฏใผใฏ้ใง1ใคใฎใขใใซใ่ชๅจใซ็งปๅใใใใ
- ๅญฆ็ฟใ่ฉไพกใ็็ฃใซ้ฉใใใใฌใผใ ใฏใผใฏใใทใผใ ใฌในใซ้ธๆใงใใพใใ
1. ใขใใซใใตใณใใซใใใผใบใซๅใใใฆ็ฐกๅใซใซในใฟใใคใบๅฏ่ฝ:
- ๅ่่
ใ็บ่กจใใ็ตๆใๅ็พใใใใใซใๅใขใผใญใใฏใใฃใฎไพใๆไพใใฆใใพใใ
- ใขใใซๅ
้จใฏๅฏ่ฝใช้ใไธ่ฒซใใฆๅ
ฌ้ใใใฆใใพใใ
- ใขใใซใใกใคใซใฏใฉใคใใฉใชใจใฏ็ฌ็ซใใฆๅฉ็จใใใใจใใงใใ่ฟ
้ใชๅฎ้จใๅฏ่ฝใงใใ
## ใชใtransformersใไฝฟใฃใฆใฏใใใชใใฎใงใใใใ๏ผ
- ใใฎใฉใคใใฉใชใฏใใใฅใผใฉใซใใใใฎใใใฎใใซใใฃใณใฐใใญใใฏใฎใขใธใฅใผใซๅผใใผใซใใใฏในใงใฏใใใพใใใใขใใซใใกใคใซใฎใณใผใใฏใ็ ็ฉถ่
ใ่ฟฝๅ ใฎๆฝ่ฑกๅ/ใใกใคใซใซ้ฃใณ่พผใใใจใชใใๅใขใใซใ็ด ๆฉใๅๅพฉใงใใใใใซใๆๅณ็ใซ่ฟฝๅ ใฎๆฝ่ฑกๅใงใชใใกใฏใฟใชใณใฐใใใฆใใพใใใ
- ๅญฆ็ฟAPIใฏใฉใฎใใใชใขใใซใงใๅไฝใใใใใงใฏใชใใใฉใคใใฉใชใๆไพใใใขใใซใงๅไฝใใใใใซๆ้ฉๅใใใฆใใพใใไธ่ฌ็ใชๆฉๆขฐๅญฆ็ฟใฎใซใผใใซใฏใๅฅใฎใฉใคใใฉใช(ใใใใ[Accelerate](https://huggingface.co/docs/accelerate))ใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
- ็งใใกใฏใงใใใ ใๅคใใฎไฝฟ็จไพใ็ดนไปใใใใๅชๅใใฆใใพใใใ[examples ใใฉใซใ](https://github.com/huggingface/transformers/tree/main/examples) ใซใใในใฏใชใใใฏใใใพใงไพใงใใใใชใใฎ็นๅฎใฎๅ้กใซๅฏพใใฆใใใซๅไฝใใใใใงใฏใชใใใใชใใฎใใผใบใซๅใใใใใใซๆฐ่กใฎใณใผใใๅคๆดใใๅฟ
่ฆใใใใใจใไบๆณใใใพใใ
## ใคใณในใใผใซ
### pipใซใฆ
ใใฎใชใใธใใชใฏใPython 3.8+, Flax 0.4.1+, PyTorch 1.11+, TensorFlow 2.6+ ใงใในใใใใฆใใพใใ
๐คTransformersใฏ[ไปฎๆณ็ฐๅข](https://docs.python.org/3/library/venv.html)ใซใคใณในใใผใซใใๅฟ
่ฆใใใใพใใPythonใฎไปฎๆณ็ฐๅขใซๆ
ฃใใฆใใชใๅ ดๅใฏใ[ใฆใผใถใผใฌใคใ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ็ขบ่ชใใฆใใ ใใใ
ใพใใไฝฟ็จใใใใผใธใงใณใฎPythonใงไปฎๆณ็ฐๅขใไฝๆใใใขใฏใใฃใใผใใใพใใ
ใใฎๅพใFlax, PyTorch, TensorFlowใฎใใกๅฐใชใใจใ1ใคใใคใณในใใผใซใใๅฟ
่ฆใใใใพใใ
[TensorFlowใคใณในใใผใซใใผใธ](https://www.tensorflow.org/install/)ใ[PyTorchใคใณในใใผใซใใผใธ](https://pytorch.org/get-started/locally/#start-locally)ใ[Flax](https://github.com/google/flax#quick-install)ใ[Jax](https://github.com/google/jax#installation)ใคใณในใใผใซใใผใธใงใใไฝฟใใฎใใฉใใใใฉใผใ ๅฅใฎใคใณในใใผใซใณใใณใใๅ็
งใใฆใใ ใใใ
ใใใใฎใใใฏใจใณใใฎใใใใใใคใณในใใผใซใใใฆใใๅ ดๅใ๐คTransformersใฏไปฅไธใฎใใใซpipใไฝฟ็จใใฆใคใณในใใผใซใใใใจใใงใใพใ:
```bash
pip install transformers
```
ใใใตใณใใซใ่ฉฆใใใใใพใใฏใณใผใใฎๆๅ
็ซฏใๅฟ
่ฆใงใๆฐใใใชใชใผในใๅพ
ใฆใชใๅ ดๅใฏใ[ใฉใคใใฉใชใใฝใผในใใใคใณในใใผใซ](https://huggingface.co/docs/transformers/installation#installing-from-source)ใใๅฟ
่ฆใใใใพใใ
### condaใซใฆ
๐คTransformersใฏไปฅไธใฎใใใซcondaใไฝฟใฃใฆ่จญ็ฝฎใใใใจใใงใใพใ:
```shell script
conda install conda-forge::transformers
```
> **_ๆณจๆ:_** `huggingface` ใใฃใณใใซใใ `transformers` ใใคใณในใใผใซใใใใจใฏ้ๆจๅฅจใงใใ
FlaxใPyTorchใTensorFlowใcondaใงใคใณในใใผใซใใๆนๆณใฏใใใใใใฎใคใณในใใผใซใใผใธใซๅพใฃใฆใใ ใใใ
> **_ๆณจๆ:_** Windowsใงใฏใใญใฃใใทใฅใฎๆฉๆตใๅใใใใใซใใใใญใใใผใขใผใใๆๅนใซใใใใไฟใใใใใจใใใใพใใใใฎใใใชๅ ดๅใฏใ[ใใฎissue](https://github.com/huggingface/huggingface_hub/issues/1062)ใงใ็ฅใใใใ ใใใ
## ใขใใซใขใผใญใใฏใใฃ
๐คTransformersใๆไพใใ **[ๅ
จใขใใซใใงใใฏใใคใณใ](https://huggingface.co/models)** ใฏใ[ใฆใผใถใผ](https://huggingface.co/users)ใ[็ต็น](https://huggingface.co/organizations)ใซใใฃใฆ็ดๆฅใขใใใญใผใใใใhuggingface.co [model hub](https://huggingface.co)ใใใทใผใ ใฌในใซ็ตฑๅใใใฆใใพใใ
็พๅจใฎใใงใใฏใใคใณใๆฐ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐คTransformersใฏ็พๅจใไปฅไธใฎใขใผใญใใฏใใฃใๆไพใใฆใใพใ: ใใใใใฎใใคใฌใใซใช่ฆ็ดใฏ[ใใกใ](https://huggingface.co/docs/transformers/model_summary)ใๅ็
งใใฆใใ ใใใ
ๅใขใใซใFlaxใPyTorchใTensorFlowใงๅฎ่ฃ
ใใใฆใใใใ๐คTokenizersใฉใคใใฉใชใซๆฏใใใใ้ข้ฃใใผใฏใใคใถใๆใฃใฆใใใใฏใ[ใใฎ่กจ](https://huggingface.co/docs/transformers/index#supported-frameworks)ใๅ็
งใใฆใใ ใใใ
ใใใใฎๅฎ่ฃ
ใฏใใใคใใฎใใผใฟใปใใใงใในใใใใฆใใ(ใตใณใใซในใฏใชใใใๅ็
ง)ใใชใชใธใใซใฎๅฎ่ฃ
ใฎๆง่ฝใจไธ่ดใใใฏใใงใใใๆง่ฝใฎ่ฉณ็ดฐใฏ[documentation](https://github.com/huggingface/transformers/tree/main/examples)ใฎExamplesใปใฏใทใงใณใง่ฆใใใจใใงใใพใใ
## ใใใซ่ฉณใใ
| ใปใฏใทใงใณ | ๆฆ่ฆ |
|-|-|
| [ใใญใฅใกใณใ](https://huggingface.co/docs/transformers/) | ๅฎๅ
จใชAPIใใญใฅใกใณใใจใใฅใผใใชใขใซ |
| [ใฟในใฏๆฆ่ฆ](https://huggingface.co/docs/transformers/task_summary) | ๐คTransformersใใตใใผใใใใฟในใฏ |
| [ๅๅฆ็ใใฅใผใใชใขใซ](https://huggingface.co/docs/transformers/preprocessing) | ใขใใซ็จใฎใใผใฟใๆบๅใใใใใซ`Tokenizer`ใฏใฉในใไฝฟ็จ |
| [ใใฌใผใใณใฐใจๅพฎ่ชฟๆด](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlowใฎๅญฆ็ฟใซใผใใจ`Trainer`APIใง๐คTransformersใๆไพใใใขใใซใไฝฟ็จ |
| [ใฏใคใใฏใใขใผ: ๅพฎ่ชฟๆด/ไฝฟ็จๆนๆณในใฏใชใใ](https://github.com/huggingface/transformers/tree/main/examples) | ๆงใ
ใชใฟในใฏใงใขใใซใฎๅพฎ่ชฟๆดใ่กใใใใฎในใฏใชใใไพ |
| [ใขใใซใฎๅ
ฑๆใจใขใใใญใผใ](https://huggingface.co/docs/transformers/model_sharing) | ๅพฎ่ชฟๆดใใใขใใซใใขใใใญใผใใใฆใณใใฅใใใฃใงๅ
ฑๆใใ |
| [ใใคใฐใฌใผใทใงใณ](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`ใพใใฏ`pytorch-pretrained-bert`ใใ๐คTransformers ใซ็งป่กใใ |
## ๅผ็จ
๐ค ใใฉใณในใใฉใผใใผใฉใคใใฉใชใซๅผ็จใงใใ[่ซๆ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ใๅบๆฅใพใใ:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_es.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<b>Espaรฑol</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>Lo รบltimo de Machine Learning para JAX, PyTorch y TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers aporta miles de modelos preentrenados para realizar tareas en diferentes modalidades como texto, visiรณn, y audio.
Estos modelos pueden ser aplicados en:
* ๐ Texto, para tareas como clasificaciรณn de texto, extracciรณn de informaciรณn, responder preguntas, resumir, traducir, generaciรณn de texto, en mรกs de 100 idiomas.
* ๐ผ๏ธ Imรกgenes, para tareas como clasificaciรณn de imรกgenes, detecciรณn de objetos, y segmentaciรณn.
* ๐ฃ๏ธ Audio, para tareas como reconocimiento de voz y clasificaciรณn de audio.
Los modelos de Transformer tambiรฉn pueden realizar tareas en **muchas modalidades combinadas**, como responder preguntas, reconocimiento รณptico de caracteres, extracciรณn de informaciรณn de documentos escaneados, clasificaciรณn de video, y respuesta de preguntas visuales.
๐ค Transformers aporta APIs para descargar rรกpidamente y usar estos modelos preentrenados en un texto dado, afinarlos en tus propios sets de datos y compartirlos con la comunidad en nuestro [centro de modelos](https://huggingface.co/models). Al mismo tiempo, cada mรณdulo de Python que define una arquitectura es completamente independiente y se puede modificar para permitir experimentos de investigaciรณn rรกpidos.
๐ค Transformers estรก respaldado por las tres bibliotecas de deep learning mรกs populares โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) y [TensorFlow](https://www.tensorflow.org/) โ con una perfecta integraciรณn entre ellos. Es sencillo entrenar sus modelos con uno antes de cargarlos para la inferencia con el otro.
## Demostraciones en lรญnea
Puedes probar la mayorรญa de nuestros modelos directamente en sus pรกginas desde el [centro de modelos](https://huggingface.co/models). Tambiรฉn ofrecemos [alojamiento de modelos privados, control de versiones y una API de inferencia](https://huggingface.co/pricing) para modelos pรบblicos y privados.
Aquรญ hay algunos ejemplos:
En procesamiento del lenguaje natural:
- [Terminaciรณn de palabras enmascaradas con BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Reconocimiento del nombre de la entidad con Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Generaciรณn de texto con GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [Inferencia del lenguaje natural con RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Resumen con BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Responder a preguntas con DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Traducciรณn con T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
En visiรณn de ordenador:
- [Clasificaciรณn de imรกgenes con ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Detecciรณn de objetos con DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Segmentaciรณn semรกntica con SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Segmentaciรณn panรณptica con DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic)
- [Segmentaciรณn Universal con OneFormer (Segmentaciรณn Semรกntica, de Instancia y Panรณptica con un solo modelo)](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
En Audio:
- [Reconocimiento de voz automรกtico con Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Detecciรณn de palabras clave con Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
En tareas multimodales:
- [Respuesta visual a preguntas con ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
**[Escribe con Transformer](https://transformer.huggingface.co)**, construido por el equipo de Hugging Face, es la demostraciรณn oficial de las capacidades de generaciรณn de texto de este repositorio.
## Si estรก buscando soporte personalizado del equipo de Hugging Face
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Tour rรกpido
Para usar inmediatamente un modelo en una entrada determinada (texto, imagen, audio, ...), proporcionamos la API de `pipeline`. Los pipelines agrupan un modelo previamente entrenado con el preprocesamiento que se usรณ durante el entrenamiento de ese modelo. Aquรญ se explica cรณmo usar rรกpidamente un pipeline para clasificar textos positivos frente a negativos:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
La segunda lรญnea de cรณdigo descarga y almacena en cachรฉ el modelo previamente entrenado que usa la canalizaciรณn, mientras que la tercera lo evalรบa en el texto dado. Aquรญ la respuesta es "positiva" con una confianza del 99,97%.
Muchas tareas tienen un `pipeline` preentrenado listo para funcionar, en NLP pero tambiรฉn en visiรณn por ordenador y habla. Por ejemplo, podemos extraer fรกcilmente los objetos detectados en una imagen:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Aquรญ obtenemos una lista de objetos detectados en la imagen, con un cuadro que rodea el objeto y una puntuaciรณn de confianza. Aquรญ estรก la imagen original a la izquierda, con las predicciones mostradas a la derecha:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Puedes obtener mรกs informaciรณn sobre las tareas admitidas por la API de `pipeline` en [este tutorial](https://huggingface.co/docs/transformers/task_summary).
Ademรกs de `pipeline`, para descargar y usar cualquiera de los modelos previamente entrenados en su tarea dada, todo lo que necesita son tres lรญneas de cรณdigo. Aquรญ estรก la versiรณn de PyTorch:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
Y aquรญ estรก el cรณdigo equivalente para TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
El tokenizador es responsable de todo el preprocesamiento que espera el modelo preentrenado y se puede llamar directamente en una sola cadena (como en los ejemplos anteriores) o en una lista. Este darรก como resultado un diccionario que puedes usar en el cรณdigo descendente o simplemente pasarlo directamente a su modelo usando el operador de desempaquetado de argumento **.
El modelo en sรญ es un [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) normal o un [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (dependiendo de tu backend) que puedes usar de forma habitual. [Este tutorial](https://huggingface.co/docs/transformers/training) explica cรณmo integrar un modelo de este tipo en un ciclo de entrenamiento PyTorch o TensorFlow clรกsico, o cรณmo usar nuestra API `Trainer` para ajustar rรกpidamente un nuevo conjunto de datos.
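A modo de ilustraciรณn, este es un esquema mรญnimo (no la รบnica forma de hacerlo) de un paso de entrenamiento dentro de un bucle clรกsico de PyTorch; el nombre del modelo y el lote de ejemplo son puramente ilustrativos:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Nombre de modelo y lote de datos puramente ilustrativos
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Un lote diminuto de ejemplo con sus etiquetas
batch = tokenizer(["I love this!", "I hate this!"], padding=True, return_tensors="pt")
batch["labels"] = torch.tensor([1, 0])

model.train()
outputs = model(**batch)  # al pasar `labels`, el modelo tambiรฉn devuelve la pรฉrdida
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```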
## ยฟPor quรฉ debo usar transformers?
1. Modelos de รบltima generaciรณn fรกciles de usar:
- Alto rendimiento en comprensiรณn y generaciรณn de lenguaje natural, visiรณn artificial y tareas de audio.
- Baja barrera de entrada para educadores y profesionales.
- Pocas abstracciones de cara al usuario con solo tres clases para aprender.
- Una API unificada para usar todos nuestros modelos preentrenados.
1. Menores costes de cรณmputo, menor huella de carbono:
- Los investigadores pueden compartir modelos entrenados en lugar de siempre volver a entrenar.
- Los profesionales pueden reducir el tiempo de cรณmputo y los costos de producciรณn.
- Docenas de arquitecturas con mรกs de 60 000 modelos preentrenados en todas las modalidades.
1. Elija el marco adecuado para cada parte de la vida รบtil de un modelo:
- Entrene modelos de รบltima generaciรณn en 3 lรญneas de cรณdigo.
- Mueva un solo modelo entre los marcos TF2.0/PyTorch/JAX a voluntad.
- Elija sin problemas el marco adecuado para la formaciรณn, la evaluaciรณn y la producciรณn.
1. Personalice fรกcilmente un modelo o un ejemplo segรบn sus necesidades:
- Proporcionamos ejemplos de cada arquitectura para reproducir los resultados publicados por sus autores originales.
- Las partes internas del modelo estรกn expuestas de la forma mรกs consistente posible.
- Los archivos modelo se pueden usar independientemente de la biblioteca para experimentos rรกpidos.
## ยฟPor quรฉ no deberรญa usar transformers?
- Esta biblioteca no es una caja de herramientas modular de bloques de construcciรณn para redes neuronales. El cรณdigo en los archivos del modelo no se refactoriza con abstracciones adicionales a propรณsito, de modo que los investigadores puedan iterar rรกpidamente en cada uno de los modelos sin sumergirse en abstracciones/archivos adicionales.
- La API de entrenamiento no estรก diseรฑada para funcionar en ningรบn modelo, pero estรก optimizada para funcionar con los modelos proporcionados por la biblioteca. Para bucles genรฉricos de aprendizaje automรกtico, debe usar otra biblioteca (posiblemente, [Accelerate](https://huggingface.co/docs/accelerate)).
- Si bien nos esforzamos por presentar tantos casos de uso como sea posible, los scripts en nuestra [carpeta de ejemplos](https://github.com/huggingface/transformers/tree/main/examples) son solo eso: ejemplos. Se espera que no funcionen de forma inmediata en su problema especรญfico y que deba cambiar algunas lรญneas de cรณdigo para adaptarlas a sus necesidades.
## Instalaciรณn
### Con pip
Este repositorio estรก probado en Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ y TensorFlow 2.6+.
Deberรญas instalar ๐ค Transformers en un [entorno virtual](https://docs.python.org/3/library/venv.html). Si no estas familiarizado con los entornos virtuales de Python, consulta la [guรญa de usuario](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
Primero, crea un entorno virtual con la versiรณn de Python que vas a usar y actรญvalo.
Luego, deberรกs instalar al menos uno entre Flax, PyTorch o TensorFlow.
Por favor, ve a la [pรกgina de instalaciรณn de TensorFlow](https://www.tensorflow.org/install/), [pรกgina de instalaciรณn de PyTorch](https://pytorch.org/get-started/locally/#start-locally) y/o las pรกginas de instalaciรณn de [Flax](https://github.com/google/flax#quick-install) y [Jax](https://github.com/google/jax#installation) con respecto al comando de instalaciรณn especรญfico para tu plataforma.
Cuando se ha instalado uno de esos backends, los ๐ค Transformers se pueden instalar usando pip de la siguiente manera:
```bash
pip install transformers
```
Si deseas jugar con los ejemplos o necesitas la รบltima versiรณn del cรณdigo y no puedes esperar a una nueva versiรณn, tienes que [instalar la librerรญa de la fuente](https://huggingface.co/docs/transformers/installation#installing-from-source).
### Con conda
๐ค Transformers se puede instalar usando conda de la siguiente manera:
```shell script
conda install conda-forge::transformers
```
> **_NOTA:_** Instalar `transformers` desde el canal `huggingface` estรก obsoleto.
Sigue las pรกginas de instalaciรณn de Flax, PyTorch o TensorFlow para ver cรณmo instalarlos con conda.
> **_NOTA:_** En Windows, es posible que se le pida que active el modo de desarrollador para beneficiarse del almacenamiento en cachรฉ. Si esta no es una opciรณn para usted, hรกganoslo saber en [esta issue](https://github.com/huggingface/huggingface_hub/issues/1062).
## Arquitecturas modelo
**[Todos los puntos de control del modelo](https://huggingface.co/models)** aportados por ๐ค Transformers estรกn perfectamente integrados desde huggingface.co [Centro de modelos](https://huggingface.co) donde son subidos directamente por los [usuarios](https://huggingface.co/users) y [organizaciones](https://huggingface.co/organizations).
Nรบmero actual de puntos de control: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers actualmente proporciona las siguientes arquitecturas: ver [aquรญ](https://huggingface.co/docs/transformers/model_summary) para un resumen de alto nivel de cada uno de ellas.
Para comprobar si cada modelo tiene una implementaciรณn en Flax, PyTorch o TensorFlow, o tiene un tokenizador asociado respaldado por la librerรญa ๐ค Tokenizers, ve a [esta tabla](https://huggingface.co/docs/transformers/index#supported-frameworks).
Estas implementaciones se han probado en varios conjuntos de datos (consulte los scripts de ejemplo) y deberรญan coincidir con el rendimiento de las implementaciones originales. Puede encontrar mรกs detalles sobre el rendimiento en la secciรณn Examples de la [documentaciรณn](https://github.com/huggingface/transformers/tree/main/examples).
## Aprender mรกs
| Secciรณn | Descripciรณn |
|-|-|
| [Documentaciรณn](https://huggingface.co/docs/transformers/) | Toda la documentaciรณn de la API y tutoriales |
| [Resumen de tareas](https://huggingface.co/docs/transformers/task_summary) | Tareas soportadas ๐ค Transformers |
| [Tutorial de preprocesamiento](https://huggingface.co/docs/transformers/preprocessing) | Usando la clase `Tokenizer` para preparar datos para los modelos |
| [Entrenamiento y puesta a punto](https://huggingface.co/docs/transformers/training) | Usando los modelos aportados por ๐ค Transformers en un bucle de entreno de PyTorch/TensorFlow y la API de `Trainer` |
| [Recorrido rรกpido: secuencias de comandos de ajuste/uso](https://github.com/huggingface/transformers/tree/main/examples) | Scripts de ejemplo para ajustar modelos en una amplia gama de tareas |
| [Compartir y subir modelos](https://huggingface.co/docs/transformers/model_sharing) | Carga y comparte tus modelos perfeccionados con la comunidad |
| [Migraciรณn](https://huggingface.co/docs/transformers/migration) | Migra a ๐ค Transformers desde `pytorch-transformers` o `pytorch-pretrained-bert` |
## Citaciรณn
Ahora nosotros tenemos un [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que puedes citar para la librerรญa de ๐ค Transformers:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_te.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<b>เฐคเฑเฐฒเฑเฐเฑ</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>JAX, PyTorch เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow เฐเฑเฐธเฐ เฐ
เฐคเฑเฐฏเฐพเฐงเฑเฐจเฐฟเฐ เฐฏเฐเฐคเฑเฐฐ เฐ
เฐญเฑเฐฏเฐพเฐธเฐ</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐเฑเฐเฑเฐธเฑเฐเฑ, เฐตเฐฟเฐเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐกเฐฟเฐฏเฑ เฐตเฐเฐเฐฟ เฐตเฐฟเฐญเฐฟเฐจเฑเฐจ เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐชเฑ เฐเฐพเฐธเฑเฐเฑโเฐฒเฐจเฑ เฐจเฐฟเฐฐเฑเฐตเฐนเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ เฐตเฑเฐฒเฐพเฐฆเฐฟ เฐฎเฑเฐเฐฆเฑเฐเฐพ เฐถเฐฟเฐเฑเฐทเฐฃ เฐชเฑเฐเฐฆเฐฟเฐจ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐจเฑ เฐ
เฐเฐฆเฐฟเฐธเฑเฐคเฐพเฐฏเฐฟ.
เฐ เฐจเฐฎเฑเฐจเฐพเฐฒเฑ เฐตเฐฐเฑเฐคเฐฟเฐเฐเฐตเฐเฑเฐเฑ:
* ๐ เฐเฑเฐเฑเฐธเฑเฐเฑ, 100เฐเฐฟ เฐชเฑเฐเฐพ เฐญเฐพเฐทเฐฒเฑเฐฒเฑ เฐเฑเฐเฑเฐธเฑเฐเฑ เฐเฑเฐฒเฐพเฐธเฐฟเฐซเฐฟเฐเฑเฐทเฐจเฑ, เฐเฐจเฑเฐซเฐฐเฑเฐฎเฑเฐทเฐจเฑ เฐเฐเฑเฐธเฑโเฐเฑเฐฐเฐพเฐเฑเฐทเฐจเฑ, เฐชเฑเฐฐเฐถเฑเฐจเฐฒเฐเฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐพเฐฒเฑ, เฐธเฐพเฐฐเฐพเฐเฐถเฐ, เฐ
เฐจเฑเฐตเฐพเฐฆเฐ, เฐเฑเฐเฑเฐธเฑเฐเฑ เฐเฐจเฐฐเฑเฐทเฐจเฑ เฐตเฐเฐเฐฟ เฐชเฐจเฑเฐฒ เฐเฑเฐธเฐ.
* ๐ผ๏ธ เฐเฐฎเฑเฐเฑโเฐฒเฑ, เฐเฐฎเฑเฐเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ, เฐเฐฌเฑเฐเฑเฐเฑเฐเฑ เฐกเฐฟเฐเฑเฐเฑเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐธเฑเฐเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ เฐตเฐเฐเฐฟ เฐชเฐจเฑเฐฒ เฐเฑเฐธเฐ.
* ๐ฃ๏ธ เฐเฐกเฐฟเฐฏเฑ, เฐธเฑเฐชเฑเฐเฑ เฐฐเฐฟเฐเฐเฑเฐจเฐฟเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐกเฐฟเฐฏเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ เฐตเฐเฐเฐฟ เฐชเฐจเฑเฐฒ เฐเฑเฐธเฐ.
เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฑ เฐเฑเฐฌเฑเฐฒเฑ เฐเฑเฐตเฐถเฑเฐเฐจเฑ เฐเฐจเฑเฐธเฐฐเฑ เฐเฑเฐฏเฐกเฐ, เฐเฐชเฑเฐเฐฟเฐเฐฒเฑ เฐเฑเฐฏเฐพเฐฐเฑเฐเฑเฐเฐฐเฑ เฐฐเฐฟเฐเฐเฑเฐจเฐฟเฐทเฐจเฑ, เฐธเฑเฐเฐพเฐจเฑ เฐเฑเฐธเฐฟเฐจ เฐกเฐพเฐเฑเฐฏเฑเฐฎเฑเฐเฐเฑโเฐฒ เฐจเฑเฐเฐกเฐฟ เฐเฐจเฑเฐซเฐฐเฑเฐฎเฑเฐทเฐจเฑ เฐเฐเฑเฐธเฑโเฐเฑเฐฐเฐพเฐเฑเฐทเฐจเฑ, เฐตเฑเฐกเฐฟเฐฏเฑ เฐเฑเฐฒเฐพเฐธเฐฟเฐซเฐฟเฐเฑเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐฟเฐเฑเฐตเฐฒเฑ เฐเฑเฐตเฐถเฑเฐเฐจเฑ เฐเฐจเฑเฐธเฐฐเฑ เฐเฑเฐฏเฐกเฐ เฐตเฐเฐเฐฟ **เฐ
เฐจเฑเฐ เฐชเฐฆเฑเฐงเฐคเฑเฐฒเฐคเฑ เฐเฐฒเฐฟเฐชเฐฟ** เฐชเฐจเฑเฐฒเฐจเฑ เฐเฑเฐกเฐพ เฐเฑเฐฏเฐเฐฒเฐตเฑ.
๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐ
เฐเฐฆเฐฟเฐเฐเฐฟเฐจ เฐเฑเฐเฑเฐธเฑเฐเฑโเฐฒเฑ เฐชเฑเฐฐเฑเฐเฑเฐฐเฑเฐจเฑเฐกเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐจเฑ เฐคเฑเฐตเฐฐเฐเฐพ เฐกเฑเฐจเฑโเฐฒเฑเฐกเฑ เฐเฑเฐฏเฐกเฐพเฐจเฐฟเฐเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ, เฐตเฐพเฐเฐฟเฐจเฐฟ เฐฎเฑ เฐธเฑเฐตเฐเฐค เฐกเฑเฐเฐพเฐธเฑเฐเฑโเฐฒเฐฒเฑ เฐซเฑเฐจเฑ-เฐเฑเฐฏเฑเฐจเฑ เฐเฑเฐฏเฐกเฐพเฐจเฐฟเฐเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐตเฐพเฐเฐฟเฐจเฐฟ เฐฎเฐพ [เฐฎเฑเฐกเฐฒเฑ เฐนเฐฌเฑ](https://huggingface.co/models)เฐฒเฑ เฐธเฐเฐเฐเฐคเฑ เฐญเฐพเฐเฐธเฑเฐตเฐพเฐฎเฑเฐฏเฐ เฐเฑเฐฏเฐกเฐพเฐจเฐฟเฐเฐฟ API เฐฒเฐจเฑ เฐ
เฐเฐฆเฐฟเฐธเฑเฐคเฑเฐเฐฆเฐฟ. เฐ
เฐฆเฑ เฐธเฐฎเฐฏเฐเฐฒเฑ, เฐเฐฐเฑเฐเฐฟเฐเฑเฐเฑเฐเฐฐเฑโเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐตเฐเฐฟเฐเฐเฑ เฐชเฑเฐฐเฐคเฐฟ เฐชเฑเฐฅเฐพเฐจเฑ เฐฎเฐพเฐกเฑเฐฏเฑเฐฒเฑ เฐชเฑเฐฐเฑเฐคเฐฟเฐเฐพ เฐธเฑเฐตเฐคเฐเฐคเฑเฐฐเฐเฐเฐพ เฐเฐเฐเฑเฐเฐฆเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฐฟเฐถเฑเฐงเฐจ เฐชเฑเฐฐเฐฏเฑเฐเฐพเฐฒเฐจเฑ เฐชเฑเฐฐเฐพเฐฐเฐเฐญเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ เฐธเฐตเฐฐเฐฟเฐเฐเฐตเฐเฑเฐเฑ.
๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐเฑ เฐฎเฑเฐกเฑ เฐ
เฐคเฑเฐฏเฐเฐค เฐชเฑเฐฐเฐเฐพเฐฆเฐฐเฐฃ เฐชเฑเฐเฐฆเฐฟเฐจ เฐกเฑเฐชเฑ เฐฒเฑเฐฐเฑเฐจเฐฟเฐเฐเฑ เฐฒเฑเฐฌเฑเฐฐเฐฐเฑเฐฒเฑ เฐเฐจเฑเฐจเฐพเฐฏเฐฟ โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) เฐฎเฐฐเฐฟเฐฏเฑ [TensorFlow](https://www.tensorflow.org/) โ เฐตเฐพเฐเฐฟ เฐฎเฐงเฑเฐฏ เฐ
เฐคเฑเฐเฑเฐฒเฑ เฐฒเฑเฐจเฐฟ เฐเฐเฑเฐเฐฐเฐฃเฐคเฑ. เฐฎเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐจเฑ เฐเฐเฐฆเฐพเฐจเฐฟเฐคเฑ เฐฎเฐฐเฑเฐเฐฆเฐพเฐจเฐฟเฐคเฑ เฐ
เฐจเฑเฐฎเฐฟเฐคเฐฟ เฐเฑเฐธเฐ เฐฒเฑเฐกเฑ เฐเฑเฐธเฑ เฐฎเฑเฐเฐฆเฑ เฐตเฐพเฐเฐฟเฐเฐฟ เฐถเฐฟเฐเฑเฐทเฐฃ เฐเฐตเฑเฐตเฐกเฐ เฐเฐพเฐฒเฐพ เฐธเฑเฐฒเฐญเฐ.
## เฐเฐจเฑโเฐฒเฑเฐจเฑ เฐกเฑเฐฎเฑเฐฒเฑ
เฐฎเฑเฐฐเฑ [เฐฎเฑเฐกเฐฒเฑ เฐนเฐฌเฑ](https://huggingface.co/models) เฐจเฑเฐเฐกเฐฟ เฐฎเฐพ เฐฎเฑเฐกเฐณเฑเฐฒเฐฒเฑ เฐเฐพเฐฒเฐพ เฐตเฐฐเฐเฑ เฐตเฐพเฐเฐฟ เฐชเฑเฐเฑเฐฒเฐฒเฑ เฐจเฑเฐฐเฑเฐเฐพ เฐชเฐฐเฑเฐเฑเฐทเฐฟเฐเฐเฐตเฐเฑเฐเฑ. เฐฎเฑเฐฎเฑ เฐชเฐฌเฑเฐฒเฐฟเฐเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐฐเฑเฐตเฑเฐเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒ เฐเฑเฐธเฐ [เฐชเฑเฐฐเฑเฐตเฑเฐเฑ เฐฎเฑเฐกเฐฒเฑ เฐนเฑเฐธเฑเฐเฐฟเฐเฐเฑ, เฐธเฐเฐธเฑเฐเฐฐเฐฃ & เฐ
เฐจเฑเฐฎเฐฟเฐคเฐฟ API](https://huggingface.co/pricing)เฐจเฐฟ เฐเฑเฐกเฐพ เฐ
เฐเฐฆเฐฟเฐธเฑเฐคเฐพเฐฎเฑ.
เฐเฐเฑเฐเฐก เฐเฑเฐจเฑเฐจเฐฟ เฐเฐฆเฐพเฐนเฐฐเฐฃเฐฒเฑ เฐเฐจเฑเฐจเฐพเฐฏเฐฟ:
เฐธเฐนเฐ เฐญเฐพเฐทเฐพ เฐชเฑเฐฐเฐพเฐธเฑเฐธเฐฟเฐเฐเฑโเฐฒเฑ:
- [BERT เฐคเฑ เฐฎเฐพเฐธเฑเฐเฑโเฐกเฑ เฐตเฐฐเฑเฐกเฑ เฐเฐเฐชเฑเฐฒเฑเฐทเฐจเฑ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Electra เฐคเฑ เฐชเฑเฐฐเฑ เฐเฐเฐเฐฟเฐเฑ เฐเฑเฐฐเฑเฐคเฐฟเฐเฐชเฑ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [GPT-2 เฐคเฑ เฐเฑเฐเฑเฐธเฑเฐเฑ เฐเฐจเฐฐเฑเฐทเฐจเฑ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [RoBERTa เฐคเฑ เฐธเฐนเฐ เฐญเฐพเฐทเฐพ เฐ
เฐจเฑเฐฎเฐฟเฐคเฐฟ](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+Lost.+Nobody+lost+any+animal)
- [BART เฐคเฑ เฐธเฐพเฐฐเฐพเฐเฐถเฐ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [DistilBERT เฐคเฑ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [T5 తో అనువాదం](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
เฐเฐเฐชเฑเฐฏเฑเฐเฐฐเฑ เฐฆเฑเฐทเฑเฐเฐฟเฐฒเฑ:
- [VIT เฐคเฑ เฐเฐฟเฐคเฑเฐฐ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ](https://huggingface.co/google/vit-base-patch16-224)
- [DETR เฐคเฑ เฐเฐฌเฑเฐเฑเฐเฑเฐเฑ เฐกเฐฟเฐเฑเฐเฑเฐทเฐจเฑ](https://huggingface.co/facebook/detr-resnet-50)
- [SegFormer เฐคเฑ เฐธเฑเฐฎเฐพเฐเฐเฐฟเฐเฑ เฐธเฑเฐเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [MaskFormer เฐคเฑ เฐชเฐพเฐจเฑเฐชเฑเฐเฐฟเฐเฑ เฐธเฑเฐเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [DPT తో లోతు అంచనా](https://huggingface.co/docs/transformers/model_doc/dpt)
- [VideoMAE เฐคเฑ เฐตเฑเฐกเฐฟเฐฏเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ](https://huggingface.co/docs/transformers/model_doc/videomae)
- [OneFormer เฐคเฑ เฐฏเฑเฐจเฐฟเฐตเฐฐเฑเฐธเฐฒเฑ เฐธเฑเฐเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
เฐเฐกเฐฟเฐฏเฑเฐฒเฑ:
- [Wav2Vec2 เฐคเฑ เฐเฐเฑเฐฎเฑเฐเฐฟเฐเฑ เฐธเฑเฐชเฑเฐเฑ เฐฐเฐฟเฐเฐเฑเฐจเฐฟเฐทเฐจเฑ](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Wav2Vec2 เฐคเฑ เฐเฑเฐตเฐฐเฑเฐกเฑ เฐธเฑเฐชเฐพเฐเฐฟเฐเฐเฑ](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [เฐเฐกเฐฟเฐฏเฑ เฐธเฑเฐชเฑเฐเฑเฐเฑเฐฐเฑเฐเฑเฐฐเฐพเฐฎเฑ เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐคเฑ เฐเฐกเฐฟเฐฏเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
เฐฎเฐฒเฑเฐเฑเฐฎเฑเฐกเฐฒเฑ เฐเฐพเฐธเฑเฐเฑโเฐฒเฐฒเฑ:
- [TAPAS เฐคเฑ เฐเฑเฐฌเฑเฐฒเฑ เฐชเฑเฐฐเฐถเฑเฐจ เฐธเฐฎเฐพเฐงเฐพเฐจเฐพเฐฒเฑ](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [ViLT เฐคเฑ เฐฆเฑเฐถเฑเฐฏเฐฎเฐพเฐจ เฐชเฑเฐฐเฐถเฑเฐจเฐเฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [CLIP เฐคเฑ เฐเฑเฐฐเฑ-เฐทเฐพเฐเฑ เฐเฐฎเฑเฐเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ](https://huggingface.co/openai/clip-vit-large-patch14)
- [LayoutLM เฐคเฑ เฐกเฐพเฐเฑเฐฏเฑเฐฎเฑเฐเฐเฑ เฐชเฑเฐฐเฐถเฑเฐจเฐเฑ เฐธเฐฎเฐพเฐงเฐพเฐจเฐ](https://huggingface.co/impira/layoutlm-document-qa)
- [X-CLIP เฐคเฑ เฐเฑเฐฐเฑ-เฐทเฐพเฐเฑ เฐตเฑเฐกเฐฟเฐฏเฑ เฐตเฐฐเฑเฐเฑเฐเฐฐเฐฃ](https://huggingface.co/docs/transformers/model_doc/xclip)
## เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐฟ 100 เฐชเฑเฐฐเฐพเฐเฑเฐเฑเฐเฑเฐฒเฑ
เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฑเฐเฑเฐฐเฑเฐจเฑเฐกเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐจเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ เฐเฑเฐฒเฑโเฐเฐฟเฐเฑ เฐเฐเฐเฑ เฐเฐเฑเฐเฑเฐต: เฐเฐฆเฐฟ เฐฆเฐพเฐจเฐฟ เฐเฑเฐเฑเฐเฑ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐเฐเฐฟเฐจ เฐชเฑเฐฐเฐพเฐเฑเฐเฑเฐเฑโเฐฒ เฐธเฐเฐเฐ เฐฎเฐฐเฐฟเฐฏเฑ
హగ్గింగ్ ఫేస్ హబ్. డెవలపర్‌లు, పరిశోధకులు, విద్యార్థులు, ప్రొఫెసర్‌లు, ఇంజనీర్లు మరియు ఎవరినైనా అనుమతించేలా ట్రాన్స్‌ఫార్మర్‌లను మేము కోరుకుంటున్నాము
เฐตเฐพเฐฐเฐฟ เฐเฐฒเฐฒ เฐชเฑเฐฐเฐพเฐเฑเฐเฑเฐเฑเฐฒเฐจเฑ เฐจเฐฟเฐฐเฑเฐฎเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ.
เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒ 100,000 เฐจเฐเฑเฐทเฐคเฑเฐฐเฐพเฐฒเฐจเฑ เฐเฐฐเฑเฐชเฑเฐเฑเฐตเฐกเฐพเฐจเฐฟเฐเฐฟ, เฐฎเฑเฐฎเฑ เฐธเฑเฐชเฐพเฐเฑโเฐฒเฑเฐเฑโเฐจเฐฟ เฐเฐเฐเฐพเฐฒเฐจเฐฟ เฐจเฐฟเฐฐเฑเฐฃเฐฏเฐฟเฐเฐเฑเฐเฑเฐจเฑเฐจเฐพเฐฎเฑ
เฐธเฐเฐเฐ, เฐฎเฐฐเฐฟเฐฏเฑ เฐฎเฑเฐฎเฑ 100 เฐเฐพเฐฌเฐฟเฐคเฐพเฐฒเฐจเฑ เฐเฐฒเฐฟเฐเฐฟ เฐเฐจเฑเฐจ [awesome-transformers](./awesome-transformers.md) เฐชเฑเฐเฑเฐจเฐฟ เฐธเฑเฐทเฑเฐเฐฟเฐเฐเฐพเฐฎเฑ.
ట్రాన్స్‌ఫార్మర్ల పరిసరాల్లో అద్భుతమైన ప్రాజెక్టులు నిర్మించబడ్డాయి.
เฐเฐพเฐฌเฐฟเฐคเฐพเฐฒเฑ เฐญเฐพเฐเฐฎเฐจเฐฟ เฐฎเฑเฐฐเฑ เฐตเฐฟเฐถเฑเฐตเฐธเฐฟเฐเฐเฑ เฐชเฑเฐฐเฐพเฐเฑเฐเฑเฐเฑโเฐจเฑ เฐฎเฑเฐฐเฑ เฐเฐฒเฐฟเฐเฐฟ เฐเฐเฐเฑ เฐฒเฑเฐฆเฐพ เฐเฐชเฐฏเฑเฐเฐฟเฐธเฑเฐคเฑเฐเฐเฑ, เฐฆเฐฏเฐเฑเฐธเฐฟ เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐเฑเฐกเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ PRเฐจเฐฟ เฐคเฑเฐฐเฐตเฐเฐกเฐฟ!
## మీరు హగ్గింగ్ ఫేస్ టీమ్ నుండి అనుకూల మద్దతు కోసం చూస్తున్నట్లయితే
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐเฐจ
ఇచ్చిన ఇన్‌పుట్ (టెక్స్ట్, ఇమేజ్, ఆడియో, ...)పై తక్షణమే మోడల్‌ను ఉపయోగించడానికి, మేము `pipeline` API ని అందిస్తాము. పైప్‌లైన్‌లు ఆ మోడల్ శిక్షణ సమయంలో ఉపయోగించిన ప్రీప్రాసెసింగ్‌తో కూడిన ప్రీట్రైన్డ్ మోడల్‌ను సమూహపరుస్తాయి. సానుకూల మరియు ప్రతికూల పాఠాలను వర్గీకరించడానికి పైప్‌లైన్‌ను త్వరగా ఎలా ఉపయోగించాలో ఇక్కడ ఉంది:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
เฐฐเฑเฐเฐกเฐต เฐฒเฑเฐจเฑ เฐเฑเฐกเฑ เฐกเฑเฐจเฑโเฐฒเฑเฐกเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐชเฑเฐชเฑโเฐฒเฑเฐจเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฑ เฐชเฑเฐฐเฑเฐเฑเฐฐเฑเฐจเฑเฐกเฑ เฐฎเฑเฐกเฐฒเฑโเฐจเฑ เฐเฐพเฐทเฑ เฐเฑเฐธเฑเฐคเฑเฐเฐฆเฐฟ, เฐฎเฑเฐกเฐตเฐฆเฐฟ เฐเฐเฑเฐเฐฟเฐจ เฐเฑเฐเฑเฐธเฑเฐเฑโเฐชเฑ เฐฎเฑเฐฒเฑเฐฏเฐพเฐเฐเฐจเฐ เฐเฑเฐธเฑเฐคเฑเฐเฐฆเฐฟ. เฐเฐเฑเฐเฐก เฐธเฐฎเฐพเฐงเฐพเฐจเฐ 99.97% เฐตเฐฟเฐถเฑเฐตเฐพเฐธเฐเฐคเฑ "เฐชเฐพเฐเฐฟเฐเฐฟเฐตเฑ".
เฐเฐพเฐฒเฐพ เฐชเฐจเฑเฐฒเฑ NLPเฐฒเฑ เฐเฐพเฐจเฑ เฐเฐเฐชเฑเฐฏเฑเฐเฐฐเฑ เฐตเฐฟเฐเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐธเฑเฐชเฑเฐเฑโเฐฒเฑ เฐเฑเฐกเฐพ เฐฎเฑเฐเฐฆเฑเฐเฐพ เฐถเฐฟเฐเฑเฐทเฐฃ เฐชเฑเฐเฐฆเฐฟเฐจ `pipeline` เฐธเฐฟเฐฆเฑเฐงเฐเฐเฐพ เฐเฐจเฑเฐจเฐพเฐฏเฐฟ. เฐเฐฆเฐพเฐนเฐฐเฐฃเฐเฑ, เฐฎเฐจเฐ เฐเฐฟเฐคเฑเฐฐเฐเฐฒเฑ เฐเฑเฐฐเฑเฐคเฐฟเฐเฐเฐฟเฐจ เฐตเฐธเฑเฐคเฑเฐตเฑเฐฒเฐจเฑ เฐธเฑเฐฒเฐญเฐเฐเฐพ เฐธเฐเฐเฑเฐฐเฐนเฐฟเฐเฐเฐตเฐเฑเฐเฑ:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
ఇక్కడ మనం ఆబ్జెక్ట్ చుట్టూ ఉన్న బాక్స్ మరియు కాన్ఫిడెన్స్ స్కోర్‌తో చిత్రంలో గుర్తించబడిన వస్తువుల జాబితాను పొందుతాము. ఇక్కడ ఎడమవైపున ఉన్న అసలు చిత్రం, కుడివైపున అంచనాలు ప్రదర్శించబడతాయి:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
เฐฎเฑเฐฐเฑ [เฐ เฐเฑเฐฏเฑเฐเฑเฐฐเฐฟเฐฏเฐฒเฑ](https://huggingface.co/docs/transformers/task_summary)เฐฒเฑ `pipeline` API เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑเฐฐเฑเฐเฑ เฐเฑเฐธเฑ เฐเฐพเฐธเฑเฐเฑโเฐฒ เฐเฑเฐฐเฐฟเฐเฐเฐฟ เฐฎเฐฐเฐฟเฐเฐค เฐคเฑเฐฒเฑเฐธเฑเฐเฑเฐตเฐเฑเฐเฑ.
`pipeline`เฐคเฑ เฐชเฐพเฐเฑ, เฐฎเฑเฐฐเฑ เฐเฐเฑเฐเฐฟเฐจ เฐเฐพเฐธเฑเฐเฑโเฐฒเฑ เฐเฐฆเฑเฐจเฐพ เฐชเฑเฐฐเฑเฐเฑเฐฐเฑเฐจเฑเฐกเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐจเฑ เฐกเฑเฐจเฑโเฐฒเฑเฐกเฑ เฐเฑเฐฏเฐกเฐพเฐจเฐฟเฐเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ, เฐฆเฑเฐจเฐฟเฐเฐฟ เฐฎเฑเฐกเฑ เฐฒเฑเฐจเฑเฐฒ เฐเฑเฐกเฑ เฐธเฐฐเฐฟเฐชเฑเฐคเฑเฐเฐฆเฐฟ. เฐเฐเฑเฐเฐก PyTorch เฐตเฑเฐฐเฑเฐทเฐจเฑ เฐเฐเฐฆเฐฟ:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow เฐเฐฟ เฐธเฐฎเฐพเฐจเฐฎเฑเฐจ เฐเฑเฐกเฑ เฐเฐเฑเฐเฐก เฐเฐเฐฆเฐฟ:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
ప్రిట్రైన్డ్ మోడల్ ఆశించే అన్ని ప్రీప్రాసెసింగ్‌లకు టోకెనైజర్ బాధ్యత వహిస్తుంది మరియు నేరుగా ఒకే స్ట్రింగ్ (పై ఉదాహరణలలో వలె) లేదా జాబితాపై కాల్ చేయవచ్చు. ఇది మీరు డౌన్‌స్ట్రీమ్ కోడ్‌లో ఉపయోగించగల నిఘంటువుని అవుట్‌పుట్ చేస్తుంది లేదా ** ఆర్గ్యుమెంట్ అన్‌ప్యాకింగ్ ఆపరేటర్‌ని ఉపయోగించి నేరుగా మీ మోడల్‌కి పంపుతుంది.
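ఉదాహరణకు, టోకెనైజర్‌ను వాక్యాల జాబితాపై ప్యాడింగ్‌తో ఇలా ఉపయోగించవచ్చు (పై PyTorch ఉదాహరణలోని `tokenizer` మరియు `model`ని కొనసాగిస్తూ; ఇది ఒక చిన్న స్కెచ్ మాత్రమే):
```python
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)
```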
మోడల్ కూడా సాధారణ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) లేదా [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (మీ బ్యాకెండ్‌ని బట్టి) మీరు మామూలుగా ఉపయోగించవచ్చు. [ఈ ట్యుటోరియల్](https://huggingface.co/docs/transformers/training) అటువంటి మోడల్‌ని క్లాసిక్ PyTorch లేదా TensorFlow ట్రైనింగ్ లూప్‌లో ఎలా ఇంటిగ్రేట్ చేయాలో లేదా మా `Trainer` API ని ఎలా ఉపయోగించాలో వివరిస్తుంది కొత్త డేటాసెట్.
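ఉదాహరణకు, `Trainer` APIతో ఫైన్-ట్యూనింగ్ ఎలా కనిపిస్తుందో చూపే ఒక చిన్న స్కెచ్ ఇక్కడ ఉంది (🤗 Datasets లైబ్రరీ ఇన్‌స్టాల్ అయి ఉందని భావించబడింది; ఇక్కడి డేటాసెట్ పేరు మరియు సెట్టింగ్‌లు కేవలం ఉదాహరణ కోసం మాత్రమే):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# NOTE: the dataset name and hyperparameters below are only illustrative
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(output_dir="test-trainer", num_train_epochs=1)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),  # small subset for a quick demo
    tokenizer=tokenizer,
)
trainer.train()
```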
## เฐจเฑเฐจเฑ เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐเฐเฐฆเฑเฐเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐพเฐฒเฐฟ?
1. เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐกเฐพเฐจเฐฟเฐเฐฟ เฐธเฑเฐฒเฐญเฐฎเฑเฐจ เฐธเฑเฐเฑเฐเฑ เฐเฐซเฑ เฐฆเฐฟ เฐเฐฐเฑเฐเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฑ:
- సహజ భాషా అవగాహన & ఉత్పత్తి, కంప్యూటర్ దృష్టి మరియు ఆడియో పనులపై అధిక పనితీరు.
- విద్యావేత్తలు మరియు అభ్యాసకుల ప్రవేశానికి తక్కువ అవరోధం.
- เฐคเฑเฐฒเฑเฐธเฑเฐเฑเฐตเฐกเฐพเฐจเฐฟเฐเฐฟ เฐเฑเฐตเฐฒเฐ เฐฎเฑเฐกเฑ เฐคเฐฐเฐเฐคเฑเฐฒเฐคเฑ เฐเฑเฐจเฑเฐจเฐฟ เฐตเฐฟเฐจเฐฟเฐฏเฑเฐเฐฆเฐพเฐฐเฑ-เฐฎเฑเฐ เฐธเฐเฐเฑเฐฐเฐนเฐฃเฐฒเฑ.
- మా అన్ని ప్రీట్రైన్డ్ మోడల్‌లను ఉపయోగించడం కోసం ఏకీకృత API.
2. เฐคเฐเฑเฐเฑเฐต เฐเฐฃเฐจ เฐเฐฐเฑเฐเฑเฐฒเฑ, เฐเฐฟเฐจเฑเฐจ เฐเฐพเฐฐเฑเฐฌเฐจเฑ เฐชเฐพเฐฆเฐฎเฑเฐฆเฑเฐฐ:
- เฐชเฐฐเฐฟเฐถเฑเฐงเฐเฑเฐฒเฑ เฐเฐฒเฑเฐฒเฐชเฑเฐชเฑเฐกเฑ เฐฎเฐณเฑเฐฒเฑ เฐถเฐฟเฐเฑเฐทเฐฃ เฐชเฑเฐเฐฆเฑ เฐฌเฐฆเฑเฐฒเฑ เฐถเฐฟเฐเฑเฐทเฐฃ เฐชเฑเฐเฐฆเฐฟเฐจ เฐจเฐฎเฑเฐจเฐพเฐฒเฐจเฑ เฐชเฐเฐเฑเฐเฑเฐตเฐเฑเฐเฑ.
- అభ్యాసకులు గణన సమయాన్ని మరియు ఉత్పత్తి ఖర్చులను తగ్గించగలరు.
- అన్ని పద్ధతుల్లో 60,000 కంటే ఎక్కువ ప్రీట్రైన్డ్ మోడల్‌లతో డజన్ల కొద్దీ ఆర్కిటెక్చర్‌లు.
3. เฐฎเฑเฐกเฐฒเฑ เฐเฑเฐตเฐฟเฐคเฐเฐพเฐฒเฐเฐฒเฑ เฐชเฑเฐฐเฐคเฐฟ เฐญเฐพเฐเฐพเฐจเฐฟเฐเฐฟ เฐธเฐฐเฑเฐจ เฐซเฑเฐฐเฑเฐฎเฑโเฐตเฐฐเฑเฐเฑโเฐจเฑ เฐเฐเฐเฑเฐเฑเฐเฐกเฐฟ:
- 3 เฐฒเฑเฐจเฑเฐฒ เฐเฑเฐกเฑโเฐฒเฑ เฐธเฑเฐเฑเฐเฑ เฐเฐซเฑ เฐฆเฐฟ เฐเฐฐเฑเฐเฑ เฐฎเฑเฐกเฐฒเฑโเฐฒเฐเฑ เฐถเฐฟเฐเฑเฐทเฐฃ เฐเฐตเฑเฐตเฐเฐกเฐฟ.
- TF2.0/PyTorch/JAX เฐซเฑเฐฐเฑเฐฎเฑโเฐตเฐฐเฑเฐเฑโเฐฒ เฐฎเฐงเฑเฐฏ เฐเฐเฑ เฐฎเฑเฐกเฐฒเฑโเฐจเฑ เฐเฐทเฑเฐเฐพเฐจเฑเฐธเฐพเฐฐเฐเฐเฐพ เฐคเฐฐเฐฒเฐฟเฐเฐเฐเฐกเฐฟ.
- เฐถเฐฟเฐเฑเฐทเฐฃ, เฐฎเฑเฐฒเฑเฐฏเฐพเฐเฐเฐจเฐ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฐคเฑเฐชเฐคเฑเฐคเฐฟ เฐเฑเฐธเฐ เฐธเฐฐเฑเฐจ เฐซเฑเฐฐเฑเฐฎเฑโเฐตเฐฐเฑเฐเฑโเฐจเฑ เฐธเฐเฐพเฐตเฑเฐเฐพ เฐเฐเฐเฑเฐเฑเฐเฐกเฐฟ.
4. మీ అవసరాలకు అనుగుణంగా మోడల్ లేదా ఉదాహరణను సులభంగా అనుకూలీకరించండి:
- ప్రతి ఆర్కిటెక్చర్ దాని అసలు రచయితలు ప్రచురించిన ఫలితాలను పునరుత్పత్తి చేయడానికి మేము ఉదాహరణలను అందిస్తాము.
- เฐฎเฑเฐกเฐฒเฑ เฐเฐเฐเฐฐเฑเฐจเฐฒเฑโเฐฒเฑ เฐตเฑเฐฒเฑเฐจเฐเฐค เฐธเฑเฐฅเฐฟเฐฐเฐเฐเฐพ เฐฌเฐนเฐฟเฐฐเฑเฐเฐคเฐฎเฐตเฑเฐคเฐพเฐฏเฐฟ.
- เฐถเฑเฐเฑเฐฐ เฐชเฑเฐฐเฐฏเฑเฐเฐพเฐฒ เฐเฑเฐธเฐ เฐฒเฑเฐฌเฑเฐฐเฐฐเฑ เฐจเฑเฐเฐกเฐฟ เฐธเฑเฐตเฐคเฐเฐคเฑเฐฐเฐเฐเฐพ เฐฎเฑเฐกเฐฒเฑ เฐซเฑเฐฒเฑโเฐฒเฐจเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐตเฐเฑเฐเฑ.
## เฐจเฑเฐจเฑ เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐเฐเฐฆเฑเฐเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐเฑเฐกเฐฆเฑ?
- ఈ లైబ్రరీ న్యూరల్ నెట్‌ల కోసం బిల్డింగ్ బ్లాక్‌ల మాడ్యులర్ టూల్‌బాక్స్ కాదు. మోడల్ ఫైల్‌లలోని కోడ్ ఉద్దేశపూర్వకంగా అదనపు సంగ్రహణలతో రీఫ్యాక్టరింగ్ చేయబడదు, తద్వారా పరిశోధకులు అదనపు సంగ్రహణలు/ఫైళ్లలోకి ప్రవేశించకుండా ప్రతి మోడల్‌పై త్వరగా మళ్లించగలరు.
- శిక్షణ API ఏ మోడల్‌లో పని చేయడానికి ఉద్దేశించబడలేదు కానీ లైబ్రరీ అందించిన మోడల్‌లతో పని చేయడానికి ఆప్టిమైజ్ చేయబడింది. సాధారణ మెషిన్ లెర్నింగ్ లూప్‌ల కోసం, మీరు మరొక లైబ్రరీని ఉపయోగించాలి (బహుశా, [Accelerate](https://huggingface.co/docs/accelerate)).
- మేము వీలైనన్ని ఎక్కువ వినియోగ సందర్భాలను ప్రదర్శించడానికి ప్రయత్నిస్తున్నప్పుడు, మా [ఉదాహరణల ఫోల్డర్](https://github.com/huggingface/transformers/tree/main/examples)లోని స్క్రిప్ట్‌లు కేవలం: ఉదాహరణలు. మీ నిర్దిష్ట సమస్యపై అవి పని చేయవు మరియు వాటిని మీ అవసరాలకు అనుగుణంగా మార్చుకోవడానికి మీరు కొన్ని కోడ్ లైన్‌లను మార్చవలసి ఉంటుంది.
## เฐธเฐเฐธเฑเฐฅเฐพเฐชเฐจ
### เฐชเฐฟเฐชเฑ เฐคเฑ
เฐ เฐฐเฐฟเฐชเฑเฐเฐฟเฐเฐฐเฑ เฐชเฑเฐฅเฐพเฐจเฑ 3.8+, เฐซเฑเฐฒเฐพเฐเฑเฐธเฑ 0.4.1+, PyTorch 1.11+ เฐฎเฐฐเฐฟเฐฏเฑ TensorFlow 2.6+เฐฒเฑ เฐชเฐฐเฑเฐเฑเฐทเฐฟเฐเฐเฐฌเฐกเฐฟเฐเฐฆเฐฟ.
เฐฎเฑเฐฐเฑ [เฐตเฐฐเฑเฐเฑเฐตเฐฒเฑ เฐตเฐพเฐคเฐพเฐตเฐฐเฐฃเฐ](https://docs.python.org/3/library/venv.html)เฐฒเฑ ๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑ เฐเฑเฐฏเฐพเฐฒเฐฟ. เฐฎเฑเฐเฑ เฐชเฑเฐฅเฐพเฐจเฑ เฐตเฐฐเฑเฐเฑเฐตเฐฒเฑ เฐชเฐฐเฐฟเฐธเฐฐเฐพเฐฒ เฐเฑเฐฐเฐฟเฐเฐเฐฟ เฐคเฑเฐฒเฐฟเฐฏเฐเฑเฐเฐเฑ, [เฐฏเฑเฐเฐฐเฑ เฐเฑเฐกเฑ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) เฐเฑเฐกเฐเฐกเฐฟ.
เฐฎเฑเฐเฐฆเฑเฐเฐพ, เฐฎเฑเฐฐเฑ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐฌเฑเฐคเฑเฐจเฑเฐจ เฐชเฑเฐฅเฐพเฐจเฑ เฐตเฑเฐฐเฑเฐทเฐจเฑโเฐคเฑ เฐตเฐฐเฑเฐเฑเฐตเฐฒเฑ เฐตเฐพเฐคเฐพเฐตเฐฐเฐฃเฐพเฐจเฑเฐจเฐฟ เฐธเฑเฐทเฑเฐเฐฟเฐเฐเฐเฐกเฐฟ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟเฐจเฐฟ เฐธเฐเฑเฐฐเฐฟเฐฏเฐ เฐเฑเฐฏเฐเฐกเฐฟ.
అప్పుడు, మీరు ఫ్లాక్స్, పైటార్చ్ లేదా టెన్సర్‌ఫ్లోలో కనీసం ఒకదానిని ఇన్‌స్టాల్ చేయాలి.
เฐฆเฐฏเฐเฑเฐธเฐฟ [TensorFlow เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑเฐทเฐจเฑ เฐชเฑเฐเฑ](https://www.tensorflow.org/install/), [PyTorch เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑเฐทเฐจเฑ เฐชเฑเฐเฑ](https://pytorch.org/get-started/locally/#start-locally) เฐฎเฐฐเฐฟเฐฏเฑ/เฐจเฐฟ เฐเฑเฐกเฐเฐกเฐฟ เฐฒเฑเฐฆเฐพ เฐฎเฑ เฐชเฑเฐฒเฐพเฐเฑโเฐซเฐพเฐฐเฐฎเฑ เฐเฑเฐธเฐ เฐจเฐฟเฐฐเฑเฐฆเฐฟเฐทเฑเฐ เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑเฐทเฐจเฑ เฐเฐฎเฐพเฐเฐกเฑโเฐเฑ เฐธเฐเฐฌเฐเฐงเฐฟเฐเฐเฐฟ [Flax](https://github.com/google/flax#quick-install) เฐฎเฐฐเฐฟเฐฏเฑ [Jax](https://github.com/google/jax#installation) เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑเฐทเฐจเฑ เฐชเฑเฐเฑเฐฒเฑ .
เฐ เฐฌเฑเฐฏเฐพเฐเฑเฐเฐกเฑโเฐฒเฐฒเฑ เฐเฐเฐเฐฟ เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑ เฐเฑเฐฏเฐฌเฐกเฐฟเฐจเฐชเฑเฐชเฑเฐกเฑ, ๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐ เฐเฑเฐฐเฐฟเฐเฐฆเฐฟ เฐตเฐฟเฐงเฐเฐเฐพ เฐชเฐฟเฐชเฑโเฐจเฐฟ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐฟ เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑ เฐเฑเฐฏเฐตเฐเฑเฐเฑ:
```bash
pip install transformers
```
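ఇన్‌స్టాలేషన్‌ను త్వరగా తనిఖీ చేయడానికి మీరు ఈ చిన్న Python స్నిప్పెట్‌ను రన్ చేయవచ్చు (మొదటి రన్‌లో ఒక డిఫాల్ట్ మోడల్ డౌన్‌లోడ్ అవుతుంది):
```python
from transformers import pipeline

# downloads a default sentiment-analysis checkpoint on the first run
print(pipeline("sentiment-analysis")("we love you"))
```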
మీరు ఉదాహరణలతో ప్లే చేయాలనుకుంటే లేదా కోడ్ యొక్క బ్లీడింగ్ ఎడ్జ్ అవసరం మరియు కొత్త విడుదల కోసం వేచి ఉండలేకపోతే, మీరు తప్పనిసరిగా [మూలం నుండి లైబ్రరీని ఇన్‌స్టాల్ చేయాలి](https://huggingface.co/docs/transformers/installation#installing-from-source).
### เฐเฑเฐเฐกเฐพ เฐคเฑ
๐ค เฐเฐฟเฐเฐฆเฐฟ เฐตเฐฟเฐงเฐเฐเฐพ เฐเฑเฐเฐกเฐพ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐฟ เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒเฐจเฑ เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑ เฐเฑเฐฏเฐตเฐเฑเฐเฑ:
```shell script
conda install conda-forge::transformers
```
> **_เฐเฐฎเฐจเฐฟเฐ:_** `huggingface` เฐเฐพเฐจเฑเฐฒเฑ เฐจเฑเฐเฐกเฐฟ `transformers` เฐเฐจเฑโเฐธเฑเฐเฐพเฐฒเฑ เฐเฑเฐฏเฐกเฐ เฐชเฑเฐฐเฐพเฐคเฐจเฐเฐเฐพ เฐเฐเฐฆเฐฟ.
Flax, PyTorch లేదా TensorFlow యొక్క ఇన్‌స్టాలేషన్ పేజీలను కొండాతో ఎలా ఇన్‌స్టాల్ చేయాలో చూడటానికి వాటిని అనుసరించండి.
> **_เฐเฐฎเฐจเฐฟเฐ:_** Windowsเฐฒเฑ, เฐเฐพเฐทเฐฟเฐเฐเฑ เฐจเฑเฐเฐกเฐฟ เฐชเฑเฐฐเฐฏเฑเฐเฐจเฐ เฐชเฑเฐเฐฆเฑเฐเฐฆเฑเฐเฑ เฐฎเฑเฐฐเฑ เฐกเฑเฐตเฐฒเฐชเฐฐเฑ เฐฎเฑเฐกเฑโเฐจเฐฟ เฐธเฐเฑเฐฐเฐฟเฐฏเฐ เฐเฑเฐฏเฐฎเฐจเฐฟ เฐชเฑเฐฐเฐพเฐเฐชเฑเฐเฑ เฐเฑเฐฏเฐฌเฐกเฐตเฐเฑเฐเฑ. เฐเฐฆเฐฟ เฐฎเฑเฐเฑ เฐเฐเฐชเฐฟเฐ เฐเฐพเฐเฐชเฑเฐคเฑ, เฐฆเฐฏเฐเฑเฐธเฐฟ [เฐ เฐธเฐเฐเฐฟเฐ](https://github.com/huggingface/huggingface_hub/issues/1062)เฐฒเฑ เฐฎเฐพเฐเฑ เฐคเฑเฐฒเฐฟเฐฏเฐเฑเฐฏเฐเฐกเฐฟ.
## เฐฎเฑเฐกเฐฒเฑ เฐเฐฐเฑเฐเฐฟเฐเฑเฐเฑเฐเฐฐเฑเฐฒเฑ
**[అన్ని మోడల్ చెక్‌పాయింట్‌లు](https://huggingface.co/models)** 🤗 అందించిన ట్రాన్స్‌ఫార్మర్లు huggingface.co [model hub](https://huggingface.co/models) నుండి సజావుగా ఏకీకృతం చేయబడ్డాయి [users](https://huggingface.co/users) మరియు [organizations](https://huggingface.co/organizations) ద్వారా నేరుగా అప్‌లోడ్ చేయబడతాయి.
เฐชเฑเฐฐเฐธเฑเฐคเฑเฐค เฐคเฐจเฐฟเฐเฑ เฐเฑเฐเฐฆเฑเฐฐเฐพเฐฒ เฐธเฐเฐเฑเฐฏ: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
🤗 ట్రాన్స్‌ఫార్మర్లు ప్రస్తుతం కింది ఆర్కిటెక్చర్‌లను అందజేస్తున్నాయి: వాటిలో ప్రతి ఒక్కటి ఉన్నత స్థాయి సారాంశం కోసం [ఇక్కడ](https://huggingface.co/docs/transformers/model_summary) చూడండి.
ఈ అమలులు అనేక డేటాసెట్‌లలో పరీక్షించబడ్డాయి (ఉదాహరణ స్క్రిప్ట్‌లను చూడండి) మరియు అసలైన అమలుల పనితీరుతో సరిపోలాలి. మీరు [డాక్యుమెంటేషన్](https://github.com/huggingface/transformers/tree/main/examples) యొక్క ఉదాహరణల విభాగంలో పనితీరుపై మరిన్ని వివరాలను కనుగొనవచ్చు.
## เฐเฐเฐเฐพ เฐจเฑเฐฐเฑเฐเฑเฐเฑ
| เฐตเฐฟเฐญเฐพเฐเฐ | เฐตเฐฟเฐตเฐฐเฐฃ |
|-|-|
| [เฐกเฐพเฐเฑเฐฏเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ](https://huggingface.co/docs/transformers/) | เฐชเฑเฐฐเฑเฐคเฐฟ API เฐกเฐพเฐเฑเฐฏเฑเฐฎเฑเฐเฐเฑเฐทเฐจเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐเฑเฐฏเฑเฐเฑเฐฐเฐฟเฐฏเฐฒเฑเฐธเฑ |
| [เฐเฐพเฐธเฑเฐเฑ เฐธเฐพเฐฐเฐพเฐเฐถเฐ](https://huggingface.co/docs/transformers/task_summary) | ๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑโเฐฒ เฐฆเฑเฐตเฐพเฐฐเฐพ เฐธเฐชเฑเฐฐเฑเฐเฑ เฐเฑเฐฏเฐฌเฐกเฐฟเฐจ เฐตเฐฟเฐงเฑเฐฒเฑ |
| [เฐชเฑเฐฐเฑเฐชเฑเฐฐเฐพเฐธเฑเฐธเฐฟเฐเฐเฑ เฐเฑเฐฏเฑเฐเฑเฐฐเฐฟเฐฏเฐฒเฑ](https://huggingface.co/docs/transformers/preprocessing) | เฐฎเฑเฐกเฐฒเฑโเฐฒ เฐเฑเฐธเฐ เฐกเฑเฐเฐพเฐจเฑ เฐธเฐฟเฐฆเฑเฐงเฐ เฐเฑเฐฏเฐกเฐพเฐจเฐฟเฐเฐฟ `Tokenizer` เฐเฑเฐฒเฐพเฐธเฑโเฐจเฐฟ เฐเฐชเฐฏเฑเฐเฐฟเฐเฐเฐกเฐ |
| [ట్రైనింగ్ మరియు ఫైన్-ట్యూనింగ్](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlow ట్రైనింగ్ లూప్ మరియు `Trainer` APIలో 🤗 ట్రాన్స్‌ఫార్మర్లు అందించిన మోడల్‌లను ఉపయోగించడం |
| [เฐคเฑเฐตเฐฐเฐฟเฐค เฐชเฐฐเฑเฐฏเฐเฐจ: เฐซเฑเฐจเฑ-เฐเฑเฐฏเฑเฐจเฐฟเฐเฐเฑ/เฐฏเฑเฐธเฑเฐเฑ เฐธเฑเฐเฑเฐฐเฐฟเฐชเฑเฐเฑโเฐฒเฑ](https://github.com/huggingface/transformers/tree/main/examples) | เฐตเฐฟเฐธเฑเฐคเฑเฐค เฐถเฑเฐฐเฑเฐฃเฐฟ เฐเฐพเฐธเฑเฐเฑโเฐฒเฐชเฑ เฐซเฑเฐจเฑ-เฐเฑเฐฏเฑเฐจเฐฟเฐเฐเฑ เฐฎเฑเฐกเฐฒเฑเฐธเฑ เฐเฑเฐธเฐ เฐเฐฆเฐพเฐนเฐฐเฐฃ เฐธเฑเฐเฑเฐฐเฐฟเฐชเฑเฐเฑโเฐฒเฑ |
| [మోడల్ భాగస్వామ్యం మరియు అప్‌లోడ్ చేయడం](https://huggingface.co/docs/transformers/model_sharing) | కమ్యూనిటీతో మీ ఫైన్-ట్యూన్డ్ మోడల్‌లను అప్‌లోడ్ చేయండి మరియు భాగస్వామ్యం చేయండి |
## అనులేఖనం
๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐธเฑ เฐฒเฑเฐฌเฑเฐฐเฐฐเฑ เฐเฑเฐธเฐ เฐฎเฑเฐฐเฑ เฐเฐฆเฐนเฐฐเฐฟเฐเฐเฐเฐฒ [เฐชเฑเฐชเฐฐเฑ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) เฐเฐชเฑเฐชเฑเฐกเฑ เฐฎเฐพ เฐตเฐฆเฑเฐฆ เฐเฐเฐฆเฐฟ:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
mavonic_private_repos/transformers/CITATION.cff
cff-version: "1.2.0"
date-released: 2020-10
message: "If you use this software, please cite it using these metadata."
title: "Transformers: State-of-the-Art Natural Language Processing"
url: "https://github.com/huggingface/transformers"
authors:
- family-names: Wolf
given-names: Thomas
- family-names: Debut
given-names: Lysandre
- family-names: Sanh
given-names: Victor
- family-names: Chaumond
given-names: Julien
- family-names: Delangue
given-names: Clement
- family-names: Moi
given-names: Anthony
- family-names: Cistac
given-names: Perric
- family-names: Ma
given-names: Clara
- family-names: Jernite
given-names: Yacine
- family-names: Plu
given-names: Julien
- family-names: Xu
given-names: Canwen
- family-names: "Le Scao"
given-names: Teven
- family-names: Gugger
given-names: Sylvain
- family-names: Drame
given-names: Mariama
- family-names: Lhoest
given-names: Quentin
- family-names: Rush
given-names: "Alexander M."
preferred-citation:
type: conference-paper
authors:
- family-names: Wolf
given-names: Thomas
- family-names: Debut
given-names: Lysandre
- family-names: Sanh
given-names: Victor
- family-names: Chaumond
given-names: Julien
- family-names: Delangue
given-names: Clement
- family-names: Moi
given-names: Anthony
- family-names: Cistac
given-names: Perric
- family-names: Ma
given-names: Clara
- family-names: Jernite
given-names: Yacine
- family-names: Plu
given-names: Julien
- family-names: Xu
given-names: Canwen
- family-names: "Le Scao"
given-names: Teven
- family-names: Gugger
given-names: Sylvain
- family-names: Drame
given-names: Mariama
- family-names: Lhoest
given-names: Quentin
- family-names: Rush
given-names: "Alexander M."
booktitle: "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations"
month: 10
start: 38
end: 45
title: "Transformers: State-of-the-Art Natural Language Processing"
year: 2020
publisher: "Association for Computational Linguistics"
url: "https://www.aclweb.org/anthology/2020.emnlp-demos.6"
address: "Online"
mavonic_private_repos/transformers/Makefile
.PHONY: deps_table_update modified_only_fixup extra_style_checks quality style fixup fix-copies test test-examples
# make sure to test the local checkout in scripts and not the pre-installed one (don't use quotes!)
export PYTHONPATH = src
check_dirs := examples tests src utils
exclude_folders := examples/research_projects
modified_only_fixup:
$(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs)))
@if test -n "$(modified_py_files)"; then \
echo "Checking/fixing $(modified_py_files)"; \
ruff check $(modified_py_files) --fix --exclude $(exclude_folders); \
ruff format $(modified_py_files) --exclude $(exclude_folders);\
else \
echo "No library .py files were modified"; \
fi
# Update src/transformers/dependency_versions_table.py
deps_table_update:
@python setup.py deps_table_update
deps_table_check_updated:
@md5sum src/transformers/dependency_versions_table.py > md5sum.saved
@python setup.py deps_table_update
@md5sum -c --quiet md5sum.saved || (printf "\nError: the version dependency table is outdated.\nPlease run 'make fixup' or 'make style' and commit the changes.\n\n" && exit 1)
@rm md5sum.saved
# autogenerating code
autogenerate_code: deps_table_update
# Check that the repo is in a good state
repo-consistency:
python utils/check_copies.py
python utils/check_table.py
python utils/check_dummies.py
python utils/check_repo.py
python utils/check_inits.py
python utils/check_config_docstrings.py
python utils/check_config_attributes.py
python utils/check_doctest_list.py
python utils/update_metadata.py --check-only
python utils/check_docstrings.py
python utils/check_support_list.py
# this target runs checks on all files
quality:
@python -c "from transformers import *" || (echo '๐จ import failed, this means you introduced unprotected imports! ๐จ'; exit 1)
ruff check $(check_dirs) setup.py conftest.py
ruff format --check $(check_dirs) setup.py conftest.py
python utils/custom_init_isort.py --check_only
python utils/sort_auto_mappings.py --check_only
python utils/check_doc_toc.py
# Format source code automatically and check is there are any problems left that need manual fixing
extra_style_checks:
python utils/custom_init_isort.py
python utils/sort_auto_mappings.py
python utils/check_doc_toc.py --fix_and_overwrite
# this target runs checks on all files and potentially modifies some of them
style:
ruff check $(check_dirs) setup.py conftest.py --fix --exclude $(exclude_folders)
ruff format $(check_dirs) setup.py conftest.py --exclude $(exclude_folders)
${MAKE} autogenerate_code
${MAKE} extra_style_checks
# Super fast fix and check target that only works on relevant modified files since the branch was made
fixup: modified_only_fixup extra_style_checks autogenerate_code repo-consistency
# Make marked copies of snippets of codes conform to the original
fix-copies:
python utils/check_copies.py --fix_and_overwrite
python utils/check_table.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite
python utils/check_doctest_list.py --fix_and_overwrite
python utils/check_docstrings.py --fix_and_overwrite
# Run tests for the library
test:
python -m pytest -n auto --dist=loadfile -s -v ./tests/
# Run tests for examples
test-examples:
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/
# Run tests for SageMaker DLC release
test-sagemaker: # install sagemaker dependencies in advance with pip install .[sagemaker]
TEST_SAGEMAKER=True python -m pytest -n auto -s -v ./tests/sagemaker
# Release stuff
pre-release:
python utils/release.py
pre-patch:
python utils/release.py --patch
post-release:
python utils/release.py --post_release
post-patch:
python utils/release.py --post_release --patch
build-release:
rm -rf dist
rm -rf build
python setup.py bdist_wheel
python setup.py sdist
python utils/check_build.py
mavonic_private_repos/transformers/README_ru.md
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<b>ะ ัััะบะธะน</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
<p>
</h4>
<h3 align="center">
<p>ะกะพะฒัะตะผะตะฝะฝะพะต ะผะฐัะธะฝะฝะพะต ะพะฑััะตะฝะธะต ะดะปั JAX, PyTorch ะธ TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
🤗 Transformers предоставляет тысячи предварительно обученных моделей для выполнения различных задач, таких как текст, зрение и аудио.
ะญัะธ ะผะพะดะตะปะธ ะผะพะณัั ะฑััั ะฟัะธะผะตะฝะตะฝั ะบ:
* 📝 Тексту для таких задач, как классификация текстов, извлечение информации, ответы на вопросы, обобщение, перевод, генерация текстов на более чем 100 языках.
* ๐ผ๏ธ ะะทะพะฑัะฐะถะตะฝะธัะผ ะดะปั ะทะฐะดะฐั ะบะปะฐััะธัะธะบะฐัะธะธ ะธะทะพะฑัะฐะถะตะฝะธะน, ะพะฑะฝะฐััะถะตะฝะธั ะพะฑัะตะบัะพะฒ ะธ ัะตะณะผะตะฝัะฐัะธะธ.
* ๐ฃ๏ธ ะัะดะธะพ ะดะปั ะทะฐะดะฐั ัะฐัะฟะพะทะฝะฐะฒะฐะฝะธั ัะตัะธ ะธ ะบะปะฐััะธัะธะบะฐัะธะธ ะฐัะดะธะพ.
Модели transformers также могут выполнять несколько задач, такие как ответы на табличные вопросы, распознавание оптических символов, извлечение информации из отсканированных документов, классификация видео и ответы на визуальные вопросы.
🤗 Transformers предоставляет API для быстрой загрузки и использования предварительно обученных моделей, их тонкой настройки на собственных датасетах и последующего взаимодействия ими с сообществом на нашем [сайте](https://huggingface.co/models). В то же время каждый python модуль, определяющий архитектуру, полностью автономен и может быть модифицирован для проведения быстрых исследовательских экспериментов.
🤗 Transformers опирается на три самые популярные библиотеки глубокого обучения - [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) и [TensorFlow](https://www.tensorflow.org/) - и легко интегрируется между ними. Это позволяет легко обучать модели с помощью одной из них, а затем загружать их для выводов с помощью другой.
## ะะฝะปะฐะนะฝ ะดะตะผะพะฝัััะฐัะธั
Большинство наших моделей можно протестировать непосредственно на их страницах с [сайта](https://huggingface.co/models). Мы также предлагаем [приватный хостинг моделей, контроль версий и API для выводов](https://huggingface.co/pricing) для публичных и частных моделей.
ะะพั ะฝะตัะบะพะปัะบะพ ะฟัะธะผะตัะพะฒ:
ะ ะพะฑะปะฐััะธ NLP ( ะะฑัะฐะฑะพัะบะฐ ัะตะบััะพะฒ ะฝะฐ ะตััะตััะฒะตะฝะฝะพะผ ัะทัะบะต ):
- [ะะฐัะบะธัะพะฒะฐะฝะฝะพะต ะทะฐะฟะพะปะฝะตะฝะธะต ัะปะพะฒ ั ะฟะพะผะพััั BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [ะ ะฐัะฟะพะทะฝะฐะฒะฐะฝะธะต ัััะฝะพััะตะน ั ะฟะพะผะพััั Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [ะะตะฝะตัะฐัะธั ัะตะบััะฐ ั ะฟะพะผะพััั GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [ะัะฒะพะดั ะฝะฐ ะตััะตััะฒะตะฝะฝะพะผ ัะทัะบะต ั ะฟะพะผะพััั RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [ะะฑะพะฑัะตะฝะธะต ั ะฟะพะผะพััั BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [ะัะฒะตัั ะฝะฐ ะฒะพะฟัะพัั ั ะฟะพะผะพััั DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [ะะตัะตะฒะพะด ั ะฟะพะผะพััั T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
ะ ะพะฑะปะฐััะธ ะบะพะผะฟัััะตัะฝะพะณะพ ะทัะตะฝะธั:
- [ะะปะฐััะธัะธะบะฐัะธั ะธะทะพะฑัะฐะถะตะฝะธะน ั ะฟะพะผะพััั ViT](https://huggingface.co/google/vit-base-patch16-224)
- [ะะฑะฝะฐััะถะตะฝะธะต ะพะฑัะตะบัะพะฒ ั ะฟะพะผะพััั DETR](https://huggingface.co/facebook/detr-resnet-50)
- [ะกะตะผะฐะฝัะธัะตัะบะฐั ัะตะณะผะตะฝัะฐัะธั ั ะฟะพะผะพััั SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [ะกะตะณะผะตะฝัะฐัะธั ะฟะฐะฝะพะฟัะธะบัะผะฐ ั ะฟะพะผะพััั MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [ะัะตะฝะบะฐ ะณะปัะฑะธะฝั ั ะฟะพะผะพััั DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
- [ะะปะฐััะธัะธะบะฐัะธั ะฒะธะดะตะพ ั ะฟะพะผะพััั VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [ะฃะฝะธะฒะตััะฐะปัะฝะฐั ัะตะณะผะตะฝัะฐัะธั ั ะฟะพะผะพััั OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
ะ ะพะฑะปะฐััะธ ะทะฒัะบะฐ:
- [ะะฒัะพะผะฐัะธัะตัะบะพะต ัะฐัะฟะพะทะฝะฐะฒะฐะฝะธะต ัะตัะธ ั ะฟะพะผะพััั Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Поиск ключевых слов с помощью Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Классификация аудиоданных с помощью трансформера аудиоспектрограмм](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
В мультимодальных задачах:
- [ะัะฒะตัั ะฝะฐ ะฒะพะฟัะพัั ะฟะพ ัะฐะฑะปะธัะต ั ะฟะพะผะพััั TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [ะะธะทัะฐะปัะฝัะต ะพัะฒะตัั ะฝะฐ ะฒะพะฟัะพัั ั ะฟะพะผะพััั ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Zero-shot ะบะปะฐััะธัะธะบะฐัะธั ะธะทะพะฑัะฐะถะตะฝะธะน ั ะฟะพะผะพััั CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- [ะัะฒะตัั ะฝะฐ ะฒะพะฟัะพัั ะฟะพ ะดะพะบัะผะตะฝัะฐะผ ั ะฟะพะผะพััั LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Zero-shot ะบะปะฐััะธัะธะบะฐัะธั ะฒะธะดะตะพ ั ะฟะพะผะพััั X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
## 100 проектов, использующих Transformers
Transformers - это не просто набор инструментов для использования предварительно обученных моделей: это сообщество проектов, созданное на его основе, и Hugging Face Hub. Мы хотим, чтобы Transformers позволил разработчикам, исследователям, студентам, профессорам, инженерам и всем желающим создавать проекты своей мечты.
Чтобы отпраздновать 100 тысяч звезд Transformers, мы решили сделать акцент на сообществе, и создали страницу [awesome-transformers](./awesome-transformers.md), на которой перечислены 100 невероятных проектов, созданных с помощью transformers.
ะัะปะธ ะฒั ัะฒะปัะตัะตัั ะฒะปะฐะดะตะปััะตะผ ะธะปะธ ะฟะพะปัะทะพะฒะฐัะตะปะตะผ ะฟัะพะตะบัะฐ, ะบะพัะพััะน, ะฟะพ ะฒะฐัะตะผั ะผะฝะตะฝะธั, ะดะพะปะถะตะฝ ะฑััั ะฒะบะปััะตะฝ ะฒ ััะพั ัะฟะธัะพะบ, ะฟะพะถะฐะปัะนััะฐ, ะพัะบัะพะนัะต PR ะดะปั ะตะณะพ ะดะพะฑะฐะฒะปะตะฝะธั!
## Если вы хотите получить индивидуальную поддержку от команды Hugging Face
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## ะัััััะน ะณะฐะนะด
Для использования модели на заданном входе (текст, изображение, звук, ...) мы предоставляем API `pipeline`. Конвейеры объединяют предварительно обученную модель с препроцессингом, который использовался при ее обучении. Вот как можно быстро использовать конвейер для классификации положительных и отрицательных текстов:
```python
>>> from transformers import pipeline
# ะัะดะตะปะตะฝะธะต ะบะพะฝะฒะตะนะตัะฐ ะดะปั ะฐะฝะฐะปะธะทะฐ ะฝะฐัััะพะตะฝะธะน
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('ะั ะพัะตะฝั ัะฐะดั ะฟัะตะดััะฐะฒะธัั ะบะพะฝะฒะตะนะตั ะฒ transformers.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
ะัะพัะฐั ัััะพะบะฐ ะบะพะดะฐ ะทะฐะณััะถะฐะตั ะธ ะบััะธััะตั ะฟัะตะดะฒะฐัะธัะตะปัะฝะพ ะพะฑััะตะฝะฝัั ะผะพะดะตะปั, ะธัะฟะพะปัะทัะตะผัั ะบะพะฝะฒะตะนะตัะพะผ, ะฐ ััะตััั ะพัะตะฝะธะฒะฐะตั ะตะต ะฝะฐ ะทะฐะดะฐะฝะฝะพะผ ัะตะบััะต. ะะดะตัั ะพัะฒะตั "POSITIVE" ั ัะฒะตัะตะฝะฝะพัััั 99,97%.
Во многих задачах, как в НЛП, так и в компьютерном зрении и речи, уже есть готовый `pipeline`. Например, мы можем легко извлечь обнаруженные объекты на изображении:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# ะกะบะฐัะธะฒะฐะตะผ ะธะทะพะฑัะฐะถะตะฝะธะต ั ะผะธะปัะผะธ ะบะพัะธะบะฐะผะธ
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# ะัะดะตะปะตะฝะธะต ะบะพะฝะฒะตะนะตัะฐ ะดะปั ะพะฑะฝะฐััะถะตะฝะธั ะพะฑัะตะบัะพะฒ
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Здесь мы получаем список объектов, обнаруженных на изображении, с рамкой вокруг объекта и оценкой достоверности. Слева - исходное изображение, справа прогнозы:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Подробнее о задачах, поддерживаемых API `pipeline`, можно узнать в [этом учебном пособии](https://huggingface.co/docs/transformers/task_sum)
В дополнение к `pipeline`, для загрузки и использования любой из предварительно обученных моделей в заданной задаче достаточно трех строк кода. Вот версия для PyTorch:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("ะัะธะฒะตั ะผะธั!", return_tensors="pt")
>>> outputs = model(**inputs)
```
ะ ะฒะพั ัะบะฒะธะฒะฐะปะตะฝัะฝัะน ะบะพะด ะดะปั TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("ะัะธะฒะตั ะผะธั!", return_tensors="tf")
>>> outputs = model(**inputs)
```
Токенизатор отвечает за всю предварительную обработку, которую ожидает предварительно обученная модель, и может быть вызван непосредственно с помощью одной строки (как в приведенных выше примерах) или на списке. В результате будет получен словарь, который можно использовать в последующем коде или просто напрямую передать в модель с помощью оператора распаковки аргументов **.
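Например, вот небольшой набросок (продолжение примера для PyTorch выше), показывающий, как токенизатор обрабатывает сразу список предложений с паддингом:
```python
>>> batch = tokenizer(
...     ["Привет мир!", "Трансформеры - это здорово."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> outputs = model(**batch)
```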
ะกะฐะผะฐ ะผะพะดะตะปั ะฟัะตะดััะฐะฒะปัะตั ัะพะฑะพะน ะพะฑััะฝัะน [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ะธะปะธ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (ะฒ ะทะฐะฒะธัะธะผะพััะธ ะพั ะธัะฟะพะปัะทัะตะผะพะณะพ ะฑัะบะตะฝะดะฐ), ะบะพัะพััะน ะผะพะถะฝะพ ะธัะฟะพะปัะทะพะฒะฐัั ะบะฐะบ ะพะฑััะฝะพ. [ะ ััะพะผ ััะบะพะฒะพะดััะฒะต](https://huggingface.co/docs/transformers/training) ัะฐััะบะฐะทัะฒะฐะตััั, ะบะฐะบ ะธะฝัะตะณัะธัะพะฒะฐัั ัะฐะบัั ะผะพะดะตะปั ะฒ ะบะปะฐััะธัะตัะบะธะน ัะธะบะป ะพะฑััะตะฝะธั PyTorch ะธะปะธ TensorFlow, ะธะปะธ ะบะฐะบ ะธัะฟะพะปัะทะพะฒะฐัั ะฝะฐั API `Trainer` ะดะปั ะฑััััะพะน ัะพะฝะบะพะน ะฝะฐัััะพะนะบะธ ะฝะฐ ะฝะพะฒะพะผ ะดะฐัะฐัะตัะต.
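Для иллюстрации ниже приведен минимальный набросок тонкой настройки с помощью `Trainer` (предполагается, что установлена библиотека 🤗 Datasets; название датасета и гиперпараметры здесь условные):
```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

# Датасет и гиперпараметры ниже приведены только для примера
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

tokenized = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="test-trainer", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),  # небольшое подмножество для быстрого примера
    tokenizer=tokenizer,
)
trainer.train()
```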
## Почему необходимо использовать transformers?
1. ะัะพัััะต ะฒ ะธัะฟะพะปัะทะพะฒะฐะฝะธะธ ัะพะฒัะตะผะตะฝะฝัะต ะผะพะดะตะปะธ:
- Высокая производительность в задачах понимания и генерации естественного языка, компьютерного зрения и аудио.
- Низкий входной барьер для преподавателей и практиков.
- Небольшое количество абстракций для пользователя и всего три класса для изучения.
- Единый API для использования всех наших предварительно обученных моделей.
1. ะะพะปะตะต ะฝะธะทะบะธะต ะฒััะธัะปะธัะตะปัะฝัะต ะทะฐััะฐัั, ะผะตะฝััะธะน "ัะณะปะตัะพะดะฝัะน ัะปะตะด":
- Исследователи могут обмениваться обученными моделями вместо того, чтобы постоянно их переобучать.
- Практики могут сократить время вычислений и производственные затраты.
- Десятки архитектур с более чем 60 000 предварительно обученных моделей для всех модальностей.
1. Выбор подходящего фреймворка для каждого этапа жизни модели:
- Обучение самых современных моделей за 3 строки кода.
- Перемещайте одну модель между фреймворками TF2.0/PyTorch/JAX по своему усмотрению.
- Беспрепятственный выбор подходящего фреймворка для обучения, оценки и производства.
1. ะะตะณะบะพ ะฝะฐัััะพะธัั ะผะพะดะตะปั ะธะปะธ ะฟัะธะผะตั ะฟะพะด ัะฒะพะธ ะฝัะถะดั:
- Мы предоставляем примеры для каждой архитектуры, чтобы воспроизвести результаты, опубликованные их авторами.
- Внутренние компоненты модели раскрываются максимально последовательно.
- Файлы моделей можно использовать независимо от библиотеки для проведения быстрых экспериментов.
## ะะพัะตะผั ั ะฝะต ะดะพะปะถะตะฝ ะธัะฟะพะปัะทะพะฒะฐัั transformers?
- Данная библиотека не является модульным набором строительных блоков для нейронных сетей. Код в файлах моделей специально не рефакторится дополнительными абстракциями, чтобы исследователи могли быстро итеративно работать с каждой из моделей, не погружаясь в дополнительные абстракции/файлы.
- API ะพะฑััะตะฝะธั ะฝะต ะฟัะตะดะฝะฐะทะฝะฐัะตะฝ ะดะปั ัะฐะฑะพัั ั ะปัะฑะพะน ะผะพะดะตะปัั, ะฐ ะพะฟัะธะผะธะทะธัะพะฒะฐะฝ ะดะปั ัะฐะฑะพัั ั ะผะพะดะตะปัะผะธ, ะฟัะตะดะพััะฐะฒะปัะตะผัะผะธ ะฑะธะฑะปะธะพัะตะบะพะน. ะะปั ัะฐะฑะพัั ั ะพะฑัะธะผะธ ัะธะบะปะฐะผะธ ะผะฐัะธะฝะฝะพะณะพ ะพะฑััะตะฝะธั ัะปะตะดัะตั ะธัะฟะพะปัะทะพะฒะฐัั ะดััะณัั ะฑะธะฑะปะธะพัะตะบั (ะฒะพะทะผะพะถะฝะพ, [Accelerate](https://huggingface.co/docs/accelerate)).
- Несмотря на то, что мы стремимся представить как можно больше примеров использования, скрипты в нашей папке [примеров](https://github.com/huggingface/transformers/tree/main/examples) являются именно примерами. Предполагается, что они не будут работать "из коробки" для решения вашей конкретной задачи, и вам придется изменить несколько строк кода, чтобы адаптировать их под свои нужды.
## ะฃััะฐะฝะพะฒะบะฐ
### ะก ะฟะพะผะพััั pip
ะะฐะฝะฝัะน ัะตะฟะพะทะธัะพัะธะน ะฟัะพัะตััะธัะพะฒะฐะฝ ะฝะฐ Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ ะธ TensorFlow 2.6+.
ะฃััะฐะฝะฐะฒะปะธะฒะฐัั ๐ค Transformers ัะปะตะดัะตั ะฒ [ะฒะธัััะฐะปัะฝะพะน ััะตะดะต](https://docs.python.org/3/library/venv.html). ะัะปะธ ะฒั ะฝะต ะทะฝะฐะบะพะผั ั ะฒะธัััะฐะปัะฝัะผะธ ััะตะดะฐะผะธ Python, ะพะทะฝะฐะบะพะผััะตัั ั [ััะบะพะฒะพะดััะฒะพะผ ะฟะพะปัะทะพะฒะฐัะตะปั](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
ะกะฝะฐัะฐะปะฐ ัะพะทะดะฐะนัะต ะฒะธัััะฐะปัะฝัั ััะตะดั ั ัะพะน ะฒะตััะธะตะน Python, ะบะพัะพััั ะฒั ัะพะฑะธัะฐะตัะตัั ะธัะฟะพะปัะทะพะฒะฐัั, ะธ ะฐะบัะธะฒะธััะนัะต ะตะต.
Затем необходимо установить хотя бы один бекенд из Flax, PyTorch или TensorFlow.
ะะพะถะฐะปัะนััะฐ, ะพะฑัะฐัะธัะตัั ะบ ัััะฐะฝะธัะฐะผ [TensorFlow ัััะฐะฝะพะฒะพัะฝะฐั ัััะฐะฝะธัะฐ](https://www.tensorflow.org/install/), [PyTorch ัััะฐะฝะพะฒะพัะฝะฐั ัััะฐะฝะธัะฐ](https://pytorch.org/get-started/locally/#start-locally) ะธ/ะธะปะธ [Flax](https://github.com/google/flax#quick-install) ะธ [Jax](https://github.com/google/jax#installation), ะณะดะต ะพะฟะธัะฐะฝั ะบะพะผะฐะฝะดั ัััะฐะฝะพะฒะบะธ ะดะปั ะฒะฐัะตะน ะฟะปะฐััะพัะผั.
После установки одного из этих бэкендов 🤗 Transformers может быть установлен с помощью pip следующим образом:
```bash
pip install transformers
```
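После этого установку можно быстро проверить небольшим Python-скриптом (при первом запуске будет загружена модель по умолчанию):
```python
from transformers import pipeline

# при первом запуске скачивается контрольная точка по умолчанию для sentiment-analysis
print(pipeline("sentiment-analysis")("we love you"))
```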
Если вы хотите поиграть с примерами или вам нужен самый современный код и вы не можете ждать нового релиза, вы должны [установить библиотеку из исходного кода](https://huggingface.co/docs/transformers/installation#installing-from-source).
### ะก ะฟะพะผะพััั conda
ะฃััะฐะฝะพะฒะธัั Transformers ั ะฟะพะผะพััั conda ะผะพะถะฝะพ ัะปะตะดัััะธะผ ะพะฑัะฐะทะพะผ:
```bash
conda install conda-forge::transformers
```
> **_ะะะะะขะะ:_** ะฃััะฐะฝะพะฒะบะฐ `transformers` ัะตัะตะท ะบะฐะฝะฐะป `huggingface` ัััะฐัะตะปะฐ.
О том, как установить Flax, PyTorch или TensorFlow с помощью conda, читайте на страницах, посвященных их установке.
> **_ะะะะะขะะ:_** ะ ะพะฟะตัะฐัะธะพะฝะฝะพะน ัะธััะตะผะต Windows ะฒะฐะผ ะผะพะถะตั ะฑััั ะฟัะตะดะปะพะถะตะฝะพ ะฐะบัะธะฒะธัะพะฒะฐัั ัะตะถะธะผ ัะฐะทัะฐะฑะพััะธะบะฐ, ััะพะฑั ะฒะพัะฟะพะปัะทะพะฒะฐัััั ะฟัะตะธะผััะตััะฒะฐะผะธ ะบััะธัะพะฒะฐะฝะธั. ะัะปะธ ะดะปั ะฒะฐั ััะพ ะฝะตะฒะพะทะผะพะถะฝะพ, ัะพะพะฑัะธัะต ะฝะฐะผ ะพะฑ ััะพะผ [ะทะดะตัั](https://github.com/huggingface/huggingface_hub/issues/1062).
## Модельные архитектуры
**[ะัะต ะบะพะฝััะพะปัะฝัะต ัะพัะบะธ ะผะพะดะตะปะตะน](https://huggingface.co/models)**, ะฟัะตะดะพััะฐะฒะปัะตะผัะต ๐ค Transformers, ะฑะตัะฟัะตะฟััััะฒะตะฝะฝะพ ะธะฝัะตะณัะธัััััั ั huggingface.co [model hub](https://huggingface.co/models), ะบัะดะฐ ะพะฝะธ ะทะฐะณััะถะฐัััั ะฝะตะฟะพััะตะดััะฒะตะฝะฝะพ [ะฟะพะปัะทะพะฒะฐัะตะปัะผะธ](https://huggingface.co/users) ะธ [ะพัะณะฐะฝะธะทะฐัะธัะผะธ](https://huggingface.co/organizations).
Текущее количество контрольных точек: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
🤗 В настоящее время Transformers предоставляет следующие архитектуры: подробное описание каждой из них см. [здесь](https://huggingface.co/docs/transformers/model_summary).
ะงัะพะฑั ะฟัะพะฒะตัะธัั, ะตััั ะปะธ ั ะบะฐะถะดะพะน ะผะพะดะตะปะธ ัะตะฐะปะธะทะฐัะธั ะฝะฐ Flax, PyTorch ะธะปะธ TensorFlow, ะธะปะธ ัะฒัะทะฐะฝะฝัะน ั ะฝะตะน ัะพะบะตะฝะธะทะฐัะพั, ะฟะพะดะดะตัะถะธะฒะฐะตะผัะน ะฑะธะฑะปะธะพัะตะบะพะน ๐ค Tokenizers, ะพะฑัะฐัะธัะตัั ะบ [ััะพะน ัะฐะฑะปะธัะต](https://huggingface.co/docs/transformers/index#supported-frameworks).
Эти реализации были протестированы на нескольких наборах данных (см. примеры скриптов) и должны соответствовать производительности оригинальных реализаций. Более подробную информацию о производительности можно найти в разделе "Примеры" [документации](https://github.com/huggingface/transformers/tree/main/examples).
## ะะทััะธ ะฑะพะปััะต
| ะกะตะบัะธั | ะะฟะธัะฐะฝะธะต |
|-|-|
| [ะะพะบัะผะตะฝัะฐัะธั](https://huggingface.co/docs/transformers/) | ะะพะปะฝะฐั ะดะพะบัะผะตะฝัะฐัะธั ะฟะพ API ะธ ะณะฐะนะดั |
| [ะัะฐัะบะธะต ะพะฟะธัะฐะฝะธั ะทะฐะดะฐั](https://huggingface.co/docs/transformers/task_summary) | ะะฐะดะฐัะธ ะฟะพะดะดะตัะถะธะฒะฐัััั ๐ค Transformers |
| [Пособие по предварительной обработке](https://huggingface.co/docs/transformers/preprocessing) | Использование класса `Tokenizer` для подготовки данных для моделей |
| [Обучение и доработка](https://huggingface.co/docs/transformers/training) | Использование моделей, предоставляемых 🤗 Transformers, в цикле обучения PyTorch/TensorFlow и API `Trainer`. |
| [ะัััััะน ััั: ะขะพะฝะบะฐั ะฝะฐัััะพะนะบะฐ/ัะบัะธะฟัั ะธัะฟะพะปัะทะพะฒะฐะฝะธั](https://github.com/huggingface/transformers/tree/main/examples) | ะัะธะผะตัั ัะบัะธะฟัะพะฒ ะดะปั ัะพะฝะบะพะน ะฝะฐัััะพะนะบะธ ะผะพะดะตะปะตะน ะฝะฐ ัะธัะพะบะพะผ ัะฟะตะบััะต ะทะฐะดะฐั |
| [ะกะพะฒะผะตััะฝะพะต ะธัะฟะพะปัะทะพะฒะฐะฝะธะต ะธ ะทะฐะณััะทะบะฐ ะผะพะดะตะปะตะน](https://huggingface.co/docs/transformers/model_sharing) | ะะฐะณััะถะฐะนัะต ะธ ะดะตะปะธัะตัั ั ัะพะพะฑัะตััะฒะพะผ ัะฒะพะธะผะธ ะดะพัะฐะฑะพัะฐะฝะฝัะผะธ ะผะพะดะตะปัะผะธ |
## ะฆะธัะธัะพะฒะฐะฝะธะต
ะขะตะฟะตัั ั ะฝะฐั ะตััั [ััะฐััั](https://www.aclweb.org/anthology/2020.emnlp-demos.6/), ะบะพัะพััั ะผะพะถะฝะพ ัะธัะธัะพะฒะฐัั ะดะปั ะฑะธะฑะปะธะพัะตะบะธ ๐ค Transformers:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_zh-hant.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!---
A useful guide for English-Traditional Chinese translation of Hugging Face documentation
- Add space around English words and numbers when they appear between Chinese characters. E.g., ๅ
ฑ 100 ๅค็จฎ่ช่จ; ไฝฟ็จ transformers ๅฝๅผๅบซใ
- Use square quotes, e.g.,ใๅผ็จใ
- Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
Dictionary
API: API (ไธ็ฟป่ญฏ๏ผ
add: ๅ ๅ
ฅ
checkpoint: ๆชขๆฅ้ป
code: ็จๅผ็ขผ
community: ็คพ็พค
confidence: ไฟก่ณดๅบฆ
dataset: ่ณๆ้
documentation: ๆไปถ
example: ๅบๆฌ็ฟป่ญฏ็บใ็ฏไพใ๏ผๆไพ่ชๆ็ฟป็บใไพๅญใ
finetune: ๅพฎ่ชฟ
Hugging Face: Hugging Face๏ผไธ็ฟป่ญฏ๏ผ
implementation: ๅฏฆไฝ
inference: ๆจ่ซ
library: ๅฝๅผๅบซ
module: ๆจก็ต
NLP/Natural Language Processing: ไปฅ NLP ๅบ็พๆไธ็ฟป่ญฏ๏ผไปฅ Natural Language Processing ๅบ็พๆ็ฟป่ญฏ็บ่ช็ถ่ช่จ่็
online demos: ็ทไธDemo
pipeline: pipeline๏ผไธ็ฟป่ญฏ๏ผ
pretrained/pretrain: ้ ่จ็ทด
Python data structures (e.g., list, set, dict): ็ฟป่ญฏ็บไธฒๅ๏ผ้ๅ๏ผๅญๅ
ธ๏ผไธฆ็จๆฌ่ๆจ่จปๅ่ฑๆ
repository: repository๏ผไธ็ฟป่ญฏ๏ผ
summary: ๆฆ่ฆฝ
token-: token-๏ผไธ็ฟป่ญฏ๏ผ
Trainer: Trainer๏ผไธ็ฟป่ญฏ๏ผ
transformer: transformer๏ผไธ็ฟป่ญฏ๏ผ
tutorial: ๆๅญธ
user: ไฝฟ็จ่
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<b>็น้ซไธญๆ</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>็บ JaxใPyTorch ไปฅๅ TensorFlow ๆ้ ็ๅ
้ฒ่ช็ถ่ช่จ่็ๅฝๅผๅบซ</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers ๆไพไบๆธไปฅๅ่จ็้ ่จ็ทดๆจกๅ๏ผๆฏๆด 100 ๅค็จฎ่ช่จ็ๆๆฌๅ้กใ่ณ่จๆทๅใๅ็ญใๆ่ฆใ็ฟป่ญฏใๆๆฌ็ๆใๅฎ็ๅฎๆจๆฏ่ฎๆๅ
้ฒ็ NLP ๆ่กไบบไบบๆ็จใ
๐ค Transformers ๆไพไบไพฟๆผๅฟซ้ไธ่ผๅไฝฟ็จ็API๏ผ่ฎไฝ ๅฏไปฅๅฐ้ ่จ็ทดๆจกๅ็จๅจ็ตฆๅฎๆๆฌใๅจไฝ ็่ณๆ้ไธๅพฎ่ชฟ็ถๅพ็ถ็ฑ [model hub](https://huggingface.co/models) ่็คพ็พคๅ
ฑไบซใๅๆ๏ผๆฏๅๅฎ็พฉ็ Python ๆจก็ตๆถๆงๅๅฎๅ
จ็จ็ซ๏ผๆนไพฟไฟฎๆนๅๅฟซ้็ ็ฉถๅฏฆ้ฉใ
๐ค Transformers ๆฏๆดไธๅๆ็ฑ้็ๆทฑๅบฆๅญธ็ฟๅฝๅผๅบซ๏ผ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) ไปฅๅ [TensorFlow](https://www.tensorflow.org/) โ ไธฆ่ไนๅฎ็พๆดๅใไฝ ๅฏไปฅ็ดๆฅไฝฟ็จๅ
ถไธญไธๅๆกๆถ่จ็ทดไฝ ็ๆจกๅ๏ผ็ถๅพ็จๅฆไธๅ่ผๅ
ฅๅๆจ่ซใ
## ็ทไธDemo
ไฝ ๅฏไปฅ็ดๆฅๅจ [model hub](https://huggingface.co/models) ไธๆธฌ่ฉฆๅคงๅคๆธ็ๆจกๅใๆๅไนๆไพไบ [็งๆๆจกๅ่จ็ฎกใๆจกๅ็ๆฌ็ฎก็ไปฅๅๆจ่ซAPI](https://huggingface.co/pricing)ใ
้่ฃกๆฏไธไบ็ฏไพ๏ผ
- [็จ BERT ๅ้ฎ่ๅกซ่ฉ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [็จ Electra ๅๅฐๆๅ่ฉ่พจ่ญ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [็จ GPT-2 ๅๆๆฌ็ๆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [็จ RoBERTa ๅ่ช็ถ่ช่จๆจ่ซ](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [็จ BART ๅๆๆฌๆ่ฆ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [็จ DistilBERT ๅๅ็ญ](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [็จ T5 ๅ็ฟป่ญฏ](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
**[Write With Transformer](https://transformer.huggingface.co)**๏ผ็ฑ Hugging Face ๅ้ๆๆ้ ๏ผๆฏไธๅๆๆฌ็ๆ็ๅฎๆน demoใ
## ๅฆๆไฝ ๅจๅฐๆพ็ฑ Hugging Face ๅ้ๆๆไพ็ๅฎข่ฃฝๅๆฏๆดๆๅ
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## ๅฟซ้ไธๆ
ๆๅ็บๅฟซ้ไฝฟ็จๆจกๅๆไพไบ `pipeline` APIใ Pipeline ๅ
ๅซไบ้ ่จ็ทดๆจกๅๅๅฐๆ็ๆๆฌ้ ่็ใไธ้ขๆฏไธๅๅฟซ้ไฝฟ็จ pipeline ๅปๅคๆทๆญฃ่ฒ ้ขๆ
็ท็ไพๅญ๏ผ
```python
>>> from transformers import pipeline
# ไฝฟ็จๆ
็ทๅๆ pipeline
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
็ฌฌไบ่ก็จๅผ็ขผไธ่ผไธฆๅฟซๅ pipeline ไฝฟ็จ็้ ่จ็ทดๆจกๅ๏ผ่็ฌฌไธ่ก็จๅผ็ขผๅๅจ็ตฆๅฎ็ๆๆฌไธ้ฒ่กไบ่ฉไผฐใ้่ฃก็็ญๆกโๆญฃ้ขโ (positive) ๅ
ทๆ 99.97% ็ไฟก่ณดๅบฆใ
่จฑๅค็ NLP ไปปๅ้ฝๆ้จ้ธๅณ็จ็้ ่จ็ทด `pipeline`ใไพๅฆ๏ผๆๅๅฏไปฅ่ผ้ฌๅฐๅพ็ตฆๅฎๆๆฌไธญๆทๅๅ้ก็ญๆก๏ผ
``` python
>>> from transformers import pipeline
# ไฝฟ็จๅ็ญ pipeline
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
... 'question': 'What is the name of the repository ?',
... 'context': 'Pipeline has been included in the huggingface/transformers repository'
... })
{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
```
้คไบๆไพๅ้ก่งฃ็ญ๏ผ้ ่จ็ทดๆจกๅ้ๆไพไบๅฐๆ็ไฟก่ณดๅบฆๅๆธไปฅๅ่งฃ็ญๅจ tokenized ๅพ็ๆๆฌไธญ้ๅงๅ็ตๆ็ไฝ็ฝฎใไฝ ๅฏไปฅๅพ[้ๅๆๅญธ](https://huggingface.co/docs/transformers/task_summary)ไบ่งฃๆดๅค `pipeline` APIๆฏๆด็ไปปๅใ
่ฆๅจไฝ ็ไปปๅไธญไธ่ผๅไฝฟ็จไปปไฝ้ ่จ็ทดๆจกๅๅพ็ฐกๅฎ๏ผๅช้ไธ่ก็จๅผ็ขผใ้่ฃกๆฏ PyTorch ็็็ฏไพ๏ผ
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
้่ฃกๆฏๅฐๆ็ TensorFlow ็จๅผ็ขผ๏ผ
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
Tokenizer ็บๆๆ็้ ่จ็ทดๆจกๅๆไพไบ้ ่็๏ผไธฆๅฏไปฅ็ดๆฅ่ฝๆๅฎไธๅญไธฒ๏ผๆฏๅฆไธ้ข็ไพๅญ๏ผๆไธฒๅ (list)ใๅฎๆ่ผธๅบไธๅ็ๅญๅ
ธ (dict) ่ฎไฝ ๅฏไปฅๅจไธๆธธ็จๅผ็ขผ่ฃกไฝฟ็จๆ็ดๆฅ่็ฑ `**` ้็ฎๅผๅณ็ตฆๆจกๅใ
ๆจกๅๆฌ่บซๆฏไธๅๅธธ่ฆ็ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ๆ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผๅๆฑบๆผไฝ ็ๅพ็ซฏ๏ผ๏ผๅฏไพๅธธ่ฆๆนๅผไฝฟ็จใ [้ๅๆๅญธ](https://huggingface.co/transformers/training.html)่งฃ้ไบๅฆไฝๅฐ้ๆจฃ็ๆจกๅๆดๅๅฐไธ่ฌ็ PyTorch ๆ TensorFlow ่จ็ทด่ฟดๅไธญ๏ผๆๆฏๅฆไฝไฝฟ็จๆๅ็ `Trainer` API ๅจไธๅๆฐ็่ณๆ้ไธๅฟซ้้ฒ่กๅพฎ่ชฟใ
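As a rough, hedged sketch of the `Trainer` workflow mentioned above (the "imdb" dataset, the 1% training slice and the hyperparameters below are illustrative assumptions, not something prescribed by this README), fine-tuning might look like this:

```python
# Minimal fine-tuning sketch with the Trainer API.
# The dataset, slice size and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

# Tokenize a small slice of the dataset to keep the sketch fast.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"), batched=True)

training_args = TrainingArguments(output_dir="./results", num_train_epochs=1, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```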
## ็บไป้บผ่ฆ็จ transformers๏ผ
1. ไพฟๆผไฝฟ็จ็ๅ
้ฒๆจกๅ๏ผ
- NLU ๅ NLG ไธๆง่ฝๅ่ถ
- ๅฐๆๅญธๅๅฏฆไฝๅๅฅฝไธไฝ้ๆชป
    - ้ซๅบฆๆฝ่ฑก๏ผไฝฟ็จ่
ๅช้ ๅญธ็ฟ 3 ๅ้กๅฅ
- ๅฐๆๆๆจกๅไฝฟ็จ็ๅถๅผๅAPI
1. ๆดไฝ็้็ฎๆๆฌ๏ผๆดๅฐ็็ขณๆๆพ๏ผ
- ็ ็ฉถไบบๅกๅฏไปฅๅไบซๅทฒ่จ็ทด็ๆจกๅ่้ๆฏๆฌกๅพ้ ญ้ๅง่จ็ทด
- ๅทฅ็จๅธซๅฏไปฅๆธๅฐ่จ็ฎๆ้ไปฅๅ็็ขๆๆฌ
    - ๆธๅ็จฎๆจกๅๆถๆงใๅ
ฉๅๅคๅ้ ่จ็ทดๆจกๅใ100ๅค็จฎ่ช่จๆฏๆด
1. ๅฐๆผๆจกๅ็ๅฝ้ฑๆ็ๆฏไธๅ้จๅ้ฝ้ข้ขไฟฑๅฐ๏ผ
    - ่จ็ทดๅ
้ฒ็ๆจกๅ๏ผๅช้ 3 ่ก็จๅผ็ขผ
- ๆจกๅๅฏไปฅๅจไธๅๆทฑๅบฆๅญธ็ฟๆกๆถไน้ไปปๆ่ฝๆ
- ็บ่จ็ทดใ่ฉไผฐๅ็็ข้ธๆๆ้ฉๅ็ๆกๆถ๏ผไธฆๅฎ็พ้ๆฅ
1. ็บไฝ ็้ๆฑ่ผ้ฌๅฎข่ฃฝๅๅฐๅฑฌๆจกๅๅ็ฏไพ๏ผ
- ๆๅ็บๆฏ็จฎๆจกๅๆถๆงๆไพไบๅคๅ็ฏไพไพ้็พๅ่ซๆ็ตๆ
    - ไธ่ด็ๆจกๅๅ
ง้จๆถๆง
- ๆจกๅๆชๆกๅฏๅฎ็จไฝฟ็จ๏ผไพฟๆผไฟฎๆนๅๅฟซ้ๅฏฆ้ฉ
## ไป้บผๆ
ๆณไธๆไธ่ฉฒ็จ transformers๏ผ
- ๆฌๅฝๅผๅบซไธฆไธๆฏๆจก็ตๅ็็ฅ็ถ็ถฒ็ตกๅทฅๅ
ท็ฎฑใๆจกๅๆไปถไธญ็็จๅผ็ขผไธฆๆชๅ้กๅค็ๆฝ่ฑกๅฐ่ฃ๏ผไปฅไพฟ็ ็ฉถไบบๅกๅฟซ้ๅฐ็ฟป้ฑๅไฟฎๆน็จๅผ็ขผ๏ผ่ไธๆๆทฑ้ท่ค้็้กๅฅๅ
่ฃไนไธญใ
- `Trainer` API ไธฆ้็ธๅฎนไปปไฝๆจกๅ๏ผๅฎๅช็บๆฌๅฝๅผๅบซไธญ็ๆจกๅๆไฝณๅใๅฐๆผไธ่ฌ็ๆฉๅจๅญธ็ฟ็จ้๏ผ่ซไฝฟ็จๅ
ถไปๅฝๅผๅบซใ
- ๅ็ฎกๆๅๅทฒ็กๅ่็บ๏ผ[examples ็ฎ้](https://github.com/huggingface/transformers/tree/main/examples)ไธญ็่
ณๆฌไนๅ
็บ็ฏไพ่ๅทฒใๅฐๆผ็นๅฎๅ้ก๏ผๅฎๅไธฆไธไธๅฎ้จ้ธๅณ็จ๏ผๅฏ่ฝ้่ฆไฟฎๆนๅนพ่ก็จๅผ็ขผไปฅ็ฌฆๅ้ๆฑใ
## ๅฎ่ฃ
### ไฝฟ็จ pip
้ๅ Repository ๅทฒๅจ Python 3.8+ใFlax 0.4.1+ใPyTorch 1.11+ ๅ TensorFlow 2.6+ ไธ็ถ้ๆธฌ่ฉฆใ
ไฝ ๅฏไปฅๅจ[่ๆฌ็ฐๅข](https://docs.python.org/3/library/venv.html)ไธญๅฎ่ฃ ๐ค Transformersใๅฆๆไฝ ้ไธ็ๆ Python ็่ๆฌ็ฐๅข๏ผ่ซ้ฑๆญค[ไฝฟ็จ่
ๆๅผ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ
้ฆๅ
๏ผ็จไฝ ๆ็ฎไฝฟ็จ็็ๆฌ็ Python ๅตๅปบไธๅ่ๆฌ็ฐๅขไธฆ้ฒๅ
ฅใ
็ถๅพ๏ผไฝ ้่ฆๅฎ่ฃ FlaxใPyTorch ๆ TensorFlow ๅ
ถไธญไนไธใๅฐๆผ่ฉฒๅฆไฝๅจไฝ ไฝฟ็จ็ๅนณๅฐไธๅฎ่ฃ้ไบๆกๆถ๏ผ่ซๅ้ฑ [TensorFlow ๅฎ่ฃ้ ้ข](https://www.tensorflow.org/install/), [PyTorch ๅฎ่ฃ้ ้ข](https://pytorch.org/get-started/locally/#start-locally) ๆ [Flax ๅฎ่ฃ้ ้ข](https://github.com/google/flax#quick-install)ใ
็ถๅ
ถไธญไธๅๅพ็ซฏๅฎ่ฃๆๅๅพ๏ผ๐ค Transformers ๅฏไพๆญคๅฎ่ฃ๏ผ
```bash
pip install transformers
```
ๅฆๆไฝ ๆณ่ฆ่ฉฆ่ฉฆ็ฏไพๆ่
ๆณๅจๆญฃๅผ็ผๅธๅไฝฟ็จๆๆฐ้็ผไธญ็็จๅผ็ขผ๏ผไฝ ๅฟ
้ [ๅพๅๅง็ขผๅฎ่ฃ](https://huggingface.co/docs/transformers/installation#installing-from-source)ใ
### ไฝฟ็จ conda
๐ค Transformers ๅฏไปฅ่็ฑ conda ไพๆญคๅฎ่ฃ๏ผ
```shell script
conda install conda-forge::transformers
```
> **_็ญ่จ:_** ๅพ `huggingface` ้ ป้ๅฎ่ฃ `transformers` ๅทฒ่ขซๆทๆฑฐใ
่ฆ่็ฑ conda ๅฎ่ฃ FlaxใPyTorch ๆ TensorFlow ๅ
ถไธญไนไธ๏ผ่ซๅ้ฑๅฎๅๅ่ชๅฎ่ฃ้ ้ข็่ชชๆใ
## ๆจกๅๆถๆง
**๐ค Transformers ๆฏๆด็[ๆๆ็ๆจกๅๆชขๆฅ้ป](https://huggingface.co/models)**๏ผ็ฑ[ไฝฟ็จ่
](https://huggingface.co/users)ๅ[็ต็น](https://huggingface.co/organizations)ไธๅณ๏ผๅ่ huggingface.co [model hub](https://huggingface.co) ๅฎ็พ็ตๅใ
็ฎๅ็ๆชขๆฅ้ปๆธ้๏ผ ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers ็ฎๅๆฏๆดไปฅไธ็ๆถๆง: ๆจกๅๆฆ่ฆฝ่ซๅ้ฑ[้่ฃก](https://huggingface.co/docs/transformers/model_summary).
่ฆๆชขๆฅๆๅๆจกๅๆฏๅฆๅทฒๆ FlaxใPyTorch ๆ TensorFlow ็ๅฏฆไฝ๏ผๆๅ
ถๆฏๅฆๅจ๐ค Tokenizers ๅฝๅผๅบซไธญๆๅฐๆ็ tokenizer๏ผๆฌ่ซๅ้ฑ[ๆญค่กจ](https://huggingface.co/docs/transformers/index#supported-frameworks)ใ
้ไบๅฏฆไฝๅๅทฒๆผๅคๅ่ณๆ้ๆธฌ่ฉฆ๏ผ่ซๅ้ฑ็ฏไพ่
ณๆฌ๏ผไธฆๆ่ๅ็ๅฏฆไฝ่กจ็พ็ธ็ถใไฝ ๅฏไปฅๅจ็ฏไพๆไปถ็[ๆญค็ฏ](https://huggingface.co/docs/transformers/examples)ไธญไบ่งฃๅฏฆไฝ็็ดฐ็ฏใ
## ไบ่งฃๆดๅค
| ็ซ ็ฏ | ๆ่ฟฐ |
|-|-|
| [ๆไปถ](https://huggingface.co/transformers/) | ๅฎๆด็ API ๆไปถๅๆๅญธ |
| [ไปปๅๆฆ่ฆฝ](https://huggingface.co/docs/transformers/task_summary) | ๐ค Transformers ๆฏๆด็ไปปๅ |
| [้ ่็ๆๅญธ](https://huggingface.co/docs/transformers/preprocessing) | ไฝฟ็จ `Tokenizer` ไพ็บๆจกๅๆบๅ่ณๆ |
| [่จ็ทดๅๅพฎ่ชฟ](https://huggingface.co/docs/transformers/training) | ไฝฟ็จ PyTorch/TensorFlow ็ๅ
งๅปบ็่จ็ทดๆนๅผๆๆผ `Trainer` API ไธญไฝฟ็จ ๐ค Transformers ๆไพ็ๆจกๅ |
| [ๅฟซ้ไธๆ๏ผๅพฎ่ชฟๅ็ฏไพ่
ณๆฌ](https://github.com/huggingface/transformers/tree/main/examples) | ็บๅ็จฎไปปๅๆไพ็็ฏไพ่
ณๆฌ |
| [ๆจกๅๅไบซๅไธๅณ](https://huggingface.co/docs/transformers/model_sharing) | ไธๅณไธฆ่็คพ็พคๅไบซไฝ ๅพฎ่ชฟ็ๆจกๅ |
| [้ท็งป](https://huggingface.co/docs/transformers/migration) | ๅพ `pytorch-transformers` ๆ `pytorch-pretrained-bert` ้ท็งปๅฐ ๐ค Transformers |
## ๅผ็จ
ๆๅๅทฒๅฐๆญคๅฝๅผๅบซ็[่ซๆ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ๆญฃๅผ็ผ่กจใๅฆๆไฝ ไฝฟ็จไบ ๐ค Transformers ๅฝๅผๅบซ๏ผๅฏไปฅๅผ็จ๏ผ
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/ISSUES.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# How To Request Support
This is an Open Source Project so please be mindful that like in any other project of this kind there is no obligation to answer all requests for help.
However, we want to encourage you to ask for help whenever you think it's needed! We are happy about every question we get because it allows us to better understand your needs, possible misunderstandings, and most importantly a way for you to help us make this library better. That being said, this document's main purpose is to provide guidelines at how you can formulate your requests to increase your chances to be understood and to get support.
There are two main venues to receive support: [the forums](https://discuss.huggingface.co/) and [the GitHub issues](https://github.com/huggingface/transformers/issues).
## The Forums
[The user forums](https://discuss.huggingface.co/) are supported by the wide community of the library users and backed up by developers when needed.
If you have difficulty deploying this library, have questions, or would like to discuss a new feature, please first consider discussing those things at the forums. Only when you feel your subject matter has been crystallized and you still need support from the library developers should you proceed to file an [issue](https://github.com/huggingface/transformers/issues).
In particular all "Please explain" questions or objectively very user-specific feature requests belong to the forums. Here are some example of such questions:
* "I would like to use a BertModel within a RL-Agent for a customer support service. How can I use a BertForMaskedLM in my ChatBotModel?"
* "Could you please explain why T5 has no positional embedding matrix under T5Model?"
* "How should I set my generation parameters for translation?"
* "How to train T5 on De->En translation?"
## The GitHub Issues
Everything which hints at a bug should be opened as an [issue](https://github.com/huggingface/transformers/issues).
You are not required to read the following guidelines before opening an issue. However, if you notice that your issue doesn't get any replies, chances are that the developers have one or several difficulties with its quality. In this case, reading the following points and adjusting your issue accordingly could help.
1. Before posting an issue, first search for already posted issues, since chances are someone has already asked a similar question before you.
If you use Google your search query should be:
```
"huggingface" "transformers" your query
```
The first two quoted words tell Google to limit the search to the context of the Huggingface Transformers. The remainder is your query - most commonly this would be the error message the software fails with. We will go deeper into details shortly.
The results of such a query will typically match GitHub issues, Hugging Face forums, StackExchange, and blogs.
If you find relevant hints, you may choose to continue the discussion there if you have follow up questions.
If what you found is similar but doesn't quite answer your problem, please, post a new issue and do include links to similar issues or forum discussions you may have found.
Let's look at some examples:
The error message, often referred to as an assertion, tells us what went wrong. Here is an example of an assertion:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/transformers/src/transformers/__init__.py", line 34, in <module>
from . import dependency_versions_check
File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module>
from .utils import is_tokenizers_available
File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module>
from tqdm.auto import tqdm
ModuleNotFoundError: No module named 'tqdm.auto'
```
and it typically includes a traceback, so that we can see the full stack of calls the program made before it fails. This gives us the context to know why the program failed.
Going back to the above example: if you received this error, look at the very last line of the error, which is:
```python
ModuleNotFoundError: No module named 'tqdm.auto'
```
And now we can use it to do the searching on your favorite search engine:
1. first for `"huggingface" "transformers" "ModuleNotFoundError: No module named 'tqdm.auto'"`
2. if you don't find relevant results, then search for just `"ModuleNotFoundError: No module named 'tqdm.auto'"`
3. and finally if nothing still comes up, then remove the outside quotes: `ModuleNotFoundError: No module named 'tqdm.auto'`
If the error includes any messages that include bits unique to your filesystem, always remove those in the search query since other users will not have the same filesystem as yours. For example:
```bash
python -c 'open("/tmp/wrong_path.txt", "r")'
Traceback (most recent call last):
File "<string>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/wrong_path.txt'
```
Here you'd search for just: `"FileNotFoundError: [Errno 2] No such file or directory"`
If the local information that you removed were inside the error message and you removed them you may need to remove double quotes since your query is no longer exact. So if the error message was something like:
```bash
ValueError: '/tmp/wrong_path.txt' cannot be found
```
then you'd search for `"ValueError" "cannot be found"`
As you search you will notice that when you don't use quotes often the search engines will return a variety of unrelated hits, which may or may not be what you want.
Experiment with different ways and find which approach gives the most satisfactory results.
2. Keep the issue short, providing the information that you think will aid the developers in understanding your situation. Put yourself in the shoes of someone who has never seen your code and knows nothing about your custom setup. This mental exercise will help you develop an intuition for what to share and what not to share.
3. If there is a software failure, always provide the full traceback, for example:
```python
$ python -c 'import transformers'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/transformers/src/transformers/__init__.py", line 34, in <module>
from . import dependency_versions_check
File "/transformers/src/transformers/dependency_versions_check.py", line 34, in <module>
from .utils import is_tokenizers_available
File "/transformers/src/transformers/utils/import_utils.py", line 40, in <module>
from tqdm.auto import tqdm
ModuleNotFoundError: No module named 'tqdm.auto'
```
As compared to providing just the last line of the error message, e.g.:
```python
ModuleNotFoundError: No module named 'tqdm.auto'
```
which is not sufficient.
If your application is running on more than one GPU (e.g. under `DistributedDataParallel`) and typically getting every log and traceback printed multiple times, please make sure that you paste only one copy of it. At times the traceback from parallel processes may get interleaved - so either disentangle these or change the loggers to log only for `local_rank==0` so that only one process logs things.
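   One possible way to do that (a sketch that assumes you use the standard `logging` module and the `LOCAL_RANK` environment variable set by `torchrun`) is:

   ```python
   import logging
   import os

   # Log at INFO level only on the process with LOCAL_RANK == 0; other ranks stay quiet.
   local_rank = int(os.environ.get("LOCAL_RANK", "0"))
   logging.basicConfig(level=logging.INFO if local_rank == 0 else logging.ERROR)
   ```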
4. When quoting a traceback, command line instructions and any type of code always enclose it in triple backticks inside the editor window, that is:
````
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
````
If it's a command line with a long argument list, please consider breaking it down using backslashes and new lines. Here is an example of a good command line quote:
```bash
cd examples/seq2seq
torchrun --nproc_per_node=2 ./finetune_trainer.py \
--model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --data_dir wmt_en_ro \
--output_dir output_dir --overwrite_output_dir \
--do_train --n_train 500 --num_train_epochs 1 \
--per_device_train_batch_size 1 --freeze_embeds \
--src_lang en_XX --tgt_lang ro_RO --task translation \
--fp16
```
If you don't break it up, one has to scroll horizontally which often makes it quite difficult to quickly see what's happening.
The backslashes allow us to copy the command directly into the console to run it, without needing to edit it.
5. Include only the important information that you think will help the developer to quickly identify the problem.
For example applications often create huge amounts of logs. Ask yourself whether providing all or parts of the log is useful.
Pasting a 100-1000 lines of log into the issue is an immediate turn off, since it will take a lot of time to figure out where the pertinent parts of the log are.
Attaching a full log can be helpful if it's done as an attachment, if it's enclosed in the following html code in the comment editor window:
```
<details>
<summary>Full log</summary>
<pre>
many
lines
go
here
</pre>
</details>
```
which would result in the following entry, which can be opened if desired, but otherwise takes little space.
<details>
<summary>Full log</summary>
<pre>
many
lines
go
here
</pre>
</details>
You could also provide a link to a pastebin service, but this is less beneficial since those links tend to expire quickly and future readers of your issue might not be able to access that log file anymore and may lack some context.
6. If this is an issue in your code, do try to reduce that code to a minimal example that still demonstrates the problem. Please ask at the forums if you have a hard time figuring how to do that. Please realize that we don't have the luxury of having time to try and understand all of your custom code.
If you really tried to make a short reproducible code but couldn't figure it out, it might be that having a traceback will give the developer enough information to know what's going on. But if it is not enough and we can't reproduce the problem, we can't really solve it.
Do not despair if you can't figure it out from the beginning, just share what you can and perhaps someone else will be able to help you at the forums.
If your setup involves any custom datasets, the best way to help us reproduce the problem is to create a [Google Colab notebook](https://colab.research.google.com/) that demonstrates the issue and once you verify that the issue still exists, include a link to that notebook in the Issue. Just make sure that you don't copy and paste the location bar url of the open notebook - as this is private and we won't be able to open it. Instead, you need to click on `Share` in the right upper corner of the notebook, select `Get Link` and then copy and paste the public link it will give to you.
7. If you forked off some of this project's code or example applications, please, do not ask us to go into your code repository and figure out what you may have done. The code is already very complex and unless there is an easy way to do a diff and it's a small diff, it won't be possible to find someone with time on their hands to make a lengthy investigation. Albeit, you might find someone at the forums who will be generous to do this for you.
8. Before reporting an issue, first, always try to update your environment to the latest official version of this library. We have no resources to go and debug older revisions, which could easily have bugs that have been fixed in the latest released version.
We understand that this is not always possible, especially when APIs change, in which case file an issue against the highest library version your environment can support.
Of course, if you upgrade the library, always retest that the problem is still there.
9. Please do not ask us to reproduce an issue with your custom data, since we don't have it. So, either use some existing dataset supported by HF datasets, or supply code that generates a small sample on the fly (see the sketch at the end of this item), or use some other quick and simple way to get it.
Please do not send us any non-public domain data that may require a license or a permission to be used.
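   For instance, a rough sketch of generating a small sample on the fly with the `datasets` library could look like this (the column names and values are made up for illustration):

   ```python
   from datasets import Dataset

   # A tiny in-memory dataset that mimics the shape of the real (private) data.
   sample = Dataset.from_dict(
       {
           "text": ["a short example sentence", "another short example"],
           "label": [0, 1],
       }
   )
   print(sample[0])
   ```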
10. Do not tag multiple developers on the issue unless you know this is expected, either because you asked them and they gave you an explicit permission to tag them or the issue template instructs you to do so.
The "who to tag for what domain" part of the issue template is there to help users direct their questions to the right developers who are designated maintainers of project's specific domains. They can then decide at their own discretion to tag other developers if they feel it'd help move the issue forward.
We currently don't have a triage service and we trust your capacity to identify the right domain and thus the persons to tag in your issue. If you are not sure, please use the forums to ask for guidance.
When in doubt, err on the side of not tagging a given person. If you tag multiple people out of context or permission don't be surprised if you get no response at all. Please remember that every time you tag someone, they get a notification and you're taking their time without their permission. Please be sensitive to that.
If you got helped by one of the developers in the past please don't tag them in future issues, unless they are listed in the issue template for the domain you are asking about or that developer gave you an explicit permission to tag them in future issues.
If you see a certain developer doing multiple and/or recent commits into a specific area of the project that you feel is relevant to your issue, it is not a good reason to tag them. Various developers may be fixing things that prevent them from moving forward, but often their work is focused on a totally different domain. And while they may or may not know how to help you with the problem at hand, it would benefit the whole community much more if they focus on the domain of their unique expertise.
11. Use the Edit button. Take your time, and re-read and improve the wording and formatting to make your posts and comments as easy to understand as possible.
Avoid posting multiple comments in a row, as each comment generates a notification for the developers tagged in that issue. If you happened to post multiple comments in a row, and nobody followed up yet - consider merging those into one or a few comments while editing the combined content to be coherent.
If you choose to edit your older comments after others posted follow up comments you need to be aware that your modifications might not be noticed, so if it's not a typo fixing, try to write a new comment flagging that something has been changed in the previous comments.
For example, the very first comment is the most important one. If while the thread unfolds you realize that things aren't as they seemed to you originally you may want to edit the first post to reflect the up-to-date understanding of the issue at hand so that it helps those who read your issue in the future quickly understand what's going on and not need to sift through dozens of comments. It also helps to indicate that the post was edited. So, those reading the thread later can understand why there might be certain discontinuity in the information flow.
Use bullets and items if you have lists of items and the outcome improves overall readability.
Use backticks to refer to class and function names, e.g. `BartModel` and `generate` as these stand out and improve the speed of a reader's comprehension.
Try not to use italics and bold text too much, as these often make the text more difficult to read.
12. If you are cross-referencing a specific comment in a given thread or another issue, always link to that specific comment, rather than using the issue link. If you do the latter it could be quite impossible to find which specific comment you're referring to.
To get the link to the specific comment do not copy the url from the location bar of your browser, but instead, click the `...` icon in the upper right corner of the comment and then select "Copy Link".
For example the first link is a link to an issue, and the second to a specific comment in the same issue:
1. https://github.com/huggingface/transformers/issues/9257
2. https://github.com/huggingface/transformers/issues/9257#issuecomment-749945162
13. If you are replying to a last comment, it's totally fine to make your reply with just your comment in it. The readers can follow the information flow here.
But if you're replying to a comment that happened some comments back, it's always a good practice to quote just the relevant lines you're replying to. The `>` is used for quoting, or you can always use the menu to do so. For example, your editor box will look like:
```
> How big is your gpu cluster?
Our cluster is made of 256 gpus.
```
If you are addressing multiple comments, quote the relevant parts of each before your answer. Some people use the same comment to do multiple replies, others separate them into separate comments. Either way works. The latter approach helps for linking to a specific comment.
In general the best way to figure out what works the best is learn from issues posted by other people - see which issues get great responses and which get little to no response - observe what the posters who received great responses did differently from those who did not.
Thank you for reading this somewhat lengthy document. We would like to conclude that these are not absolute rules, but a friendly advice that will help maximize the chances for us to understand what you are trying to communicate, reproduce the problem then resolve it to your satisfaction and the benefit of the whole community.
If after reading this document there are remaining questions on how and why or there is a need for further elucidation, please, don't hesitate to ask your question in [this thread](https://discuss.huggingface.co/t/how-to-request-support/3128).
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/hubconf.py | # Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
# Make the repository's `src/transformers` package importable when this file is loaded via torch.hub.
SRC_DIR = os.path.join(os.path.dirname(__file__), "src")
sys.path.append(SRC_DIR)
from transformers import (
AutoConfig,
AutoModel,
AutoModelForCausalLM,
AutoModelForMaskedLM,
AutoModelForQuestionAnswering,
AutoModelForSequenceClassification,
AutoTokenizer,
add_start_docstrings,
)
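# torch.hub checks that these packages are importable before exposing the entry points below.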
dependencies = ["torch", "numpy", "tokenizers", "filelock", "requests", "tqdm", "regex", "sentencepiece", "sacremoses", "importlib_metadata", "huggingface_hub"]
@add_start_docstrings(AutoConfig.__doc__)
def config(*args, **kwargs):
r"""
# Using torch.hub !
import torch
config = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased') # Download configuration from huggingface.co and cache.
config = torch.hub.load('huggingface/transformers', 'config', './test/bert_saved_model/') # E.g. config (or model) was saved using `save_pretrained('./test/saved_model/')`
config = torch.hub.load('huggingface/transformers', 'config', './test/bert_saved_model/my_configuration.json')
config = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased', output_attentions=True, foo=False)
assert config.output_attentions == True
config, unused_kwargs = torch.hub.load('huggingface/transformers', 'config', 'google-bert/bert-base-uncased', output_attentions=True, foo=False, return_unused_kwargs=True)
assert config.output_attentions == True
assert unused_kwargs == {'foo': False}
"""
return AutoConfig.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoTokenizer.__doc__)
def tokenizer(*args, **kwargs):
r"""
# Using torch.hub !
import torch
tokenizer = torch.hub.load('huggingface/transformers', 'tokenizer', 'google-bert/bert-base-uncased') # Download vocabulary from huggingface.co and cache.
tokenizer = torch.hub.load('huggingface/transformers', 'tokenizer', './test/bert_saved_model/') # E.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`
"""
return AutoTokenizer.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoModel.__doc__)
def model(*args, **kwargs):
r"""
# Using torch.hub !
import torch
model = torch.hub.load('huggingface/transformers', 'model', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache.
model = torch.hub.load('huggingface/transformers', 'model', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = torch.hub.load('huggingface/transformers', 'model', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading
assert model.config.output_attentions == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
model = torch.hub.load('huggingface/transformers', 'model', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
return AutoModel.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoModelForCausalLM.__doc__)
def modelForCausalLM(*args, **kwargs):
r"""
# Using torch.hub !
import torch
model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', 'openai-community/gpt2') # Download model and configuration from huggingface.co and cache.
model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', './test/saved_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', 'openai-community/gpt2', output_attentions=True) # Update configuration during loading
assert model.config.output_attentions == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = AutoConfig.from_pretrained('./tf_model/gpt_tf_model_config.json')
model = torch.hub.load('huggingface/transformers', 'modelForCausalLM', './tf_model/gpt_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
return AutoModelForCausalLM.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoModelForMaskedLM.__doc__)
def modelForMaskedLM(*args, **kwargs):
r"""
# Using torch.hub !
import torch
model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache.
model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading
assert model.config.output_attentions == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
model = torch.hub.load('huggingface/transformers', 'modelForMaskedLM', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
return AutoModelForMaskedLM.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoModelForSequenceClassification.__doc__)
def modelForSequenceClassification(*args, **kwargs):
r"""
# Using torch.hub !
import torch
model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache.
model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading
assert model.config.output_attentions == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
model = torch.hub.load('huggingface/transformers', 'modelForSequenceClassification', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
return AutoModelForSequenceClassification.from_pretrained(*args, **kwargs)
@add_start_docstrings(AutoModelForQuestionAnswering.__doc__)
def modelForQuestionAnswering(*args, **kwargs):
r"""
# Using torch.hub !
import torch
model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', 'google-bert/bert-base-uncased') # Download model and configuration from huggingface.co and cache.
model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', './test/bert_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', 'google-bert/bert-base-uncased', output_attentions=True) # Update configuration during loading
assert model.config.output_attentions == True
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
model = torch.hub.load('huggingface/transformers', 'modelForQuestionAnswering', './tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
"""
return AutoModelForQuestionAnswering.from_pretrained(*args, **kwargs)
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_zh-hans.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!---
A useful guide for English-Chinese translation of Hugging Face documentation
- Add space around English words and numbers when they appear between Chinese characters. E.g., ๅ
ฑ 100 ๅค็ง่ฏญ่จ; ไฝฟ็จ transformers ๅบใ
- Use square quotes, e.g.,ใๅผ็จใ
Dictionary
Hugging Face: ๆฑๆฑ่ธ
token: ่ฏ็ฌฆ๏ผๅนถ็จๆฌๅทๆ ๆณจๅ่ฑๆ๏ผ
tokenize: ่ฏ็ฌฆๅ๏ผๅนถ็จๆฌๅทๆ ๆณจๅ่ฑๆ๏ผ
tokenizer: ่ฏ็ฌฆๅๅจ๏ผๅนถ็จๆฌๅทๆ ๆณจๅ่ฑๆ๏ผ
transformer: transformer๏ผไธ็ฟป่ฏ๏ผ
pipeline: ๆตๆฐด็บฟ
API: API (ไธ็ฟป่ฏ๏ผ
inference: ๆจ็
Trainer: ่ฎญ็ปๅจใๅฝไฝไธบ็ฑปๅๅบ็ฐๆถไธ็ฟป่ฏใ
pretrained/pretrain: ้ข่ฎญ็ป
finetune: ๅพฎ่ฐ
community: ็คพๅบ
example: ๅฝ็นๆไปๅบไธญ example ็ฎๅฝๆถ็ฟป่ฏไธบใ็จไพใ
Python data structures (e.g., list, set, dict): ็ฟป่ฏไธบๅ่กจ๏ผ้ๅ๏ผ่ฏๅ
ธ๏ผๅนถ็จๆฌๅทๆ ๆณจๅ่ฑๆ
NLP/Natural Language Processing: ไปฅ NLP ๅบ็ฐๆถไธ็ฟป่ฏ๏ผไปฅ Natural Language Processing ๅบ็ฐๆถ็ฟป่ฏไธบ่ช็ถ่ฏญ่จๅค็
checkpoint: ๆฃๆฅ็น
-->
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
<br>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<b>็ฎไฝไธญๆ</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>ไธบ JaxใPyTorch ๅ TensorFlow ๆ้ ็ๅ
่ฟ็่ช็ถ่ฏญ่จๅค็</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers ๆไพไบๆฐไปฅๅ่ฎก็้ข่ฎญ็ปๆจกๅ๏ผๆฏๆ 100 ๅค็ง่ฏญ่จ็ๆๆฌๅ็ฑปใไฟกๆฏๆฝๅใ้ฎ็ญใๆ่ฆใ็ฟป่ฏใๆๆฌ็ๆใๅฎ็ๅฎๆจๆฏ่ฎฉๆๅ
่ฟ็ NLP ๆๆฏไบบไบบๆ็จใ
๐ค Transformers ๆไพไบไพฟไบๅฟซ้ไธ่ฝฝๅไฝฟ็จ็API๏ผ่ฎฉไฝ ๅฏไปฅๆ้ข่ฎญ็ปๆจกๅ็จๅจ็ปๅฎๆๆฌใๅจไฝ ็ๆฐๆฎ้ไธๅพฎ่ฐ็ถๅ้่ฟ [model hub](https://huggingface.co/models) ไธ็คพๅบๅ
ฑไบซใๅๆถ๏ผๆฏไธชๅฎไน็ Python ๆจกๅๅๅฎๅ
จ็ฌ็ซ๏ผๆนไพฟไฟฎๆนๅๅฟซ้็ ็ฉถๅฎ้ชใ
๐ค Transformers ๆฏๆไธไธชๆ็ญ้จ็ๆทฑๅบฆๅญฆไน ๅบ๏ผ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) ไปฅๅ [TensorFlow](https://www.tensorflow.org/) โ ๅนถไธไนๆ ็ผๆดๅใไฝ ๅฏไปฅ็ดๆฅไฝฟ็จไธไธชๆกๆถ่ฎญ็ปไฝ ็ๆจกๅ็ถๅ็จๅฆไธไธชๅ ่ฝฝๅๆจ็ใ
## ๅจ็บฟๆผ็คบ
ไฝ ๅฏไปฅ็ดๆฅๅจๆจกๅ้กต้ขไธๆต่ฏๅคงๅคๆฐ [model hub](https://huggingface.co/models) ไธ็ๆจกๅใ ๆไปฌไนๆไพไบ [็งๆๆจกๅๆ็ฎกใๆจกๅ็ๆฌ็ฎก็ไปฅๅๆจ็API](https://huggingface.co/pricing)ใ
่ฟ้ๆฏไธไบไพๅญ๏ผ
- [็จ BERT ๅๆฉ็ ๅกซ่ฏ](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [็จ Electra ๅๅฝๅๅฎไฝ่ฏๅซ](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [็จ GPT-2 ๅๆๆฌ็ๆ](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [็จ RoBERTa ๅ่ช็ถ่ฏญ่จๆจ็](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [็จ BART ๅๆๆฌๆ่ฆ](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [็จ DistilBERT ๅ้ฎ็ญ](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [็จ T5 ๅ็ฟป่ฏ](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
**[Write With Transformer](https://transformer.huggingface.co)**๏ผ็ฑๆฑๆฑ่ธๅข้ๆ้ ๏ผๆฏไธไธชๆๆฌ็ๆ็ๅฎๆน demoใ
## ๅฆๆไฝ ๅจๅฏปๆพ็ฑๆฑๆฑ่ธๅข้ๆไพ็ๅฎๅถๅๆฏๆๆๅก
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## ๅฟซ้ไธๆ
ๆไปฌไธบๅฟซ้ไฝฟ็จๆจกๅๆไพไบ `pipeline` ๏ผๆตๆฐด็บฟ๏ผAPIใๆตๆฐด็บฟ่ๅไบ้ข่ฎญ็ปๆจกๅๅๅฏนๅบ็ๆๆฌ้ขๅค็ใไธ้ขๆฏไธไธชๅฟซ้ไฝฟ็จๆตๆฐด็บฟๅปๅคๆญๆญฃ่ด้ขๆ
็ปช็ไพๅญ๏ผ
```python
>>> from transformers import pipeline
# ไฝฟ็จๆ
็ปชๅๆๆตๆฐด็บฟ
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
็ฌฌไบ่กไปฃ็ ไธ่ฝฝๅนถ็ผๅญไบๆตๆฐด็บฟไฝฟ็จ็้ข่ฎญ็ปๆจกๅ๏ผ่็ฌฌไธ่กไปฃ็ ๅๅจ็ปๅฎ็ๆๆฌไธ่ฟ่กไบ่ฏไผฐใ่ฟ้็็ญๆกโๆญฃ้ขโ (positive) ๅ
ทๆ 99.97% ็็ฝฎไฟกๅบฆใ
่ฎธๅค็ NLP ไปปๅก้ฝๆๅผ็ฎฑๅณ็จ็้ข่ฎญ็ปๆตๆฐด็บฟใๆฏๅฆ่ฏด๏ผๆไปฌๅฏไปฅ่ฝปๆพ็ไป็ปๅฎๆๆฌไธญๆฝๅ้ฎ้ข็ญๆก๏ผ
``` python
>>> from transformers import pipeline
# ไฝฟ็จ้ฎ็ญๆตๆฐด็บฟ
>>> question_answerer = pipeline('question-answering')
>>> question_answerer({
... 'question': 'What is the name of the repository ?',
... 'context': 'Pipeline has been included in the huggingface/transformers repository'
... })
{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
```
้คไบ็ปๅบ็ญๆก๏ผ้ข่ฎญ็ปๆจกๅ่ฟ็ปๅบไบๅฏนๅบ็็ฝฎไฟกๅบฆๅๆฐใ็ญๆกๅจ่ฏ็ฌฆๅ (tokenized) ๅ็ๆๆฌไธญๅผๅงๅ็ปๆ็ไฝ็ฝฎใไฝ ๅฏไปฅไป[่ฟไธชๆ็จ](https://huggingface.co/docs/transformers/task_summary)ไบ่งฃๆดๅคๆตๆฐด็บฟAPIๆฏๆ็ไปปๅกใ
่ฆๅจไฝ ็ไปปๅกไธไธ่ฝฝๅไฝฟ็จไปปๆ้ข่ฎญ็ปๆจกๅไนๅพ็ฎๅ๏ผๅช้ไธ่กไปฃ็ ใ่ฟ้ๆฏ PyTorch ็็็คบไพ๏ผ
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
่ฟ้ๆฏ็ญๆ็ TensorFlow ไปฃ็ ๏ผ
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
่ฏ็ฌฆๅๅจ (tokenizer) ไธบๆๆ็้ข่ฎญ็ปๆจกๅๆไพไบ้ขๅค็๏ผๅนถๅฏไปฅ็ดๆฅๅฏนๅไธชๅญ็ฌฆไธฒ่ฟ่ก่ฐ็จ๏ผๆฏๅฆไธ้ข็ไพๅญ๏ผๆๅฏนๅ่กจ (list) ่ฐ็จใๅฎไผ่พๅบไธไธชไฝ ๅฏไปฅๅจไธๆธธไปฃ็ ้ไฝฟ็จๆ็ดๆฅ้่ฟ `**` ่งฃๅ
่กจ่พพๅผไผ ็ปๆจกๅ็่ฏๅ
ธ (dict)ใ
ๆจกๅๆฌ่บซๆฏไธไธชๅธธ่ง็ [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ๆ [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๏ผๅๅณไบไฝ ็ๅ็ซฏ๏ผ๏ผๅฏไปฅๅธธ่งๆนๅผไฝฟ็จใ [่ฟไธชๆ็จ](https://huggingface.co/transformers/training.html)่งฃ้ไบๅฆไฝๅฐ่ฟๆ ท็ๆจกๅๆดๅๅฐ็ปๅ
ธ็ PyTorch ๆ TensorFlow ่ฎญ็ปๅพช็ฏไธญ๏ผๆๆฏๅฆไฝไฝฟ็จๆไปฌ็ `Trainer` ่ฎญ็ปๅจ๏ผAPI ๆฅๅจไธไธชๆฐ็ๆฐๆฎ้ไธๅฟซ้ๅพฎ่ฐใ
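As a hedged illustration of the plain PyTorch training-loop integration described above (the optimizer choice, learning rate and the two toy sentences are assumptions made for this sketch, not part of the original text):

```python
# Minimal sketch of one optimization step with a Transformers model in a plain PyTorch loop.
# The optimizer, learning rate and toy batch below are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

texts = ["I love this!", "This is terrible."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

model.train()
outputs = model(**batch, labels=labels)  # the model returns a loss when labels are passed
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```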
## ไธบไปไน่ฆ็จ transformers๏ผ
1. ไพฟไบไฝฟ็จ็ๅ
่ฟๆจกๅ๏ผ
- NLU ๅ NLG ไธ่กจ็ฐไผ่ถ
- ๅฏนๆๅญฆๅๅฎ่ทตๅๅฅฝไธไฝ้จๆง
- ้ซ็บงๆฝ่ฑก๏ผๅช้ไบ่งฃไธไธช็ฑป
- ๅฏนๆๆๆจกๅ็ปไธ็API
1. ๆดไฝ่ฎก็ฎๅผ้๏ผๆดๅฐ็็ขณๆๆพ๏ผ
- ็ ็ฉถไบบๅๅฏไปฅๅไบซๅทฒ่ฎญ็ป็ๆจกๅ่้ๆฏๆฌกไปๅคดๅผๅง่ฎญ็ป
- ๅทฅ็จๅธๅฏไปฅๅๅฐ่ฎก็ฎ็จๆถๅ็ไบง็ฏๅขๅผ้
- ๆฐๅ็งๆจกๅๆถๆใไธคๅๅคไธช้ข่ฎญ็ปๆจกๅใ100ๅค็ง่ฏญ่จๆฏๆ
1. ๅฏนไบๆจกๅ็ๅฝๅจๆ็ๆฏไธไธช้จๅ้ฝ้ข้ขไฟฑๅฐ๏ผ
    - ่ฎญ็ปๅ
่ฟ็ๆจกๅ๏ผๅช้ 3 ่กไปฃ็ 
- ๆจกๅๅจไธๅๆทฑๅบฆๅญฆไน ๆกๆถ้ดไปปๆ่ฝฌ็งป๏ผ้ไฝ ๅฟๆ
- ไธบ่ฎญ็ปใ่ฏไผฐๅ็ไบง้ๆฉๆ้ๅ็ๆกๆถ๏ผ่กๆฅๆ ็ผ
1. ไธบไฝ ็้ๆฑ่ฝปๆพๅฎๅถไธๅฑๆจกๅๅ็จไพ๏ผ
- ๆไปฌไธบๆฏ็งๆจกๅๆถๆๆไพไบๅคไธช็จไพๆฅๅค็ฐๅ่ฎบๆ็ปๆ
    - ๆจกๅๅ
้จ็ปๆไฟๆ้ๆไธ่ด
- ๆจกๅๆไปถๅฏๅ็ฌไฝฟ็จ๏ผๆนไพฟ้ญๆนๅๅฟซ้ๅฎ้ช
## ไปไนๆ
ๅตไธๆไธ่ฏฅ็จ transformers๏ผ
- ๆฌๅบๅนถไธๆฏๆจกๅๅ็็ฅ็ป็ฝ็ปๅทฅๅ
ท็ฎฑใๆจกๅๆไปถไธญ็ไปฃ็ ็นๆๅ่ฅ็็๏ผๆช็ป้ขๅคๆฝ่ฑกๅฐ่ฃ
๏ผไปฅไพฟ็ ็ฉถไบบๅๅฟซ้่ฟญไปฃ้ญๆน่ไธ่ดๆบบไบๆฝ่ฑกๅๆไปถ่ทณ่ฝฌไนไธญใ
- `Trainer` API ๅนถ้ๅ
ผๅฎนไปปไฝๆจกๅ๏ผๅชไธบๆฌๅบไนๆจกๅไผๅใ่ฅๆฏๅจๅฏปๆพ้็จไบ้็จๆบๅจๅญฆไน ็่ฎญ็ปๅพช็ฏๅฎ็ฐ๏ผ่ฏทๅฆ่งๅ
ถไปๅบใ
- ๅฐฝ็ฎกๆไปฌๅทฒๅฐฝๅ่ไธบ๏ผ[examples ็ฎๅฝ](https://github.com/huggingface/transformers/tree/main/examples)ไธญ็่ๆฌไนไป
ไธบ็จไพ่ๅทฒใๅฏนไบไฝ ็็นๅฎ้ฎ้ข๏ผๅฎไปฌๅนถไธไธๅฎๅผ็ฎฑๅณ็จ๏ผๅฏ่ฝ้่ฆๆนๅ ่กไปฃ็ ไปฅ้ไนใ
## ๅฎ่ฃ

### ไฝฟ็จ pip
่ฟไธชไปๅบๅทฒๅจ Python 3.8+ใFlax 0.4.1+ใPyTorch 1.11+ ๅ TensorFlow 2.6+ ไธ็ป่ฟๆต่ฏใ
ไฝ ๅฏไปฅๅจ[่ๆ็ฏๅข](https://docs.python.org/3/library/venv.html)ไธญๅฎ่ฃ
 ๐ค Transformersใๅฆๆไฝ ่ฟไธ็ๆ Python ็่ๆ็ฏๅข๏ผ่ฏท้
ๆญค[็จๆท่ฏดๆ](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)ใ
้ฆๅ
๏ผ็จไฝ ๆ็ฎไฝฟ็จ็็ๆฌ็ Python ๅๅปบไธไธช่ๆ็ฏๅขๅนถๆฟๆดปใ
็ถๅ๏ผไฝ ้่ฆๅฎ่ฃ
 FlaxใPyTorch ๆ TensorFlow ๅ
ถไธญไนไธใๅ
ณไบๅจไฝ ไฝฟ็จ็ๅนณๅฐไธๅฎ่ฃ
่ฟไบๆกๆถ๏ผ่ฏทๅ้
 [TensorFlow ๅฎ่ฃ
้กต](https://www.tensorflow.org/install/), [PyTorch ๅฎ่ฃ
้กต](https://pytorch.org/get-started/locally/#start-locally) ๆ [Flax ๅฎ่ฃ
้กต](https://github.com/google/flax#quick-install)ใ
ๅฝ่ฟไบๅ็ซฏไนไธๅฎ่ฃ
ๆๅๅ๏ผ ๐ค Transformers ๅฏไพๆญคๅฎ่ฃ
๏ผ
```bash
pip install transformers
```
ๅฆๆไฝ ๆณ่ฆ่ฏ่ฏ็จไพๆ่
ๆณๅจๆญฃๅผๅๅธๅไฝฟ็จๆๆฐ็ๅผๅไธญไปฃ็ ๏ผไฝ ๅพ[ไปๆบไปฃ็ ๅฎ่ฃ
](https://huggingface.co/docs/transformers/installation#installing-from-source)ใ
### ไฝฟ็จ conda
๐ค Transformers ๅฏไปฅ้่ฟ conda ไพๆญคๅฎ่ฃ
๏ผ
```shell script
conda install conda-forge::transformers
```
> **_็ฌ่ฎฐ:_** ไป `huggingface` ๆธ ้ๅฎ่ฃ
 `transformers` ๅทฒ่ขซๅบๅผใ
要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一，请参阅它们各自安装页的说明。
## ๆจกๅๆถๆ
๐ค Transformers ๆฏๆ็[**ๆๆ็ๆจกๅๆฃๆฅ็น**](https://huggingface.co/models)็ฑ[็จๆท](https://huggingface.co/users)ๅ[็ป็ป](https://huggingface.co/organizations)ไธไผ ๏ผๅไธ huggingface.co [model hub](https://huggingface.co) ๆ ็ผๆดๅใ
็ฎๅ็ๆฃๆฅ็นๆฐ้๏ผ ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
🤗 Transformers 目前支持如下的架构: 模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary).
要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现，或其是否在 🤗 Tokenizers 库中有对应词符化器（tokenizer），敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。
่ฟไบๅฎ็ฐๅๅทฒไบๅคไธชๆฐๆฎ้ๆต่ฏ๏ผ่ฏทๅ็็จไพ่ๆฌ๏ผๅนถๅบไบๅ็ๅฎ็ฐ่กจ็ฐ็ธๅฝใไฝ ๅฏไปฅๅจ็จไพๆๆกฃ็[ๆญค่](https://huggingface.co/docs/transformers/examples)ไธญไบ่งฃ่กจ็ฐ็็ป่ใ
## ไบ่งฃๆดๅค
| ็ซ ่ | ๆ่ฟฐ |
|-|-|
| [ๆๆกฃ](https://huggingface.co/docs/transformers/) | ๅฎๆด็ API ๆๆกฃๅๆ็จ |
| [ไปปๅกๆป็ป](https://huggingface.co/docs/transformers/task_summary) | ๐ค Transformers ๆฏๆ็ไปปๅก |
| [้ขๅค็ๆ็จ](https://huggingface.co/docs/transformers/preprocessing) | ไฝฟ็จ `Tokenizer` ๆฅไธบๆจกๅๅๅคๆฐๆฎ |
| [่ฎญ็ปๅๅพฎ่ฐ](https://huggingface.co/docs/transformers/training) | ๅจ PyTorch/TensorFlow ็่ฎญ็ปๅพช็ฏๆ `Trainer` API ไธญไฝฟ็จ ๐ค Transformers ๆไพ็ๆจกๅ |
| [ๅฟซ้ไธๆ๏ผๅพฎ่ฐๅ็จไพ่ๆฌ](https://github.com/huggingface/transformers/tree/main/examples) | ไธบๅ็งไปปๅกๆไพ็็จไพ่ๆฌ |
| [ๆจกๅๅไบซๅไธไผ ](https://huggingface.co/docs/transformers/model_sharing) | ๅ็คพๅบไธไผ ๅๅไบซไฝ ๅพฎ่ฐ็ๆจกๅ |
| [่ฟ็งป](https://huggingface.co/docs/transformers/migration) | ไป `pytorch-transformers` ๆ `pytorch-pretrained-bert` ่ฟ็งปๅฐ ๐ค Transformers |
## ๅผ็จ
ๆไปฌๅทฒๅฐๆญคๅบ็[่ฎบๆ](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)ๆญฃๅผๅ่กจ๏ผๅฆๆไฝ ไฝฟ็จไบ ๐ค Transformers ๅบ๏ผ่ฏทๅผ็จ:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
mavonic_private_repos/transformers/pyproject.toml

[tool.ruff]
line-length = 119
[tool.ruff.lint]
# Never enforce `E501` (line length violations).
ignore = ["C901", "E501", "E741", "F402", "F823" ]
select = ["C", "E", "F", "I", "W"]
# Ignore import violations in all `__init__.py` files.
[tool.ruff.lint.per-file-ignores]
"__init__.py" = ["E402", "F401", "F403", "F811"]
"src/transformers/file_utils.py" = ["F401"]
"src/transformers/utils/dummy_*.py" = ["F401"]
[tool.ruff.lint.isort]
lines-after-imports = 2
known-first-party = ["transformers"]
[tool.ruff.format]
# Like Black, use double quotes for strings.
quote-style = "double"
# Like Black, indent with spaces, rather than tabs.
indent-style = "space"
# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
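# These lint/format settings are picked up automatically by Ruff when it is invoked from the
# repository root, e.g. with `ruff check .` or `ruff format .` (exact invocation may differ in CI).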
[tool.pytest.ini_options]
doctest_optionflags="NUMBER NORMALIZE_WHITESPACE ELLIPSIS"
doctest_glob="**/*.md"
markers = [
"flash_attn_test: marks tests related to flash attention (deselect with '-m \"not flash_attn_test\"')",
"bitsandbytes: select (or deselect with `not`) bitsandbytes integration tests",
]
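# Markers can be deselected from the command line, e.g. `pytest -m "not flash_attn_test"`
# (invocation shown for illustration; see the marker descriptions above).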
mavonic_private_repos/transformers/README_fr.md

<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Bibliothรจque Hugging Face Transformers" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Construction" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="Version GitHub" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Pacte des contributeurs" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<b>Franรงais</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>Apprentissage automatique de pointe pour JAX, PyTorch et TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers fournit des milliers de modรจles prรฉ-entraรฎnรฉs pour effectuer des tรขches sur diffรฉrentes modalitรฉs telles que le texte, la vision et l'audio.
Ces modรจles peuvent รชtre appliquรฉs ร :
* ๐ Texte, pour des tรขches telles que la classification de texte, l'extraction d'informations, la rรฉponse aux questions, le rรฉsumรฉ, la traduction et la gรฉnรฉration de texte, dans plus de 100 langues.
* ๐ผ๏ธ Images, pour des tรขches telles que la classification d'images, la dรฉtection d'objets et la segmentation.
* ๐ฃ๏ธ Audio, pour des tรขches telles que la reconnaissance vocale et la classification audio.
Les modรจles de transformer peuvent รฉgalement effectuer des tรขches sur **plusieurs modalitรฉs combinรฉes**, telles que la rรฉponse aux questions sur des tableaux, la reconnaissance optique de caractรจres, l'extraction d'informations ร partir de documents numรฉrisรฉs, la classification vidรฉo et la rรฉponse aux questions visuelles.
๐ค Transformers fournit des API pour tรฉlรฉcharger et utiliser rapidement ces modรจles prรฉ-entraรฎnรฉs sur un texte donnรฉ, les affiner sur vos propres ensembles de donnรฉes, puis les partager avec la communautรฉ sur notre [hub de modรจles](https://huggingface.co/models). En mรชme temps, chaque module Python dรฉfinissant une architecture est complรจtement indรฉpendant et peut รชtre modifiรฉ pour permettre des expรฉriences de recherche rapides.
๐ค Transformers est soutenu par les trois bibliothรจques d'apprentissage profond les plus populaires โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) et [TensorFlow](https://www.tensorflow.org/) โ avec une intรฉgration transparente entre eux. Il est facile de former vos modรจles avec l'un avant de les charger pour l'infรฉrence avec l'autre.
## Dรฉmos en ligne
Vous pouvez tester la plupart de nos modรจles directement sur leurs pages du [hub de modรจles](https://huggingface.co/models). Nous proposons รฉgalement [l'hรฉbergement privรฉ de modรจles, le versionning et une API d'infรฉrence](https://huggingface.co/pricing) pour des modรจles publics et privรฉs.
Voici quelques exemples :
En traitement du langage naturel :
- [Complรฉtion de mots masquรฉs avec BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Reconnaissance d'entitรฉs nommรฉes avec Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Gรฉnรฉration de texte avec GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [Infรฉrence de langage naturel avec RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Rรฉsumรฉ avec BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Rรฉponse aux questions avec DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Traduction avec T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
En vision par ordinateur :
- [Classification d'images avec ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Dรฉtection d'objets avec DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Segmentation sรฉmantique avec SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Segmentation panoptique avec MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [Estimation de profondeur avec DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
- [Classification vidรฉo avec VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Segmentation universelle avec OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
En audio :
- [Reconnaissance automatique de la parole avec Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Spotting de mots-clรฉs avec Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Classification audio avec Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
Dans les tรขches multimodales :
- [Rรฉponses aux questions sur table avec TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Rรฉponses aux questions visuelles avec ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Classification d'images sans รฉtiquette avec CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- [Rรฉponses aux questions sur les documents avec LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Classification vidรฉo sans รฉtiquette avec X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
## 100 projets utilisant Transformers
Transformers est plus qu'une boรฎte ร outils pour utiliser des modรจles prรฉ-entraรฎnรฉs : c'est une communautรฉ de projets construits autour de lui et du Hub Hugging Face. Nous voulons que Transformers permette aux dรฉveloppeurs, chercheurs, รฉtudiants, professeurs, ingรฉnieurs et ร quiconque d'imaginer et de rรฉaliser leurs projets de rรชve.
Afin de cรฉlรฉbrer les 100 000 รฉtoiles de transformers, nous avons dรฉcidรฉ de mettre en avant la communautรฉ et avons crรฉรฉ la page [awesome-transformers](./awesome-transformers.md) qui rรฉpertorie 100 projets incroyables construits autour de transformers.
Si vous possรฉdez ou utilisez un projet que vous pensez devoir figurer dans la liste, veuillez ouvrir une pull request pour l'ajouter !
## Si vous recherchez un support personnalisรฉ de la part de l'รฉquipe Hugging Face
<a target="_blank" href="https://huggingface.co/support">
<img alt="Programme d'accรฉlรฉration des experts HuggingFace" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Tour rapide
Pour utiliser immรฉdiatement un modรจle sur une entrรฉe donnรฉe (texte, image, audio,...), nous fournissons l'API `pipeline`. Les pipelines regroupent un modรจle prรฉ-entraรฎnรฉ avec la prรฉparation des donnรฉes qui a รฉtรฉ utilisรฉe lors de l'entraรฎnement de ce modรจle. Voici comment utiliser rapidement un pipeline pour classer des textes en positif ou nรฉgatif :
```python
>>> from transformers import pipeline
# Allouer un pipeline pour l'analyse de sentiment
>>> classifieur = pipeline('sentiment-analysis')
>>> classifieur("Nous sommes trรจs heureux d'introduire le pipeline dans le rรฉfรฉrentiel transformers.")
[{'label': 'POSITIF', 'score': 0.9996980428695679}]
```
La deuxiรจme ligne de code tรฉlรฉcharge et met en cache le modรจle prรฉ-entraรฎnรฉ utilisรฉ par le pipeline, tandis que la troisiรจme l'รฉvalue sur le texte donnรฉ. Ici, la rรฉponse est "positive" avec une confiance de 99,97%.
De nombreuses tâches ont un pipeline pré-entraîné prêt à l'emploi, en NLP, mais aussi en vision par ordinateur et en parole. Par exemple, nous pouvons facilement extraire les objets détectés dans une image :
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Tรฉlรฉcharger une image avec de jolis chats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> donnees_image = requests.get(url, stream=True).raw
>>> image = Image.open(donnees_image)
# Allouer un pipeline pour la dรฉtection d'objets
>>> detecteur_objets = pipeline('object-detection')
>>> detecteur_objets(image)
[{'score': 0.9982201457023621,
'label': 'tรฉlรฉcommande',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'tรฉlรฉcommande',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'canapรฉ',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'chat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'chat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Ici, nous obtenons une liste d'objets dรฉtectรฉs dans l'image, avec une boรฎte entourant l'objet et un score de confiance. Voici l'image originale ร gauche, avec les prรฉdictions affichรฉes ร droite :
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Vous pouvez en savoir plus sur les tรขches supportรฉes par l'API pipeline dans [ce tutoriel](https://huggingface.co/docs/transformers/task_summary).
En plus de `pipeline`, pour tรฉlรฉcharger et utiliser n'importe lequel des modรจles prรฉ-entraรฎnรฉs sur votre tรขche donnรฉe, il suffit de trois lignes de code. Voici la version PyTorch :
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Bonjour le monde !", return_tensors="pt")
>>> outputs = model(**inputs)
```
Et voici le code รฉquivalent pour TensorFlow :
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Bonjour le monde !", return_tensors="tf")
>>> outputs = model(**inputs)
```
Le tokenizer est responsable de toutes les รฉtapes de prรฉtraitement que le modรจle prรฉentraรฎnรฉ attend et peut รชtre appelรฉ directement sur une seule chaรฎne de caractรจres (comme dans les exemples ci-dessus) ou sur une liste. Il produira un dictionnaire que vous pouvez utiliser dans votre code ou simplement passer directement ร votre modรจle en utilisant l'opรฉrateur de dรฉballage **.
Le modรจle lui-mรชme est un module [`nn.Module` PyTorch](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) ou un modรจle [`tf.keras.Model` TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (selon votre backend) que vous pouvez utiliser comme d'habitude. [Ce tutoriel](https://huggingface.co/docs/transformers/training) explique comment intรฉgrer un tel modรจle dans une boucle d'entraรฎnement classique PyTorch ou TensorFlow, ou comment utiliser notre API `Trainer` pour affiner rapidement sur un nouvel ensemble de donnรฉes.
## Pourquoi devrais-je utiliser transformers ?
1. Des modรจles de pointe faciles ร utiliser :
- Hautes performances en comprรฉhension et gรฉnรฉration de langage naturel, en vision par ordinateur et en tรขches audio.
- Faible barriรจre ร l'entrรฉe pour les รฉducateurs et les praticiens.
- Peu d'abstractions visibles pour l'utilisateur avec seulement trois classes ร apprendre.
- Une API unifiรฉe pour utiliser tous nos modรจles prรฉentraรฎnรฉs.
1. Coรปts informatiques rรฉduits, empreinte carbone plus petite :
- Les chercheurs peuvent partager des modรจles entraรฎnรฉs au lieu de toujours les rรฉentraรฎner.
- Les praticiens peuvent rรฉduire le temps de calcul et les coรปts de production.
- Des dizaines d'architectures avec plus de 400 000 modรจles prรฉentraรฎnรฉs dans toutes les modalitรฉs.
1. Choisissez le bon framework pour chaque partie de la vie d'un modรจle :
- Entraรฎnez des modรจles de pointe en 3 lignes de code.
    - Transférez un seul modèle entre les frameworks TF2.0/PyTorch/JAX à volonté.
- Choisissez facilement le bon framework pour l'entraรฎnement, l'รฉvaluation et la production.
1. Personnalisez facilement un modรจle ou un exemple selon vos besoins :
- Nous fournissons des exemples pour chaque architecture afin de reproduire les rรฉsultats publiรฉs par ses auteurs originaux.
- Les dรฉtails internes du modรจle sont exposรฉs de maniรจre aussi cohรฉrente que possible.
- Les fichiers de modรจle peuvent รชtre utilisรฉs indรฉpendamment de la bibliothรจque pour des expรฉriences rapides.
## Pourquoi ne devrais-je pas utiliser transformers ?
- Cette bibliothèque n'est pas une boîte à outils modulaire de blocs de construction pour les réseaux neuronaux. Le code dans les fichiers de modèle n'est volontairement pas refactorisé avec des abstractions supplémentaires, afin que les chercheurs puissent itérer rapidement sur chacun des modèles sans plonger dans des abstractions/fichiers supplémentaires.
- L'API d'entraรฎnement n'est pas destinรฉe ร fonctionner avec n'importe quel modรจle, mais elle est optimisรฉe pour fonctionner avec les modรจles fournis par la bibliothรจque. Pour des boucles gรฉnรฉriques d'apprentissage automatique, vous devriez utiliser une autre bibliothรจque (รฉventuellement, [Accelerate](https://huggingface.co/docs/accelerate)).
- Bien que nous nous efforcions de prรฉsenter autant de cas d'utilisation que possible, les scripts de notre [dossier d'exemples](https://github.com/huggingface/transformers/tree/main/examples) ne sont que cela : des exemples. Il est prรฉvu qu'ils ne fonctionnent pas immรฉdiatement sur votre problรจme spรฉcifique et que vous devrez probablement modifier quelques lignes de code pour les adapter ร vos besoins.
## Installation
### Avec pip
Ce rรฉfรฉrentiel est testรฉ sur Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ et TensorFlow 2.6+.
Vous devriez installer ๐ค Transformers dans un [environnement virtuel](https://docs.python.org/3/library/venv.html). Si vous n'รชtes pas familier avec les environnements virtuels Python, consultez le [guide utilisateur](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
D'abord, crรฉez un environnement virtuel avec la version de Python que vous allez utiliser et activez-le.
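À titre d'illustration, voici des commandes indicatives (sous Linux/macOS ; le nom `.env` est arbitraire) :

```bash
# Créer un environnement virtuel, puis l'activer
python -m venv .env
source .env/bin/activate
```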
Ensuite, vous devrez installer au moins l'un de Flax, PyTorch ou TensorFlow.
Veuillez vous rรฉfรฉrer ร la page d'installation de [TensorFlow](https://www.tensorflow.org/install/), de [PyTorch](https://pytorch.org/get-started/locally/#start-locally) et/ou de [Flax](https://github.com/google/flax#quick-install) et [Jax](https://github.com/google/jax#installation) pour connaรฎtre la commande d'installation spรฉcifique ร votre plateforme.
Lorsqu'un de ces backends est installรฉ, ๐ค Transformers peut รชtre installรฉ avec pip comme suit :
```bash
pip install transformers
```
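Pour vérifier rapidement l'installation, on peut par exemple exécuter la commande suivante (commande indicative ; le modèle par défaut du pipeline sera téléchargé au premier appel) :

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('Nous sommes très heureux.'))"
```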
Si vous souhaitez jouer avec les exemples ou avez besoin de la derniรจre version du code et ne pouvez pas attendre une nouvelle version, vous devez [installer la bibliothรจque ร partir de la source](https://huggingface.co/docs/transformers/installation#installing-from-source).
### Avec conda
๐ค Transformers peut รชtre installรฉ avec conda comme suit :
```shell
conda install conda-forge::transformers
```
> **_NOTE:_** L'installation de `transformers` depuis le canal `huggingface` est obsolรจte.
Suivez les pages d'installation de Flax, PyTorch ou TensorFlow pour voir comment les installer avec conda.
> **_NOTE:_** Sur Windows, on peut vous demander d'activer le mode dรฉveloppeur pour bรฉnรฉficier de la mise en cache. Si ce n'est pas une option pour vous, veuillez nous le faire savoir dans [cette issue](https://github.com/huggingface/huggingface_hub/issues/1062).
## Architectures de modรจles
**[Tous les points de contrรดle](https://huggingface.co/models)** de modรจle fournis par ๐ค Transformers sont intรฉgrรฉs de maniรจre transparente depuis le [hub de modรจles](https://huggingface.co/models) huggingface.co, oรน ils sont tรฉlรฉchargรฉs directement par les [utilisateurs](https://huggingface.co/users) et les [organisations](https://huggingface.co/organizations).
Nombre actuel de points de contrรดle : ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers fournit actuellement les architectures suivantes: consultez [ici](https://huggingface.co/docs/transformers/model_summary) pour un rรฉsumรฉ global de chacune d'entre elles.
Pour vรฉrifier si chaque modรจle a une implรฉmentation en Flax, PyTorch ou TensorFlow, ou s'il a un tokenizer associรฉ pris en charge par la bibliothรจque ๐ค Tokenizers, consultez [ce tableau](https://huggingface.co/docs/transformers/index#supported-frameworks).
Ces implรฉmentations ont รฉtรฉ testรฉes sur plusieurs ensembles de donnรฉes (voir les scripts d'exemple) et devraient correspondre aux performances des implรฉmentations originales. Vous pouvez trouver plus de dรฉtails sur les performances dans la section Exemples de la [documentation](https://github.com/huggingface/transformers/tree/main/examples).
## En savoir plus
| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/docs/transformers/) | Documentation complรจte de l'API et tutoriels |
| [Rรฉsumรฉ des tรขches](https://huggingface.co/docs/transformers/task_summary) | Tรขches prises en charge par les ๐ค Transformers |
| [Tutoriel de prรฉtraitement](https://huggingface.co/docs/transformers/preprocessing) | Utilisation de la classe `Tokenizer` pour prรฉparer les donnรฉes pour les modรจles |
| [Entraรฎnement et ajustement fin](https://huggingface.co/docs/transformers/training) | Utilisation des modรจles fournis par les ๐ค Transformers dans une boucle d'entraรฎnement PyTorch/TensorFlow et de l'API `Trainer` |
| [Tour rapide : Scripts d'ajustement fin/d'utilisation](https://github.com/huggingface/transformers/tree/main/examples) | Scripts d'exemple pour ajuster finement les modรจles sur une large gamme de tรขches |
| [Partage et tรฉlรฉversement de modรจles](https://huggingface.co/docs/transformers/model_sharing) | Tรฉlรฉchargez et partagez vos modรจles ajustรฉs avec la communautรฉ |
## Citation
Nous disposons dรฉsormais d'un [article](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) que vous pouvez citer pour la bibliothรจque ๐ค Transformers :
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
mavonic_private_repos/transformers/setup.py

# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/main/setup.py
To create the package for pypi.
1. Create the release branch named: v<RELEASE>-release, for example v4.19-release. For a patch release checkout the
current release branch.
   If releasing on a special branch, copy the updated README.md on the main branch for the commit you will make
for the post-release and run `make fix-copies` on the main branch as well.
2. Run `make pre-release` (or `make pre-patch` for a patch release) and commit these changes with the message:
"Release: <VERSION>" and push.
3. Go back to the main branch and run `make post-release` then `make fix-copies`. Commit these changes with the
message "v<NEXT_VERSION>.dev.0" and push to main.
# If you were just cutting the branch in preparation for a release, you can stop here for now.
4. Wait for the tests on the release branch to be completed and be green (otherwise revert and fix bugs)
5. On the release branch, add a tag in git to mark the release: "git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi' "
Push the tag to git: git push --tags origin v<RELEASE>-release
6. Build both the sources and the wheel. Do not change anything in setup.py between
creating the wheel and the source distribution (obviously).
Run `make build-release`. This will build the release and do some sanity checks for you. If this ends with an error
message, you need to fix things before going further.
You should now have a /dist directory with both .whl and .tar.gz source versions.
7. Check that everything looks correct by uploading the package to the pypi test server:
twine upload dist/* -r testpypi
   (PyPI suggests using twine, as other methods upload files via plaintext.)
You may have to specify the repository url, use the following command then:
twine upload dist/* -r testpypi --repository-url=https://test.pypi.org/legacy/
Check that you can install it in a virtualenv by running:
pip install -i https://testpypi.python.org/pypi transformers
Check you can run the following commands:
python -c "from transformers import pipeline; classifier = pipeline('text-classification'); print(classifier('What a nice release'))"
python -c "from transformers import *"
python utils/check_build.py --check_lib
If making a patch release, double check the bug you are patching is indeed resolved.
8. Upload the final version to actual pypi:
twine upload dist/* -r pypi
9. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory.
"""
import os
import re
import shutil
from pathlib import Path
from setuptools import Command, find_packages, setup
# Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466
stale_egg_info = Path(__file__).parent / "transformers.egg-info"
if stale_egg_info.exists():
print(
(
"Warning: {} exists.\n\n"
"If you recently updated transformers to 3.0 or later, this is expected,\n"
"but it may prevent transformers from installing in editable mode.\n\n"
"This directory is automatically generated by Python's packaging tools.\n"
"I will remove it now.\n\n"
"See https://github.com/pypa/pip/issues/5466 for details.\n"
).format(stale_egg_info)
)
shutil.rmtree(stale_egg_info)
# IMPORTANT:
# 1. all dependencies should be listed here with their version requirements if any
# 2. once modified, run: `make deps_table_update` to update src/transformers/dependency_versions_table.py
_deps = [
"Pillow>=10.0.1,<=15.0",
"accelerate>=0.21.0",
"av==9.2.0", # Latest version of PyAV (10.0.0) has issues with audio stream.
"beautifulsoup4",
"codecarbon==1.2.0",
"cookiecutter==1.7.3",
"dataclasses",
"datasets!=2.5.0",
"decord==0.6.0",
"deepspeed>=0.9.3",
"diffusers",
"dill<0.3.5",
"evaluate>=0.2.0",
"faiss-cpu",
"fastapi",
"filelock",
"flax>=0.4.1,<=0.7.0",
"fsspec<2023.10.0",
"ftfy",
"fugashi>=1.0",
"GitPython<3.1.19",
"huggingface-hub>=0.19.3,<1.0",
"importlib_metadata",
"ipadic>=1.0.0,<2.0",
"isort>=5.5.4",
"jax>=0.4.1,<=0.4.13",
"jaxlib>=0.4.1,<=0.4.13",
"jieba",
"kenlm",
# Keras pin - this is to make sure Keras 3 doesn't destroy us. Remove or change when we have proper support.
"keras>2.9,<2.16",
"keras-nlp>=0.3.1",
"librosa",
"nltk",
"natten>=0.14.6,<0.15.0",
"numpy>=1.17",
"onnxconverter-common",
"onnxruntime-tools>=1.4.2",
"onnxruntime>=1.4.0",
"opencv-python",
"optuna",
"optax>=0.0.8,<=0.1.4",
"packaging>=20.0",
"parameterized",
"phonemizer",
"protobuf",
"psutil",
"pyyaml>=5.1",
"pydantic",
"pytest>=7.2.0,<8.0.0",
"pytest-timeout",
"pytest-xdist",
"python>=3.8.0",
"ray[tune]>=2.7.0",
"regex!=2019.12.17",
"requests",
"rhoknp>=1.1.0,<1.3.1",
"rjieba",
"rouge-score!=0.0.7,!=0.0.8,!=0.1,!=0.1.1",
"ruff==0.1.5",
"sacrebleu>=1.4.12,<2.0.0",
"sacremoses",
"safetensors>=0.4.1",
"sagemaker>=2.31.0",
"scikit-learn",
"scipy<1.13.0", # SciPy >= 1.13.0 is not supported with the current jax pin (`jax>=0.4.1,<=0.4.13`)
"sentencepiece>=0.1.91,!=0.1.92",
"sigopt",
"starlette",
"sudachipy>=0.6.6",
"sudachidict_core>=20220729",
"tensorboard",
# TensorFlow pin. When changing this value, update examples/tensorflow/_tests_requirements.txt accordingly
"tensorflow-cpu>2.9,<2.16",
"tensorflow>2.9,<2.16",
"tensorflow-text<2.16",
"tensorflow-probability<2.16",
"tf2onnx",
"timeout-decorator",
"timm",
"tokenizers>=0.19,<0.20",
"torch",
"torchaudio",
"torchvision",
"pyctcdecode>=0.4.0",
"tqdm>=4.27",
"unidic>=1.0.2",
"unidic_lite>=1.0.7",
"urllib3<2.0.0",
"uvicorn",
"pytest-rich",
]
# this is a lookup table with items like:
#
# tokenizers: "tokenizers==0.9.4"
# packaging: "packaging"
#
# some of the values are versioned whereas others aren't.
deps = {b: a for a, b in (re.findall(r"^(([^!=<>~ ]+)(?:[!=<>~ ].*)?$)", x)[0] for x in _deps)}
# since we save this data in src/transformers/dependency_versions_table.py it can be easily accessed from
# anywhere. If you need to quickly access the data from this table in a shell, you can do so easily with:
#
# python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([ deps[x] for x in sys.argv[1:]]))' tokenizers datasets
#
# Just pass the desired package names to that script as it's shown with 2 packages above.
#
# If transformers is not yet installed and the work is done from the cloned repo remember to add `PYTHONPATH=src` to the script above
#
# You can then feed this for example to `pip`:
#
# pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([deps[x] for x in sys.argv[1:]]))' tokenizers datasets)
#
def deps_list(*pkgs):
return [deps[pkg] for pkg in pkgs]
class DepsTableUpdateCommand(Command):
"""
A custom distutils command that updates the dependency table.
usage: python setup.py deps_table_update
"""
description = "build runtime dependency table"
user_options = [
# format: (long option, short option, description).
("dep-table-update", None, "updates src/transformers/dependency_versions_table.py"),
]
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
content = [
"# THIS FILE HAS BEEN AUTOGENERATED. To update:",
"# 1. modify the `_deps` dict in setup.py",
"# 2. run `make deps_table_update``",
"deps = {",
entries,
"}",
"",
]
target = "src/transformers/dependency_versions_table.py"
print(f"updating {target}")
with open(target, "w", encoding="utf-8", newline="\n") as f:
f.write("\n".join(content))
extras = {}
extras["ja"] = deps_list("fugashi", "ipadic", "unidic_lite", "unidic", "sudachipy", "sudachidict_core", "rhoknp")
extras["sklearn"] = deps_list("scikit-learn")
extras["tf"] = deps_list("tensorflow", "onnxconverter-common", "tf2onnx", "tensorflow-text", "keras-nlp")
extras["tf-cpu"] = deps_list("keras", "tensorflow-cpu", "onnxconverter-common", "tf2onnx", "tensorflow-text", "keras-nlp", "tensorflow-probability")
extras["torch"] = deps_list("torch", "accelerate")
extras["accelerate"] = deps_list("accelerate")
if os.name == "nt": # windows
extras["retrieval"] = deps_list("datasets") # faiss is not supported on windows
extras["flax"] = [] # jax is not supported on windows
else:
extras["retrieval"] = deps_list("faiss-cpu", "datasets")
extras["flax"] = deps_list("jax", "jaxlib", "flax", "optax", "scipy")
extras["tokenizers"] = deps_list("tokenizers")
extras["ftfy"] = deps_list("ftfy")
extras["onnxruntime"] = deps_list("onnxruntime", "onnxruntime-tools")
extras["onnx"] = deps_list("onnxconverter-common", "tf2onnx") + extras["onnxruntime"]
extras["modelcreation"] = deps_list("cookiecutter")
extras["sagemaker"] = deps_list("sagemaker")
extras["deepspeed"] = deps_list("deepspeed") + extras["accelerate"]
extras["optuna"] = deps_list("optuna")
extras["ray"] = deps_list("ray[tune]")
extras["sigopt"] = deps_list("sigopt")
extras["integrations"] = extras["optuna"] + extras["ray"] + extras["sigopt"]
extras["serving"] = deps_list("pydantic", "uvicorn", "fastapi", "starlette")
extras["audio"] = deps_list("librosa", "pyctcdecode", "phonemizer", "kenlm")
# `pip install ".[speech]"` is deprecated and `pip install ".[torch-speech]"` should be used instead
extras["speech"] = deps_list("torchaudio") + extras["audio"]
extras["torch-speech"] = deps_list("torchaudio") + extras["audio"]
extras["tf-speech"] = extras["audio"]
extras["flax-speech"] = extras["audio"]
extras["vision"] = deps_list("Pillow")
extras["timm"] = deps_list("timm")
extras["torch-vision"] = deps_list("torchvision") + extras["vision"]
extras["natten"] = deps_list("natten")
extras["codecarbon"] = deps_list("codecarbon")
extras["video"] = deps_list("decord", "av")
extras["sentencepiece"] = deps_list("sentencepiece", "protobuf")
extras["testing"] = (
deps_list(
"pytest",
"pytest-rich",
"pytest-xdist",
"timeout-decorator",
"parameterized",
"psutil",
"datasets",
"dill",
"evaluate",
"pytest-timeout",
"ruff",
"sacrebleu",
"rouge-score",
"nltk",
"GitPython",
"sacremoses",
"rjieba",
"beautifulsoup4",
"tensorboard",
"pydantic",
"sentencepiece",
)
+ extras["retrieval"]
+ extras["modelcreation"]
)
extras["deepspeed-testing"] = extras["deepspeed"] + extras["testing"] + extras["optuna"] + extras["sentencepiece"]
extras["quality"] = deps_list("datasets", "isort", "ruff", "GitPython", "urllib3")
extras["all"] = (
extras["tf"]
+ extras["torch"]
+ extras["flax"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["torch-speech"]
+ extras["vision"]
+ extras["integrations"]
+ extras["timm"]
+ extras["torch-vision"]
+ extras["codecarbon"]
+ extras["accelerate"]
+ extras["video"]
)
extras["dev-torch"] = (
extras["testing"]
+ extras["torch"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["torch-speech"]
+ extras["vision"]
+ extras["integrations"]
+ extras["timm"]
+ extras["torch-vision"]
+ extras["codecarbon"]
+ extras["quality"]
+ extras["ja"]
+ extras["sklearn"]
+ extras["modelcreation"]
+ extras["onnxruntime"]
)
extras["dev-tensorflow"] = (
extras["testing"]
+ extras["tf"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["vision"]
+ extras["quality"]
+ extras["sklearn"]
+ extras["modelcreation"]
+ extras["onnx"]
+ extras["tf-speech"]
)
extras["dev"] = (
extras["all"]
+ extras["testing"]
+ extras["quality"]
+ extras["ja"]
+ extras["sklearn"]
+ extras["modelcreation"]
)
extras["torchhub"] = deps_list(
"filelock",
"huggingface-hub",
"importlib_metadata",
"numpy",
"packaging",
"protobuf",
"regex",
"requests",
"sentencepiece",
"torch",
"tokenizers",
"tqdm",
)
extras["agents"] = deps_list(
"diffusers", "accelerate", "datasets", "torch", "sentencepiece", "opencv-python", "Pillow"
)
# when modifying the following list, make sure to update src/transformers/dependency_versions_check.py
install_requires = [
deps["filelock"], # filesystem locks, e.g., to prevent parallel downloads
deps["huggingface-hub"],
deps["numpy"],
deps["packaging"], # utilities from PyPA to e.g., compare versions
deps["pyyaml"], # used for the model cards metadata
deps["regex"], # for OpenAI GPT
deps["requests"], # for downloading models over HTTPS
deps["tokenizers"],
deps["safetensors"],
deps["tqdm"], # progress bars in model download and training scripts
]
setup(
name="transformers",
version="4.41.0.dev0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)
author="The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)",
author_email="[email protected]",
description="State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
keywords="NLP vision speech deep learning transformer pytorch tensorflow jax BERT GPT-2 Wav2Vec2 ViT",
license="Apache 2.0 License",
url="https://github.com/huggingface/transformers",
package_dir={"": "src"},
packages=find_packages("src"),
include_package_data=True,
package_data={"": ["**/*.cu", "**/*.cpp", "**/*.cuh", "**/*.h", "**/*.pyx"]},
zip_safe=False,
extras_require=extras,
entry_points={"console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"]},
python_requires=">=3.8.0",
install_requires=list(install_requires),
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
cmdclass={"deps_table_update": DepsTableUpdateCommand},
)
extras["tests_torch"] = deps_list()
extras["tests_tf"] = deps_list()
extras["tests_flax"] = deps_list()
extras["tests_torch_and_tf"] = deps_list()
extras["tests_torch_and_flax"] = deps_list()
extras["tests_hub"] = deps_list()
extras["tests_pipelines_torch"] = deps_list()
extras["tests_pipelines_tf"] = deps_list()
extras["tests_onnx"] = deps_list()
extras["tests_examples_torch"] = deps_list()
extras["tests_examples_tf"] = deps_list()
extras["tests_custom_tokenizers"] = deps_list()
extras["tests_exotic_models"] = deps_list()
extras["consistency"] = deps_list() | 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README.md | <!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<b>English</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">ะ ortuguรชs</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied on:
* ๐ Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation, in over 100 languages.
* ๐ผ๏ธ Images, for tasks like image classification, object detection, and segmentation.
* ๐ฃ๏ธ Audio, for tasks like speech recognition and audio classification.
Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
๐ค Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
๐ค Transformers is backed by the three most popular deep learning libraries โ [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) โ with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
## Online demos
You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models.
Here are a few examples:
In Natural Language Processing:
- [Masked word completion with BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Text generation with Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [Natural Language Inference with RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question answering with DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
In Computer Vision:
- [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Semantic Segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Panoptic Segmentation with Mask2Former](https://huggingface.co/facebook/mask2former-swin-large-coco-panoptic)
- [Depth Estimation with Depth Anything](https://huggingface.co/docs/transformers/main/model_doc/depth_anything)
- [Video Classification with VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Universal Segmentation with OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
In Audio:
- [Automatic Speech Recognition with Whisper](https://huggingface.co/openai/whisper-large-v3)
- [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Audio Classification with Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
In Multimodal tasks:
- [Table Question Answering with TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Visual Question Answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Image captioning with LLaVa](https://huggingface.co/llava-hf/llava-1.5-7b-hf)
- [Zero-shot Image Classification with SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384)
- [Document Question Answering with LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Zero-shot Video Classification with X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
- [Zero-shot Object Detection with OWLv2](https://huggingface.co/docs/transformers/en/model_doc/owlv2)
- [Zero-shot Image Segmentation with CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)
- [Automatic Mask Generation with SAM](https://huggingface.co/docs/transformers/model_doc/sam)
## 100 projects using Transformers
Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the
Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone
else to build their dream projects.
In order to celebrate the 100,000 stars of transformers, we have decided to put the spotlight on the
community, and we have created the [awesome-transformers](./awesome-transformers.md) page which lists 100
incredible projects built in the vicinity of transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Quick tour
To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here, the answer is "positive" with a confidence of 99.97%.
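By default, `pipeline('sentiment-analysis')` picks a task-appropriate checkpoint for you, but you can also point it at any specific model from the [model hub](https://huggingface.co/models). For instance, pinning the example above to an explicit checkpoint (the model name below is just one possible choice, and happens to be the current default for this task):

```python
>>> classifier = pipeline('sentiment-analysis', model='distilbert/distilbert-base-uncased-finetuned-sst-2-english')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```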
Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Here, we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary).
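You can also pin a pipeline to a specific checkpoint from the Hub instead of relying on the task default. As a small illustration (the checkpoint below is simply the default English sentiment model, and the score shown is only indicative; any compatible model from the Hub works the same way):

```python
>>> from transformers import pipeline

# Explicitly pick a checkpoint from the Hub for the text-classification task
>>> classifier = pipeline("text-classification", model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("I love the new pipeline API!")
[{'label': 'POSITIVE', 'score': 0.99...}]
```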
In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
And here is the equivalent code for TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
The tokenizer is responsible for all the preprocessing the pretrained model expects and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator.
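For example, here is a small sketch of batching several sentences at once (the exact keys in the output dictionary depend on the model; the ones shown are for BERT):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Tokenize several sentences at once, padding them to the same length
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask']
# The whole dictionary can then be passed to the model with: model(**batch)
```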
The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
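As a rough sketch of the latter (not a full recipe: the dataset, subset sizes and hyperparameters below are only placeholders, and it assumes the 🤗 Datasets library is installed):

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

# Placeholder dataset: any text-classification dataset with "text" and "label" columns works similarly
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="test_trainer", num_train_epochs=1),
    # Small subsets so the example finishes quickly; use the full splits for real fine-tuning
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```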
## Why should I use transformers?
1. Easy-to-use state-of-the-art models:
- High performance on natural language understanding & generation, computer vision, and audio tasks.
- Low barrier to entry for educators and practitioners.
- Few user-facing abstractions with just three classes to learn.
- A unified API for using all our pretrained models.
1. Lower compute costs, smaller carbon footprint:
- Researchers can share trained models instead of always retraining.
- Practitioners can reduce compute time and production costs.
- Dozens of architectures with over 400,000 pretrained models across all modalities.
1. Choose the right framework for every part of a model's lifetime:
- Train state-of-the-art models in 3 lines of code.
- Move a single model between TF2.0/PyTorch/JAX frameworks at will.
- Seamlessly pick the right framework for training, evaluation, and production.
1. Easily customize a model or an example to your needs:
- We provide examples for each architecture to reproduce the results published by its original authors.
- Model internals are exposed as consistently as possible.
- Model files can be used independently of the library for quick experiments.
## Why shouldn't I use transformers?
- This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
- The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)).
- While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the-box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs.
## Installation
### With pip
This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
You should install ๐ค Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
First, create a virtual environment with the version of Python you're going to use and activate it.
Then, you will need to install at least one of Flax, PyTorch, or TensorFlow.
Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform.
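For example, a typical setup on Linux or macOS might look like this (shown with a plain CPU install of PyTorch purely as an illustration; use the command for your platform from the pages linked above):

```bash
python -m venv .env
source .env/bin/activate
pip install torch
```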
When one of those backends has been installed, ๐ค Transformers can be installed using pip as follows:
```bash
pip install transformers
```
If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
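For instance, installing the current `main` branch directly with pip looks like this (see the linked page for editable installs and other options):

```bash
pip install git+https://github.com/huggingface/transformers
```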
### With conda
๐ค Transformers can be installed using conda as follows:
```shell script
conda install conda-forge::transformers
```
> **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
> **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).
## Model architectures
**[All the model checkpoints](https://huggingface.co/models)** provided by ๐ค Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co/models), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers currently provides the following architectures: see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each them.
To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the ๐ค Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples).
## Learn more
| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
| [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by ๐ค Transformers |
| [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
| [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by ๐ค Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
| [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
| [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
## Citation
We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the ๐ค Transformers library:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos | mavonic_private_repos/transformers/README_de.md | <!---
Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">็ฎไฝไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">็น้ซไธญๆ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">ํ๊ตญ์ด</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Espaรฑol</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">ๆฅๆฌ่ช</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">เคนเคฟเคจเฅเคฆเฅ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ru.md">ะ ัััะบะธะน</a> |
        <a href="https://github.com/huggingface/transformers/blob/main/README_pt-br.md">Português</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_te.md">เฐคเฑเฐฒเฑเฐเฑ</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_fr.md">Franรงais</a> |
<b>Deutsch</b> |
<a href="https://github.com/huggingface/transformers/blob/main/README_vi.md">Tiแบฟng Viแปt</a> |
</p>
</h4>
<h3 align="center">
<p>Maschinelles Lernen auf dem neuesten Stand der Technik fรผr JAX, PyTorch und TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
๐ค Transformers bietet Tausende von vortrainierten Modellen, um Aufgaben in verschiedenen Modalitรคten wie Text, Bild und Audio durchzufรผhren.
Diese Modelle kรถnnen angewendet werden, auf:
* ๐ Text - fรผr Aufgaben wie Textklassifizierung, Informationsextraktion, Question Answering, automatische Textzusammenfassung, maschinelle รbersetzung und Textgenerierung in รผber 100 Sprachen.
* ๐ผ๏ธ Bilder - fรผr Aufgaben wie Bildklassifizierung, Objekterkennung und Segmentierung.
* ๐ฃ๏ธ Audio - fรผr Aufgaben wie Spracherkennung und Audioklassifizierung.
Transformer-Modelle kรถnnen auch Aufgaben fรผr **mehrere Modalitรคten in Kombination** durchfรผhren, z. B. tabellenbasiertes Question Answering, optische Zeichenerkennung, Informationsextraktion aus gescannten Dokumenten, Videoklassifizierung und visuelles Question Answering.
๐ค Transformers bietet APIs, um diese vortrainierten Modelle schnell herunterzuladen und fรผr einen gegebenen Text zu verwenden, sie auf Ihren eigenen Datensรคtzen zu feintunen und dann mit der Community in unserem [Model Hub](https://huggingface.co/models) zu teilen. Gleichzeitig ist jedes Python-Modul, das eine Architektur definiert, komplett eigenstรคndig und kann modifiziert werden, um schnelle Forschungsexperimente zu ermรถglichen.
๐ค Transformers unterstรผtzt die nahtlose Integration von drei der beliebtesten Deep-Learning-Bibliotheken: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) und [TensorFlow](https://www.tensorflow.org/). Trainieren Sie Ihr Modell in einem Framework und laden Sie es zur Inferenz unkompliziert mit einem anderen.
## Online-Demos
Sie kรถnnen die meisten unserer Modelle direkt auf ihren Seiten im [Model Hub](https://huggingface.co/models) testen. Wir bieten auch [privates Modell-Hosting, Versionierung, & eine Inferenz-API](https://huggingface.co/pricing) fรผr รถffentliche und private Modelle an.
Hier sind einige Beispiele:
In der Computerlinguistik:
- [Maskierte Wortvervollstรคndigung mit BERT](https://huggingface.co/google-bert/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Eigennamenerkennung mit Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Textgenerierung mit GPT-2](https://huggingface.co/openai-community/gpt2?text=A+long+time+ago%2C+)
- [Natural Language Inference mit RoBERTa](https://huggingface.co/FacebookAI/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Automatische Textzusammenfassung mit BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question Answering mit DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Maschinelle รbersetzung mit T5](https://huggingface.co/google-t5/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
In der Computer Vision:
- [Bildklassifizierung mit ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Objekterkennung mit DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Semantische Segmentierung mit SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Panoptische Segmentierung mit MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [Depth Estimation mit DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
- [Videoklassifizierung mit VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Universelle Segmentierung mit OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
Im Audio-Bereich:
- [Automatische Spracherkennung mit Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Keyword Spotting mit Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Audioklassifizierung mit Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
In multimodalen Aufgaben:
- [Tabellenbasiertes Question Answering mit TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Visuelles Question Answering mit ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Zero-Shot-Bildklassifizierung mit CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- [Dokumentenbasiertes Question Answering mit LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Zero-Shot-Videoklassifizierung mit X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
## 100 Projekte, die ๐ค Transformers verwenden
๐ค Transformers ist mehr als nur ein Toolkit zur Verwendung von vortrainierten Modellen: Es ist eine Gemeinschaft von Projekten, die darum herum und um den Hugging Face Hub aufgebaut sind. Wir mรถchten, dass ๐ค Transformers es Entwicklern, Forschern, Studenten, Professoren, Ingenieuren und jedem anderen ermรถglicht, ihre Traumprojekte zu realisieren.
Um die 100.000 Sterne von ๐ค Transformers zu feiern, haben wir beschlossen, die Gemeinschaft in den Mittelpunkt zu stellen und die Seite [awesome-transformers](./awesome-transformers.md) erstellt, die 100 unglaubliche Projekte auflistet, die zusammen mit ๐ค Transformers realisiert wurden.
Wenn Sie ein Projekt besitzen oder nutzen, von dem Sie glauben, dass es Teil der Liste sein sollte, รถffnen Sie bitte einen PR, um es hinzuzufรผgen!
## Wenn Sie individuelle Unterstรผtzung vom Hugging Face-Team mรถchten
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Schnelleinstieg
Um sofort ein Modell mit einer bestimmten Eingabe (Text, Bild, Audio ...) zu verwenden, bieten wir die `pipeline`-API an. Pipelines kombinieren ein vortrainiertes Modell mit der jeweiligen Vorverarbeitung, die wรคhrend dessen Trainings verwendet wurde. Hier sehen Sie, wie man schnell eine Pipeline verwenden kann, um positive und negative Texte zu klassifizieren:
```python
>>> from transformers import pipeline
# Zuweisung einer Pipeline fรผr die Sentiment-Analyse
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
Die zweite Codezeile lรคdt und cacht das vortrainierte Modell, das von der Pipeline verwendet wird, wรคhrend die dritte es an dem gegebenen Text evaluiert. Hier ist die Antwort "positiv" mit einer Konfidenz von 99,97 %.
Viele Aufgaben, sowohl in der Computerlinguistik als auch in der Computer Vision und Sprachverarbeitung, haben eine vortrainierte `pipeline`, die sofort einsatzbereit ist. Z. B. kรถnnen wir leicht erkannte Objekte in einem Bild extrahieren:
``` python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download eines Bildes mit sรผรen Katzen
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Zuweisung einer Pipeline fรผr die Objekterkennung
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Hier erhalten wir eine Liste von Objekten, die im Bild erkannt wurden, mit einer Markierung, die das Objekt eingrenzt, und einem zugehรถrigen Konfidenzwert. Folgend ist das Originalbild links und die Vorhersagen rechts dargestellt:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
Sie kรถnnen mehr รผber die von der `pipeline`-API unterstรผtzten Aufgaben in [diesem Tutorial](https://huggingface.co/docs/transformers/task_summary) erfahren.
Zusรคtzlich zur `pipeline` benรถtigt es nur drei Zeilen Code, um eines der vortrainierten Modelle fรผr Ihre Aufgabe herunterzuladen und zu verwenden. Hier ist der Code fรผr die PyTorch-Version:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
Und hier ist der entsprechende Code fรผr TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
Der Tokenizer ist fรผr die gesamte Vorverarbeitung, die das vortrainierte Modell benรถtigt, verantwortlich und kann direkt auf einem einzelnen String (wie in den obigen Beispielen) oder einer Liste ausgefรผhrt werden. Er gibt ein Dictionary aus, das Sie im darauffolgenden Code verwenden oder einfach direkt Ihrem Modell รผbergeben kรถnnen, indem Sie den ** Operator zum Entpacken von Argumenten einsetzen.
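Zum Beispiel kann der Tokenizer auch mehrere Sätze auf einmal verarbeiten und gepaddete Tensoren zurückgeben, die sich direkt per `**` an das Modell übergeben lassen. Hier eine kleine Skizze (die gezeigten Schlüssel gelten für BERT; bei anderen Modellen können sie abweichen):

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Mehrere Sätze gleichzeitig tokenisieren und auf gleiche Länge auffüllen
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )
>>> list(batch.keys())
['input_ids', 'token_type_ids', 'attention_mask']
```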
Das Modell selbst ist ein regulรคres [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) oder ein [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (abhรคngig von Ihrem Backend), das Sie wie gewohnt verwenden kรถnnen. [Dieses Tutorial](https://huggingface.co/docs/transformers/training) erklรคrt, wie man ein solches Modell in eine klassische PyTorch- oder TensorFlow-Trainingsschleife integrieren kann oder wie man unsere `Trainer`-API verwendet, um es schnell auf einem neuen Datensatz zu feintunen.
## Warum sollten Sie ๐ค Transformers verwenden?
1. Benutzerfreundliche Modelle auf dem neuesten Stand der Technik:
- Hohe Leistung bei Aufgaben zu Natural Language Understanding & Generation, Computer Vision und Audio.
- Niedrige Einstiegshรผrde fรผr Bildungskrรคfte und Praktiker.
- Wenige benutzerseitige Abstraktionen mit nur drei zu lernenden Klassen.
- Eine einheitliche API fรผr die Verwendung aller unserer vortrainierten Modelle.
1. Geringere Rechenkosten, kleinerer CO<sub>2</sub>-Fuรabdruck:
- Forscher kรถnnen trainierte Modelle teilen, anstatt sie immer wieder neu zu trainieren.
- Praktiker kรถnnen die Rechenzeit und Produktionskosten reduzieren.
- Dutzende Architekturen mit รผber 400.000 vortrainierten Modellen รผber alle Modalitรคten hinweg.
1. Wรคhlen Sie das richtige Framework fรผr jeden Lebensabschnitt eines Modells:
- Trainieren Sie Modelle auf neustem Stand der Technik in nur drei Codezeilen.
- Verwenden Sie ein einzelnes Modell nach Belieben mit TF2.0-/PyTorch-/JAX-Frameworks.
- Wรคhlen Sie nahtlos das richtige Framework fรผr Training, Evaluation und Produktiveinsatz.
1. Passen Sie ein Modell oder Beispiel leicht an Ihre Bedรผrfnisse an:
- Wir bieten Beispiele fรผr jede Architektur an, um die von ihren ursprรผnglichen Autoren verรถffentlichten Ergebnisse zu reproduzieren.
- Modellinterna sind so einheitlich wie mรถglich verfรผgbar gemacht.
- Modelldateien kรถnnen unabhรคngig von der Bibliothek fรผr schnelle Experimente verwendet werden.
## Warum sollten Sie ๐ค Transformers nicht verwenden?
- Diese Bibliothek ist kein modularer Werkzeugkasten mit Bausteinen fรผr neuronale Netze. Der Code in den Modelldateien ist absichtlich nicht mit zusรคtzlichen Abstraktionen refaktorisiert, sodass Forscher schnell mit jedem der Modelle iterieren kรถnnen, ohne sich in zusรคtzliche Abstraktionen/Dateien vertiefen zu mรผssen.
- Die Trainings-API ist nicht dafรผr gedacht, mit beliebigen Modellen zu funktionieren, sondern ist fรผr die Verwendung mit den von der Bibliothek bereitgestellten Modellen optimiert. Fรผr generische Trainingsschleifen von maschinellem Lernen sollten Sie eine andere Bibliothek verwenden (mรถglicherweise [Accelerate](https://huggingface.co/docs/accelerate)).
- Auch wenn wir bestrebt sind, so viele Anwendungsfรคlle wie mรถglich zu veranschaulichen, sind die Beispielskripte in unserem [`examples`](./examples) Ordner genau das: Beispiele. Es ist davon auszugehen, dass sie nicht sofort auf Ihr spezielles Problem anwendbar sind und einige Codezeilen geรคndert werden mรผssen, um sie fรผr Ihre Bedรผrfnisse anzupassen.
## Installation
### Mit pip
Dieses Repository wurde mit Python 3.8+, Flax 0.4.1+, PyTorch 1.11+ und TensorFlow 2.6+ getestet.
Sie sollten ๐ค Transformers in einer [virtuellen Umgebung](https://docs.python.org/3/library/venv.html) installieren. Wenn Sie mit virtuellen Python-Umgebungen nicht vertraut sind, schauen Sie sich den [Benutzerleitfaden](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) an.
Erstellen und aktivieren Sie zuerst eine virtuelle Umgebung mit der Python-Version, die Sie verwenden mรถchten.
Dann mรผssen Sie entweder Flax, PyTorch oder TensorFlow installieren. Bitte beziehe dich entsprechend auf die jeweiligen Installationsanleitungen fรผr [TensorFlow](https://www.tensorflow.org/install/), [PyTorch](https://pytorch.org/get-started/locally/#start-locally), und/oder [Flax](https://github.com/google/flax#quick-install) und [Jax](https://github.com/google/jax#installation) fรผr den spezifischen Installationsbefehl fรผr Ihre Plattform.
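Ein typisches Setup unter Linux oder macOS könnte zum Beispiel so aussehen (hier nur zur Veranschaulichung mit der CPU-Variante von PyTorch; verwenden Sie den für Ihre Plattform passenden Befehl von den oben verlinkten Seiten):

```bash
python -m venv .env
source .env/bin/activate
pip install torch
```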
Wenn eines dieser Backends installiert ist, kann ๐ค Transformers wie folgt mit pip installiert werden:
```bash
pip install transformers
```
Wenn Sie mit den Beispielen experimentieren mรถchten oder die neueste Version des Codes benรถtigen und nicht auf eine neue Verรถffentlichung warten kรถnnen, mรผssen Sie [die Bibliothek von der Quelle installieren](https://huggingface.co/docs/transformers/installation#installing-from-source).
### Mit conda
๐ค Transformers kann wie folgt mit conda installiert werden:
```shell script
conda install conda-forge::transformers
```
> **_HINWEIS:_** Die Installation von `transformers` aus dem `huggingface`-Kanal ist veraltet.
Folgen Sie den Installationsanleitungen von Flax, PyTorch oder TensorFlow, um zu sehen, wie sie mit conda installiert werden kรถnnen.
> **_HINWEIS:_** Auf Windows werden Sie mรถglicherweise aufgefordert, den Entwicklermodus zu aktivieren, um von Caching zu profitieren. Wenn das fรผr Sie keine Option ist, lassen Sie es uns bitte in [diesem Issue](https://github.com/huggingface/huggingface_hub/issues/1062) wissen.
## Modellarchitekturen
**[Alle Modell-Checkpoints](https://huggingface.co/models)**, die von ๐ค Transformers bereitgestellt werden, sind nahtlos aus dem huggingface.co [Model Hub](https://huggingface.co/models) integriert, wo sie direkt von [Benutzern](https://huggingface.co/users) und [Organisationen](https://huggingface.co/organizations) hochgeladen werden.
Aktuelle Anzahl der Checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen)
๐ค Transformers bietet derzeit die folgenden Architekturen an: siehe [hier](https://huggingface.co/docs/transformers/model_summary) fรผr eine jeweilige รbersicht.
Um zu รผberprรผfen, ob jedes Modell eine Implementierung in Flax, PyTorch oder TensorFlow hat oder รผber einen zugehรถrigen Tokenizer verfรผgt, der von der ๐ค Tokenizers-Bibliothek unterstรผtzt wird, schauen Sie auf [diese Tabelle](https://huggingface.co/docs/transformers/index#supported-frameworks).
Diese Implementierungen wurden mit mehreren Datensรคtzen getestet (siehe Beispielskripte) und sollten den Leistungen der ursprรผnglichen Implementierungen entsprechen. Weitere Details zur Leistung finden Sie im Abschnitt der Beispiele in der [Dokumentation](https://github.com/huggingface/transformers/tree/main/examples).
## Mehr erfahren
| Abschnitt | Beschreibung |
|-|-|
| [Dokumentation](https://huggingface.co/docs/transformers/) | Vollstรคndige API-Dokumentation und Tutorials |
| [Zusammenfassung der Aufgaben](https://huggingface.co/docs/transformers/task_summary) | Von ๐ค Transformers unterstรผtzte Aufgaben |
| [Vorverarbeitungs-Tutorial](https://huggingface.co/docs/transformers/preprocessing) | Verwendung der `Tokenizer`-Klasse zur Vorverarbeitung der Daten fรผr die Modelle |
| [Training und Feintuning](https://huggingface.co/docs/transformers/training) | Verwendung der von ๐ค Transformers bereitgestellten Modelle in einer PyTorch-/TensorFlow-Trainingsschleife und der `Trainer`-API |
| [Schnelleinstieg: Feintuning/Anwendungsskripte](https://github.com/huggingface/transformers/tree/main/examples) | Beispielskripte fรผr das Feintuning von Modellen fรผr eine breite Palette von Aufgaben |
| [Modellfreigabe und -upload](https://huggingface.co/docs/transformers/model_sharing) | Laden Sie Ihre feingetunten Modelle hoch und teilen Sie sie mit der Community |
## Zitation
Wir haben jetzt ein [Paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/), das Sie fรผr die ๐ค Transformers-Bibliothek zitieren kรถnnen:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rรฉmi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/split_model_tests.py | # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is used to get the list of folders under `tests/models` and split the list into `NUM_SLICES` splits.
The main use case is a GitHub Actions workflow file calling this script to get the (nested) list of folders allowing it
to split the list of jobs to run into multiple slices each containing a smaller number of jobs. This way, we can bypass
the maximum of 256 jobs in a matrix.
See the `setup` and `run_models_gpu` jobs defined in the workflow file `.github/workflows/self-scheduled.yml` for more
details.
Usage:
This script is required to be run under `tests` folder of `transformers` root directory.
Assume we are under `transformers` root directory:
```bash
cd tests
python ../utils/split_model_tests.py --num_splits 64
```
"""
import argparse
import os
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--num_splits",
type=int,
default=1,
help="the number of splits into which the (flat) list of folders will be split.",
)
args = parser.parse_args()
    # Collect the list of test folders: each `models/<name>` folder plus the other top-level folders under `tests`.
    tests = os.getcwd()
    model_tests = os.listdir(os.path.join(tests, "models"))
    d1 = sorted(filter(os.path.isdir, os.listdir(tests)))
    d2 = sorted(filter(os.path.isdir, [f"models/{x}" for x in model_tests]))
    d1.remove("models")
    d = d2 + d1
    # Split the flat list into `--num_splits` slices whose sizes differ by at most one:
    # the first `num_jobs % num_splits` slices get one extra job.
    num_jobs = len(d)
    num_jobs_per_splits = num_jobs // args.num_splits
    model_splits = []
    end = 0
    for idx in range(args.num_splits):
        start = end
        end = start + num_jobs_per_splits + (1 if idx < num_jobs % args.num_splits else 0)
        model_splits.append(d[start:end])
print(model_splits)
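# Illustrative output (hypothetical folder names) for `python ../utils/split_model_tests.py --num_splits 2`
# when only five test folders exist:
# [['models/albert', 'models/bert', 'models/vit'], ['benchmark', 'generation']]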
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_docstrings.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks that all docstrings of public objects have an argument section matching their signature.
Use from the root of the repo with:
```bash
python utils/check_docstrings.py
```
for a check that will error in case of inconsistencies (used by `make repo-consistency`).
To auto-fix issues run:
```bash
python utils/check_docstrings.py --fix_and_overwrite
```
which is used by `make fix-copies` (note that this fills in what it can; you might have to manually fill in
information like argument descriptions).
"""
import argparse
import ast
import enum
import inspect
import operator as op
import re
from pathlib import Path
from typing import Any, Optional, Tuple, Union
from check_repo import ignore_undocumented
from transformers.utils import direct_transformers_import
PATH_TO_TRANSFORMERS = Path("src").resolve() / "transformers"
# This is to make sure the transformers module imported is the one in the repo.
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
OPTIONAL_KEYWORD = "*optional*"
# Re pattern that catches args blocks in docstrings (with all variation around the name supported).
_re_args = re.compile(r"^\s*(Args?|Arguments?|Attributes?|Params?|Parameters?):\s*$")
# Re pattern that parses the start of an arg block: catches <name> (<description>) in those lines.
_re_parse_arg = re.compile(r"^(\s*)(\S+)\s+\((.+)\)(?:\:|$)")
# Re pattern that parses the end of a description of an arg (catches the default in *optional*, defaults to xxx).
_re_parse_description = re.compile(r"\*optional\*, defaults to (.*)$")
# This is a temporary list of objects to ignore while we progressively fix them. Do not add anything here, fix the
# docstrings instead. If formatting should be ignored for the docstring, you can put a comment # no-format on the
# line before the docstring.
OBJECTS_TO_IGNORE = [
# Deprecated
"InputExample",
"InputFeatures",
# Signature is *args/**kwargs
# "PretrainedConfig", #ignored but could be fixed
# "GenerationConfig", #ignored but could be fixed
"TFSequenceSummary",
"TFBertTokenizer",
"TFGPT2Tokenizer",
# Missing arguments in the docstring
"ASTFeatureExtractor",
"AlbertModel",
"AlbertTokenizerFast",
"AlignTextModel",
"AlignVisionConfig",
"AudioClassificationPipeline",
"AutoformerConfig",
"AutomaticSpeechRecognitionPipeline",
"AzureOpenAiAgent",
"BarkCoarseConfig",
"BarkConfig",
"BarkFineConfig",
"BarkSemanticConfig",
"BartConfig",
"BartTokenizerFast",
"BarthezTokenizerFast",
"BeitModel",
"BertConfig",
"BertJapaneseTokenizer",
"BertModel",
"BertTokenizerFast",
"BigBirdConfig",
"BigBirdForQuestionAnswering",
"BigBirdModel",
"BigBirdPegasusConfig",
"BigBirdTokenizerFast",
"BitImageProcessor",
"BlenderbotConfig",
"BlenderbotSmallConfig",
"BlenderbotSmallTokenizerFast",
"BlenderbotTokenizerFast",
"Blip2QFormerConfig",
"Blip2VisionConfig",
"BlipTextConfig",
"BlipVisionConfig",
"BloomConfig",
"BloomTokenizerFast",
"BridgeTowerTextConfig",
"BridgeTowerVisionConfig",
"BrosModel",
"CamembertConfig",
"CamembertModel",
"CamembertTokenizerFast",
"CanineModel",
"CanineTokenizer",
"ChineseCLIPTextModel",
"ClapTextConfig",
"ConditionalDetrConfig",
"ConditionalDetrImageProcessor",
"ConvBertConfig",
"ConvBertTokenizerFast",
"ConvNextConfig",
"ConvNextV2Config",
"ConversationalPipeline",
"CpmAntTokenizer",
"CvtConfig",
"CvtModel",
"DeiTImageProcessor",
"DPRReaderTokenizer",
"DPRReaderTokenizerFast",
"DPTModel",
"Data2VecAudioConfig",
"Data2VecTextConfig",
"Data2VecTextModel",
"Data2VecVisionModel",
"DataCollatorForLanguageModeling",
"DebertaConfig",
"DebertaV2Config",
"DebertaV2Tokenizer",
"DebertaV2TokenizerFast",
"DecisionTransformerConfig",
"DeformableDetrConfig",
"DeformableDetrImageProcessor",
"DeiTModel",
"DepthEstimationPipeline",
"DetaConfig",
"DetaImageProcessor",
"DetrConfig",
"DetrImageProcessor",
"DinatModel",
"DistilBertConfig",
"DistilBertTokenizerFast",
"DocumentQuestionAnsweringPipeline",
"DonutSwinModel",
"EarlyStoppingCallback",
"EfficientFormerConfig",
"EfficientFormerImageProcessor",
"EfficientNetConfig",
"ElectraConfig",
"ElectraTokenizerFast",
"EncoderDecoderModel",
"EncoderRepetitionPenaltyLogitsProcessor",
"ErnieMModel",
"ErnieModel",
"ErnieMTokenizer",
"EsmConfig",
"EsmModel",
"FlaxAlbertForMaskedLM",
"FlaxAlbertForMultipleChoice",
"FlaxAlbertForPreTraining",
"FlaxAlbertForQuestionAnswering",
"FlaxAlbertForSequenceClassification",
"FlaxAlbertForTokenClassification",
"FlaxAlbertModel",
"FlaxBartForCausalLM",
"FlaxBartForConditionalGeneration",
"FlaxBartForQuestionAnswering",
"FlaxBartForSequenceClassification",
"FlaxBartModel",
"FlaxBeitForImageClassification",
"FlaxBeitForMaskedImageModeling",
"FlaxBeitModel",
"FlaxBertForCausalLM",
"FlaxBertForMaskedLM",
"FlaxBertForMultipleChoice",
"FlaxBertForNextSentencePrediction",
"FlaxBertForPreTraining",
"FlaxBertForQuestionAnswering",
"FlaxBertForSequenceClassification",
"FlaxBertForTokenClassification",
"FlaxBertModel",
"FlaxBigBirdForCausalLM",
"FlaxBigBirdForMaskedLM",
"FlaxBigBirdForMultipleChoice",
"FlaxBigBirdForPreTraining",
"FlaxBigBirdForQuestionAnswering",
"FlaxBigBirdForSequenceClassification",
"FlaxBigBirdForTokenClassification",
"FlaxBigBirdModel",
"FlaxBlenderbotForConditionalGeneration",
"FlaxBlenderbotModel",
"FlaxBlenderbotSmallForConditionalGeneration",
"FlaxBlenderbotSmallModel",
"FlaxBloomForCausalLM",
"FlaxBloomModel",
"FlaxCLIPModel",
"FlaxDistilBertForMaskedLM",
"FlaxDistilBertForMultipleChoice",
"FlaxDistilBertForQuestionAnswering",
"FlaxDistilBertForSequenceClassification",
"FlaxDistilBertForTokenClassification",
"FlaxDistilBertModel",
"FlaxElectraForCausalLM",
"FlaxElectraForMaskedLM",
"FlaxElectraForMultipleChoice",
"FlaxElectraForPreTraining",
"FlaxElectraForQuestionAnswering",
"FlaxElectraForSequenceClassification",
"FlaxElectraForTokenClassification",
"FlaxElectraModel",
"FlaxEncoderDecoderModel",
"FlaxGPT2LMHeadModel",
"FlaxGPT2Model",
"FlaxGPTJForCausalLM",
"FlaxGPTJModel",
"FlaxGPTNeoForCausalLM",
"FlaxGPTNeoModel",
"FlaxLlamaForCausalLM",
"FlaxLlamaModel",
"FlaxGemmaForCausalLM",
"FlaxGemmaModel",
"FlaxMBartForConditionalGeneration",
"FlaxMBartForQuestionAnswering",
"FlaxMBartForSequenceClassification",
"FlaxMBartModel",
"FlaxMarianMTModel",
"FlaxMarianModel",
"FlaxMistralForCausalLM",
"FlaxMistralModel",
"FlaxOPTForCausalLM",
"FlaxPegasusForConditionalGeneration",
"FlaxPegasusModel",
"FlaxRegNetForImageClassification",
"FlaxRegNetModel",
"FlaxResNetForImageClassification",
"FlaxResNetModel",
"FlaxRoFormerForMaskedLM",
"FlaxRoFormerForMultipleChoice",
"FlaxRoFormerForQuestionAnswering",
"FlaxRoFormerForSequenceClassification",
"FlaxRoFormerForTokenClassification",
"FlaxRoFormerModel",
"FlaxRobertaForCausalLM",
"FlaxRobertaForMaskedLM",
"FlaxRobertaForMultipleChoice",
"FlaxRobertaForQuestionAnswering",
"FlaxRobertaForSequenceClassification",
"FlaxRobertaForTokenClassification",
"FlaxRobertaModel",
"FlaxRobertaPreLayerNormForCausalLM",
"FlaxRobertaPreLayerNormForMaskedLM",
"FlaxRobertaPreLayerNormForMultipleChoice",
"FlaxRobertaPreLayerNormForQuestionAnswering",
"FlaxRobertaPreLayerNormForSequenceClassification",
"FlaxRobertaPreLayerNormForTokenClassification",
"FlaxRobertaPreLayerNormModel",
"FlaxSpeechEncoderDecoderModel",
"FlaxViTForImageClassification",
"FlaxViTModel",
"FlaxVisionEncoderDecoderModel",
"FlaxVisionTextDualEncoderModel",
"FlaxWav2Vec2ForCTC",
"FlaxWav2Vec2ForPreTraining",
"FlaxWav2Vec2Model",
"FlaxWhisperForAudioClassification",
"FlaxWhisperForConditionalGeneration",
"FlaxWhisperModel",
"FlaxWhisperTimeStampLogitsProcessor",
"FlaxXGLMForCausalLM",
"FlaxXGLMModel",
"FlaxXLMRobertaForCausalLM",
"FlaxXLMRobertaForMaskedLM",
"FlaxXLMRobertaForMultipleChoice",
"FlaxXLMRobertaForQuestionAnswering",
"FlaxXLMRobertaForSequenceClassification",
"FlaxXLMRobertaForTokenClassification",
"FlaxXLMRobertaModel",
"FNetConfig",
"FNetModel",
"FNetTokenizerFast",
"FSMTConfig",
"FeatureExtractionPipeline",
"FillMaskPipeline",
"FlaubertConfig",
"FlavaConfig",
"FlavaForPreTraining",
"FlavaImageModel",
"FlavaImageProcessor",
"FlavaMultimodalModel",
"FlavaTextConfig",
"FlavaTextModel",
"FocalNetModel",
"FunnelTokenizerFast",
"GPTBigCodeConfig",
"GPTJConfig",
"GPTNeoXConfig",
"GPTNeoXJapaneseConfig",
"GPTNeoXTokenizerFast",
"GPTSanJapaneseConfig",
"GitConfig",
"GitVisionConfig",
"GraphormerConfig",
"GroupViTTextConfig",
"GroupViTVisionConfig",
"HerbertTokenizerFast",
"HubertConfig",
"HubertForCTC",
"IBertConfig",
"IBertModel",
"IdeficsConfig",
"IdeficsProcessor",
"ImageClassificationPipeline",
"ImageFeatureExtractionPipeline",
"ImageGPTConfig",
"ImageSegmentationPipeline",
"ImageToImagePipeline",
"ImageToTextPipeline",
"InformerConfig",
"InstructBlipQFormerConfig",
"JukeboxPriorConfig",
"JukeboxTokenizer",
"LEDConfig",
"LEDTokenizerFast",
"LayoutLMForQuestionAnswering",
"LayoutLMTokenizerFast",
"LayoutLMv2Config",
"LayoutLMv2ForQuestionAnswering",
"LayoutLMv2TokenizerFast",
"LayoutLMv3Config",
"LayoutLMv3ImageProcessor",
"LayoutLMv3TokenizerFast",
"LayoutXLMTokenizerFast",
"LevitConfig",
"LiltConfig",
"LiltModel",
"LongT5Config",
"LongformerConfig",
"LongformerModel",
"LongformerTokenizerFast",
"LukeModel",
"LukeTokenizer",
"LxmertTokenizerFast",
"M2M100Config",
"M2M100Tokenizer",
"MarkupLMProcessor",
"MaskGenerationPipeline",
"MBart50TokenizerFast",
"MBartConfig",
"MCTCTFeatureExtractor",
"MPNetConfig",
"MPNetModel",
"MPNetTokenizerFast",
"MT5Config",
"MT5TokenizerFast",
"MarianConfig",
"MarianTokenizer",
"MarkupLMConfig",
"MarkupLMModel",
"MarkupLMTokenizer",
"MarkupLMTokenizerFast",
"Mask2FormerConfig",
"MaskFormerConfig",
"MaxTimeCriteria",
"MegaConfig",
"MegaModel",
"MegatronBertConfig",
"MegatronBertForPreTraining",
"MegatronBertModel",
"MobileBertConfig",
"MobileBertModel",
"MobileBertTokenizerFast",
"MobileNetV1ImageProcessor",
"MobileNetV1Model",
"MobileNetV2ImageProcessor",
"MobileNetV2Model",
"MobileViTModel",
"MobileViTV2Model",
"MLukeTokenizer",
"MraConfig",
"MusicgenDecoderConfig",
"MusicgenForConditionalGeneration",
"MusicgenMelodyForConditionalGeneration",
"MvpConfig",
"MvpTokenizerFast",
"MT5Tokenizer",
"NatModel",
"NerPipeline",
"NezhaConfig",
"NezhaModel",
"NllbMoeConfig",
"NllbTokenizer",
"NllbTokenizerFast",
"NystromformerConfig",
"OPTConfig",
"ObjectDetectionPipeline",
"OneFormerProcessor",
"OpenAIGPTTokenizerFast",
"OpenLlamaConfig",
"PLBartConfig",
"PegasusConfig",
"PegasusTokenizer",
"PegasusTokenizerFast",
"PegasusXConfig",
"PerceiverImageProcessor",
"PerceiverModel",
"PerceiverTokenizer",
"PersimmonConfig",
"Pipeline",
"Pix2StructConfig",
"Pix2StructTextConfig",
"PLBartTokenizer",
"Pop2PianoConfig",
"PreTrainedTokenizer",
"PreTrainedTokenizerBase",
"PreTrainedTokenizerFast",
"PrefixConstrainedLogitsProcessor",
"ProphetNetConfig",
"QDQBertConfig",
"QDQBertModel",
"QuestionAnsweringPipeline",
"RagConfig",
"RagModel",
"RagRetriever",
"RagSequenceForGeneration",
"RagTokenForGeneration",
"RealmConfig",
"RealmForOpenQA",
"RealmScorer",
"RealmTokenizerFast",
"ReformerConfig",
"ReformerTokenizerFast",
"RegNetConfig",
"RemBertConfig",
"RemBertModel",
"RemBertTokenizer",
"RemBertTokenizerFast",
"RepetitionPenaltyLogitsProcessor",
"RetriBertConfig",
"RetriBertTokenizerFast",
"RoCBertConfig",
"RoCBertModel",
"RoCBertTokenizer",
"RoFormerConfig",
"RobertaConfig",
"RobertaModel",
"RobertaPreLayerNormConfig",
"RobertaPreLayerNormModel",
"RobertaTokenizerFast",
"SEWConfig",
"SEWDConfig",
"SEWDForCTC",
"SEWForCTC",
"SamConfig",
"SamPromptEncoderConfig",
"SeamlessM4TConfig", # use of unconventional markdown
"SeamlessM4Tv2Config", # use of unconventional markdown
"Seq2SeqTrainingArguments",
"SpecialTokensMixin",
"Speech2Text2Config",
"Speech2Text2Tokenizer",
"Speech2TextTokenizer",
"SpeechEncoderDecoderModel",
"SpeechT5Config",
"SpeechT5Model",
"SplinterConfig",
"SplinterTokenizerFast",
"SqueezeBertTokenizerFast",
"SummarizationPipeline",
"Swin2SRImageProcessor",
"Swinv2Model",
"SwitchTransformersConfig",
"T5Config",
"T5Tokenizer",
"T5TokenizerFast",
"TableQuestionAnsweringPipeline",
"TableTransformerConfig",
"TapasConfig",
"TapasModel",
"TapasTokenizer",
"Text2TextGenerationPipeline",
"TextClassificationPipeline",
"TextGenerationPipeline",
"TFAlbertForMaskedLM",
"TFAlbertForMultipleChoice",
"TFAlbertForPreTraining",
"TFAlbertForQuestionAnswering",
"TFAlbertForSequenceClassification",
"TFAlbertForTokenClassification",
"TFAlbertModel",
"TFBartForConditionalGeneration",
"TFBartForSequenceClassification",
"TFBartModel",
"TFBertForMaskedLM",
"TFBertForMultipleChoice",
"TFBertForNextSentencePrediction",
"TFBertForPreTraining",
"TFBertForQuestionAnswering",
"TFBertForSequenceClassification",
"TFBertForTokenClassification",
"TFBertModel",
"TFBlenderbotForConditionalGeneration",
"TFBlenderbotModel",
"TFBlenderbotSmallForConditionalGeneration",
"TFBlenderbotSmallModel",
"TFBlipForConditionalGeneration",
"TFBlipForImageTextRetrieval",
"TFBlipForQuestionAnswering",
"TFCLIPModel",
"TFCTRLForSequenceClassification",
"TFCTRLLMHeadModel",
"TFCTRLModel",
"TFCamembertForCausalLM",
"TFCamembertForMaskedLM",
"TFCamembertForMultipleChoice",
"TFCamembertForQuestionAnswering",
"TFCamembertForSequenceClassification",
"TFCamembertForTokenClassification",
"TFCamembertModel",
"TFConvBertForMaskedLM",
"TFConvBertForMultipleChoice",
"TFConvBertForQuestionAnswering",
"TFConvBertForSequenceClassification",
"TFConvBertForTokenClassification",
"TFConvBertModel",
"TFConvNextForImageClassification",
"TFConvNextModel",
"TFConvNextV2Model", # Parsing issue. Equivalent to PT ConvNextV2Model, see PR #25558
"TFConvNextV2ForImageClassification",
"TFCvtForImageClassification",
"TFCvtModel",
"TFDPRReader",
"TFData2VecVisionForImageClassification",
"TFData2VecVisionForSemanticSegmentation",
"TFData2VecVisionModel",
"TFDebertaForMaskedLM",
"TFDebertaForQuestionAnswering",
"TFDebertaForSequenceClassification",
"TFDebertaForTokenClassification",
"TFDebertaModel",
"TFDebertaV2ForMaskedLM",
"TFDebertaV2ForMultipleChoice",
"TFDebertaV2ForQuestionAnswering",
"TFDebertaV2ForSequenceClassification",
"TFDebertaV2ForTokenClassification",
"TFDebertaV2Model",
"TFDeiTForImageClassification",
"TFDeiTForImageClassificationWithTeacher",
"TFDeiTForMaskedImageModeling",
"TFDeiTModel",
"TFDistilBertForMaskedLM",
"TFDistilBertForMultipleChoice",
"TFDistilBertForQuestionAnswering",
"TFDistilBertForSequenceClassification",
"TFDistilBertForTokenClassification",
"TFDistilBertModel",
"TFEfficientFormerForImageClassification",
"TFEfficientFormerForImageClassificationWithTeacher",
"TFEfficientFormerModel",
"TFElectraForMaskedLM",
"TFElectraForMultipleChoice",
"TFElectraForPreTraining",
"TFElectraForQuestionAnswering",
"TFElectraForSequenceClassification",
"TFElectraForTokenClassification",
"TFElectraModel",
"TFEncoderDecoderModel",
"TFEsmForMaskedLM",
"TFEsmForSequenceClassification",
"TFEsmForTokenClassification",
"TFEsmModel",
"TFFlaubertForMultipleChoice",
"TFFlaubertForQuestionAnsweringSimple",
"TFFlaubertForSequenceClassification",
"TFFlaubertForTokenClassification",
"TFFlaubertModel",
"TFFlaubertWithLMHeadModel",
"TFFunnelBaseModel",
"TFFunnelForMaskedLM",
"TFFunnelForMultipleChoice",
"TFFunnelForPreTraining",
"TFFunnelForQuestionAnswering",
"TFFunnelForSequenceClassification",
"TFFunnelForTokenClassification",
"TFFunnelModel",
"TFGPT2DoubleHeadsModel",
"TFGPT2ForSequenceClassification",
"TFGPT2LMHeadModel",
"TFGPT2Model",
"TFGPTJForCausalLM",
"TFGPTJForQuestionAnswering",
"TFGPTJForSequenceClassification",
"TFGPTJModel",
"TFGroupViTModel",
"TFHubertForCTC",
"TFHubertModel",
"TFLEDForConditionalGeneration",
"TFLEDModel",
"TFLayoutLMForMaskedLM",
"TFLayoutLMForQuestionAnswering",
"TFLayoutLMForSequenceClassification",
"TFLayoutLMForTokenClassification",
"TFLayoutLMModel",
"TFLayoutLMv3ForQuestionAnswering",
"TFLayoutLMv3ForSequenceClassification",
"TFLayoutLMv3ForTokenClassification",
"TFLayoutLMv3Model",
"TFLongformerForMaskedLM",
"TFLongformerForMultipleChoice",
"TFLongformerForQuestionAnswering",
"TFLongformerForSequenceClassification",
"TFLongformerForTokenClassification",
"TFLongformerModel",
"TFLxmertForPreTraining",
"TFLxmertModel",
"TFMBartForConditionalGeneration",
"TFMBartModel",
"TFMPNetForMaskedLM",
"TFMPNetForMultipleChoice",
"TFMPNetForQuestionAnswering",
"TFMPNetForSequenceClassification",
"TFMPNetForTokenClassification",
"TFMPNetModel",
"TFMarianMTModel",
"TFMarianModel",
"TFMobileBertForMaskedLM",
"TFMobileBertForMultipleChoice",
"TFMobileBertForNextSentencePrediction",
"TFMobileBertForPreTraining",
"TFMobileBertForQuestionAnswering",
"TFMobileBertForSequenceClassification",
"TFMobileBertForTokenClassification",
"TFMobileBertModel",
"TFMobileViTForImageClassification",
"TFMobileViTForSemanticSegmentation",
"TFMobileViTModel",
"TFOPTForCausalLM",
"TFOPTModel",
"TFOpenAIGPTDoubleHeadsModel",
"TFOpenAIGPTForSequenceClassification",
"TFOpenAIGPTLMHeadModel",
"TFOpenAIGPTModel",
"TFPegasusForConditionalGeneration",
"TFPegasusModel",
"TFRagModel",
"TFRagSequenceForGeneration",
"TFRagTokenForGeneration",
"TFRegNetForImageClassification",
"TFRegNetModel",
"TFRemBertForCausalLM",
"TFRemBertForMaskedLM",
"TFRemBertForMultipleChoice",
"TFRemBertForQuestionAnswering",
"TFRemBertForSequenceClassification",
"TFRemBertForTokenClassification",
"TFRemBertModel",
"TFRepetitionPenaltyLogitsProcessor",
"TFResNetForImageClassification",
"TFResNetModel",
"TFRoFormerForCausalLM",
"TFRoFormerForMaskedLM",
"TFRoFormerForMultipleChoice",
"TFRoFormerForQuestionAnswering",
"TFRoFormerForSequenceClassification",
"TFRoFormerForTokenClassification",
"TFRoFormerModel",
"TFRobertaForMaskedLM",
"TFRobertaForMultipleChoice",
"TFRobertaForQuestionAnswering",
"TFRobertaForSequenceClassification",
"TFRobertaForTokenClassification",
"TFRobertaModel",
"TFRobertaPreLayerNormForMaskedLM",
"TFRobertaPreLayerNormForMultipleChoice",
"TFRobertaPreLayerNormForQuestionAnswering",
"TFRobertaPreLayerNormForSequenceClassification",
"TFRobertaPreLayerNormForTokenClassification",
"TFRobertaPreLayerNormModel",
"TFSamModel",
"TFSegformerForImageClassification",
"TFSegformerForSemanticSegmentation",
"TFSegformerModel",
"TFSpeech2TextForConditionalGeneration",
"TFSpeech2TextModel",
"TFSwiftFormerForImageClassification",
"TFSwiftFormerModel",
"TFSwinForImageClassification",
"TFSwinForMaskedImageModeling",
"TFSwinModel",
"TFT5EncoderModel",
"TFT5ForConditionalGeneration",
"TFT5Model",
"TFTapasForMaskedLM",
"TFTapasForQuestionAnswering",
"TFTapasForSequenceClassification",
"TFTapasModel",
"TFTransfoXLForSequenceClassification",
"TFTransfoXLLMHeadModel",
"TFTransfoXLModel",
"TFViTForImageClassification",
"TFViTMAEForPreTraining",
"TFViTMAEModel",
"TFViTModel",
"TFVisionEncoderDecoderModel",
"TFVisionTextDualEncoderModel",
"TFWav2Vec2ForCTC",
"TFWav2Vec2Model",
"TFWhisperForConditionalGeneration",
"TFWhisperModel",
"TFXGLMForCausalLM",
"TFXGLMModel",
"TFXLMForMultipleChoice",
"TFXLMForQuestionAnsweringSimple",
"TFXLMForSequenceClassification",
"TFXLMForTokenClassification",
"TFXLMModel",
"TFXLMRobertaForCausalLM",
"TFXLMRobertaForMaskedLM",
"TFXLMRobertaForMultipleChoice",
"TFXLMRobertaForQuestionAnswering",
"TFXLMRobertaForSequenceClassification",
"TFXLMRobertaForTokenClassification",
"TFXLMRobertaModel",
"TFXLMWithLMHeadModel",
"TFXLNetForMultipleChoice",
"TFXLNetForQuestionAnsweringSimple",
"TFXLNetForSequenceClassification",
"TFXLNetForTokenClassification",
"TFXLNetLMHeadModel",
"TFXLNetModel",
"TimeSeriesTransformerConfig",
"TokenClassificationPipeline",
"TrOCRConfig",
"TrainerState",
"TrainingArguments",
"TrajectoryTransformerConfig",
"TranslationPipeline",
"TvltImageProcessor",
"UMT5Config",
"UperNetConfig",
"UperNetForSemanticSegmentation",
"ViTHybridImageProcessor",
"ViTHybridModel",
"ViTMSNModel",
"ViTModel",
"VideoClassificationPipeline",
"ViltConfig",
"ViltForImagesAndTextClassification",
"ViltModel",
"VisionEncoderDecoderModel",
"VisionTextDualEncoderModel",
"VisualBertConfig",
"VisualBertModel",
"VisualQuestionAnsweringPipeline",
"VitMatteForImageMatting",
"VitsTokenizer",
"VivitModel",
"Wav2Vec2BertForCTC",
"Wav2Vec2CTCTokenizer",
"Wav2Vec2Config",
"Wav2Vec2ConformerConfig",
"Wav2Vec2ConformerForCTC",
"Wav2Vec2FeatureExtractor",
"Wav2Vec2PhonemeCTCTokenizer",
"WavLMConfig",
"WavLMForCTC",
"WhisperConfig",
"WhisperFeatureExtractor",
"WhisperForAudioClassification",
"XCLIPTextConfig",
"XCLIPVisionConfig",
"XGLMConfig",
"XGLMModel",
"XGLMTokenizerFast",
"XLMConfig",
"XLMProphetNetConfig",
"XLMRobertaConfig",
"XLMRobertaModel",
"XLMRobertaTokenizerFast",
"XLMRobertaXLConfig",
"XLMRobertaXLModel",
"XLNetConfig",
"XLNetTokenizerFast",
"XmodConfig",
"XmodModel",
"YolosImageProcessor",
"YolosModel",
"YosoConfig",
"ZeroShotAudioClassificationPipeline",
"ZeroShotClassificationPipeline",
"ZeroShotImageClassificationPipeline",
"ZeroShotObjectDetectionPipeline",
]
# Supported math operations when interpreting the value of defaults.
MATH_OPERATORS = {
ast.Add: op.add,
ast.Sub: op.sub,
ast.Mult: op.mul,
ast.Div: op.truediv,
ast.Pow: op.pow,
ast.BitXor: op.xor,
ast.USub: op.neg,
}
def find_indent(line: str) -> int:
"""
Returns the number of spaces that start a line indent.
"""
search = re.search(r"^(\s*)(?:\S|$)", line)
if search is None:
return 0
return len(search.groups()[0])
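# Illustrative examples (not part of the original utility) of what `find_indent` returns:
#   find_indent("        arg (`int`): ...")  # -> 8
#   find_indent("def f():")                  # -> 0
#   find_indent("")                          # -> 0 (empty lines count as zero indent)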
def stringify_default(default: Any) -> str:
"""
Returns the string representation of a default value, as used in docstring: numbers are left as is, all other
objects are in backticks.
Args:
default (`Any`): The default value to process
Returns:
`str`: The string representation of that default.
"""
if isinstance(default, bool):
# We need to test for bool first as a bool passes isinstance(xxx, (int, float))
return f"`{default}`"
elif isinstance(default, enum.Enum):
# We need to test for enum first as an enum with int values will pass isinstance(xxx, (int, float))
return f"`{str(default)}`"
elif isinstance(default, int):
return str(default)
elif isinstance(default, float):
result = str(default)
return str(round(default, 2)) if len(result) > 6 else result
elif isinstance(default, str):
return str(default) if default.isnumeric() else f'`"{default}"`'
elif isinstance(default, type):
return f"`{default.__name__}`"
else:
return f"`{default}`"
def eval_math_expression(expression: str) -> Optional[Union[float, int]]:
# Mainly taken from the excellent https://stackoverflow.com/a/9558001
"""
    Evaluates (safely) a mathematical expression and returns its value.
Args:
expression (`str`): The expression to evaluate.
Returns:
`Optional[Union[float, int]]`: Returns `None` if the evaluation fails in any way and the value computed
otherwise.
Example:
```py
    >>> eval_math_expression('2^6')
    4
    >>> eval_math_expression('2**6')
    64
    >>> eval_math_expression('1 + 2*3**(4^5) / (6 + -7)')
    -5.0
```
"""
try:
return eval_node(ast.parse(expression, mode="eval").body)
except TypeError:
return
def eval_node(node):
if isinstance(node, ast.Num): # <number>
return node.n
elif isinstance(node, ast.BinOp): # <left> <operator> <right>
return MATH_OPERATORS[type(node.op)](eval_node(node.left), eval_node(node.right))
elif isinstance(node, ast.UnaryOp): # <operator> <operand> e.g., -1
return MATH_OPERATORS[type(node.op)](eval_node(node.operand))
else:
raise TypeError(node)
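# Illustrative behaviour of the two helpers above (sketch, not part of the original utility):
#   eval_math_expression("1/255")             # -> the float value of 1/255 (~0.0039)
#   eval_math_expression("__import__('os')")  # -> None (unsupported node, the TypeError is caught)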
def replace_default_in_arg_description(description: str, default: Any) -> str:
"""
Catches the default value in the description of an argument inside a docstring and replaces it by the value passed.
Args:
description (`str`): The description of an argument in a docstring to process.
        default (`Any`): The default value that should be in the docstring of that argument.
Returns:
`str`: The description updated with the new default value.
"""
    # Lots of docstrings have `optional` or **optional** instead of *optional* so we do this fix here.
description = description.replace("`optional`", OPTIONAL_KEYWORD)
description = description.replace("**optional**", OPTIONAL_KEYWORD)
if default is inspect._empty:
# No default, make sure the description doesn't have any either
idx = description.find(OPTIONAL_KEYWORD)
if idx != -1:
description = description[:idx].rstrip()
if description.endswith(","):
description = description[:-1].rstrip()
elif default is None:
        # Defaults of None are not written, we just set `*optional*`. If a default that is not None is specified in the
# description, we do not erase it (as sometimes we set the default to `None` because the default is a mutable
# object).
idx = description.find(OPTIONAL_KEYWORD)
if idx == -1:
description = f"{description}, {OPTIONAL_KEYWORD}"
elif re.search(r"defaults to `?None`?", description) is not None:
len_optional = len(OPTIONAL_KEYWORD)
description = description[: idx + len_optional]
else:
str_default = None
# For numbers we may have a default that is given by a math operation (1/255 is really popular). We don't
# want to replace those by their actual values.
if isinstance(default, (int, float)) and re.search("defaults to `?(.*?)(?:`|$)", description) is not None:
# Grab the default and evaluate it.
current_default = re.search("defaults to `?(.*?)(?:`|$)", description).groups()[0]
if default == eval_math_expression(current_default):
try:
# If it can be directly converted to the type of the default, it's a simple value
str_default = str(type(default)(current_default))
except Exception:
# Otherwise there is a math operator so we add a code block.
str_default = f"`{current_default}`"
elif isinstance(default, enum.Enum) and default.name == current_default.split(".")[-1]:
# When the default is an Enum (this is often the case for PIL.Image.Resampling), and the docstring
# matches the enum name, keep the existing docstring rather than clobbering it with the enum value.
str_default = f"`{current_default}`"
if str_default is None:
str_default = stringify_default(default)
# Make sure default match
if OPTIONAL_KEYWORD not in description:
description = f"{description}, {OPTIONAL_KEYWORD}, defaults to {str_default}"
elif _re_parse_description.search(description) is None:
idx = description.find(OPTIONAL_KEYWORD)
len_optional = len(OPTIONAL_KEYWORD)
description = f"{description[:idx + len_optional]}, defaults to {str_default}"
else:
description = _re_parse_description.sub(rf"*optional*, defaults to {str_default}", description)
return description
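# A minimal sketch of the rewriting above, assuming OPTIONAL_KEYWORD is "*optional*" and the regexes
# defined earlier in this file (illustrative, not part of the original utility):
#   replace_default_in_arg_description("`int`, *optional*, defaults to 7", 8)
#       -> "`int`, *optional*, defaults to 8"
#   replace_default_in_arg_description("`int`", None)
#       -> "`int`, *optional*"
#   replace_default_in_arg_description("`int`, *optional*", inspect._empty)
#       -> "`int`"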
def get_default_description(arg: inspect.Parameter) -> str:
"""
Builds a default description for a parameter that was not documented.
Args:
arg (`inspect.Parameter`): The argument in the signature to generate a description for.
Returns:
`str`: The description.
"""
if arg.annotation is inspect._empty:
arg_type = "<fill_type>"
elif hasattr(arg.annotation, "__name__"):
arg_type = arg.annotation.__name__
else:
arg_type = str(arg.annotation)
if arg.default is inspect._empty:
return f"`{arg_type}`"
elif arg.default is None:
return f"`{arg_type}`, {OPTIONAL_KEYWORD}"
else:
str_default = stringify_default(arg.default)
return f"`{arg_type}`, {OPTIONAL_KEYWORD}, defaults to {str_default}"
def find_source_file(obj: Any) -> Path:
"""
Finds the source file of an object.
Args:
obj (`Any`): The object whose source file we are looking for.
Returns:
`Path`: The source file.
"""
module = obj.__module__
obj_file = PATH_TO_TRANSFORMERS
for part in module.split(".")[1:]:
obj_file = obj_file / part
return obj_file.with_suffix(".py")
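# Illustrative example (assuming PATH_TO_TRANSFORMERS points at `src/transformers`, as set earlier in
# this file): find_source_file(transformers.BertModel) -> Path("src/transformers/models/bert/modeling_bert.py")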
def match_docstring_with_signature(obj: Any) -> Optional[Tuple[str, str]]:
"""
Matches the docstring of an object with its signature.
Args:
obj (`Any`): The object to process.
Returns:
`Optional[Tuple[str, str]]`: Returns `None` if there is no docstring or no parameters documented in the
docstring, otherwise returns a tuple of two strings: the current documentation of the arguments in the
docstring and the one matched with the signature.
"""
if len(getattr(obj, "__doc__", "")) == 0:
# Nothing to do, there is no docstring.
return
# Read the docstring in the source code to see if there is a special command to ignore this object.
try:
source, _ = inspect.getsourcelines(obj)
except OSError:
source = []
idx = 0
while idx < len(source) and '"""' not in source[idx]:
idx += 1
ignore_order = False
if idx < len(source):
line_before_docstring = source[idx - 1]
if re.search(r"^\s*#\s*no-format\s*$", line_before_docstring):
# This object is ignored
return
elif re.search(r"^\s*#\s*ignore-order\s*$", line_before_docstring):
ignore_order = True
# Read the signature
signature = inspect.signature(obj).parameters
obj_doc_lines = obj.__doc__.split("\n")
# Get to the line where we start documenting arguments
idx = 0
while idx < len(obj_doc_lines) and _re_args.search(obj_doc_lines[idx]) is None:
idx += 1
if idx == len(obj_doc_lines):
# Nothing to do, no parameters are documented.
return
indent = find_indent(obj_doc_lines[idx])
arguments = {}
current_arg = None
idx += 1
start_idx = idx
# Keep going until the arg section is finished (nonempty line at the same indent level) or the end of the docstring.
while idx < len(obj_doc_lines) and (
len(obj_doc_lines[idx].strip()) == 0 or find_indent(obj_doc_lines[idx]) > indent
):
if find_indent(obj_doc_lines[idx]) == indent + 4:
# New argument -> let's generate the proper doc for it
re_search_arg = _re_parse_arg.search(obj_doc_lines[idx])
if re_search_arg is not None:
_, name, description = re_search_arg.groups()
current_arg = name
if name in signature:
default = signature[name].default
if signature[name].kind is inspect._ParameterKind.VAR_KEYWORD:
default = None
new_description = replace_default_in_arg_description(description, default)
else:
new_description = description
init_doc = _re_parse_arg.sub(rf"\1\2 ({new_description}):", obj_doc_lines[idx])
arguments[current_arg] = [init_doc]
elif current_arg is not None:
arguments[current_arg].append(obj_doc_lines[idx])
idx += 1
# We went too far by one (perhaps more if there are a lot of new lines)
idx -= 1
while len(obj_doc_lines[idx].strip()) == 0:
arguments[current_arg] = arguments[current_arg][:-1]
idx -= 1
# And we went too far by one again.
idx += 1
old_doc_arg = "\n".join(obj_doc_lines[start_idx:idx])
old_arguments = list(arguments.keys())
arguments = {name: "\n".join(doc) for name, doc in arguments.items()}
# Add missing arguments with a template
for name in set(signature.keys()) - set(arguments.keys()):
arg = signature[name]
# We ignore private arguments or *args/**kwargs (unless they are documented by the user)
if name.startswith("_") or arg.kind in [
inspect._ParameterKind.VAR_KEYWORD,
inspect._ParameterKind.VAR_POSITIONAL,
]:
arguments[name] = ""
else:
arg_desc = get_default_description(arg)
arguments[name] = " " * (indent + 4) + f"{name} ({arg_desc}): <fill_docstring>"
# Arguments are sorted by the order in the signature unless a special comment is put.
if ignore_order:
new_param_docs = [arguments[name] for name in old_arguments if name in signature]
missing = set(signature.keys()) - set(old_arguments)
new_param_docs.extend([arguments[name] for name in missing if len(arguments[name]) > 0])
else:
new_param_docs = [arguments[name] for name in signature.keys() if len(arguments[name]) > 0]
new_doc_arg = "\n".join(new_param_docs)
return old_doc_arg, new_doc_arg
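# Sketch of the contract above (illustrative): for an object whose `Args:` section documents
# `hidden_size` but not `num_attention_heads`, the returned pair would roughly be
#   old_doc_args -> the current "hidden_size (...)" lines as written in the docstring
#   new_doc_args -> the same lines with their defaults refreshed from the signature, plus a template
#                   line such as "num_attention_heads (`int`, ...): <fill_docstring>"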
def fix_docstring(obj: Any, old_doc_args: str, new_doc_args: str):
"""
    Fixes the docstring of an object by replacing its arguments documentation with the one matched with the signature.
Args:
obj (`Any`):
            The object whose docstring we are fixing.
old_doc_args (`str`):
The current documentation of the parameters of `obj` in the docstring (as returned by
`match_docstring_with_signature`).
new_doc_args (`str`):
The documentation of the parameters of `obj` matched with its signature (as returned by
`match_docstring_with_signature`).
"""
# Read the docstring in the source code and make sure we have the right part of the docstring
source, line_number = inspect.getsourcelines(obj)
# Get to the line where we start documenting arguments
idx = 0
while idx < len(source) and _re_args.search(source[idx]) is None:
idx += 1
if idx == len(source):
# Args are not defined in the docstring of this object
return
# Get to the line where we stop documenting arguments
indent = find_indent(source[idx])
idx += 1
start_idx = idx
while idx < len(source) and (len(source[idx].strip()) == 0 or find_indent(source[idx]) > indent):
idx += 1
idx -= 1
while len(source[idx].strip()) == 0:
idx -= 1
idx += 1
if "".join(source[start_idx:idx])[:-1] != old_doc_args:
# Args are not fully defined in the docstring of this object
return
obj_file = find_source_file(obj)
with open(obj_file, "r", encoding="utf-8") as f:
content = f.read()
# Replace content
lines = content.split("\n")
lines = lines[: line_number + start_idx - 1] + [new_doc_args] + lines[line_number + idx - 1 :]
print(f"Fixing the docstring of {obj.__name__} in {obj_file}.")
with open(obj_file, "w", encoding="utf-8") as f:
f.write("\n".join(lines))
def check_docstrings(overwrite: bool = False):
"""
Check docstrings of all public objects that are callables and are documented.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether to fix inconsistencies or not.
"""
failures = []
hard_failures = []
to_clean = []
for name in dir(transformers):
# Skip objects that are private or not documented.
if name.startswith("_") or ignore_undocumented(name) or name in OBJECTS_TO_IGNORE:
continue
obj = getattr(transformers, name)
if not callable(obj) or not isinstance(obj, type) or getattr(obj, "__doc__", None) is None:
continue
# Check docstring
try:
result = match_docstring_with_signature(obj)
if result is not None:
old_doc, new_doc = result
else:
old_doc, new_doc = None, None
except Exception as e:
print(e)
hard_failures.append(name)
continue
if old_doc != new_doc:
if overwrite:
fix_docstring(obj, old_doc, new_doc)
else:
failures.append(name)
elif not overwrite and new_doc is not None and ("<fill_type>" in new_doc or "<fill_docstring>" in new_doc):
to_clean.append(name)
# Deal with errors
error_message = ""
if len(hard_failures) > 0:
error_message += (
"The argument part of the docstrings of the following objects could not be processed, check they are "
"properly formatted."
)
error_message += "\n" + "\n".join([f"- {name}" for name in hard_failures])
if len(failures) > 0:
error_message += (
"The following objects docstrings do not match their signature. Run `make fix-copies` to fix this. "
"In some cases, this error may be raised incorrectly by the docstring checker. If you think this is the "
"case, you can manually check the docstrings and then add the object name to `OBJECTS_TO_IGNORE` in "
"`utils/check_docstrings.py`."
)
error_message += "\n" + "\n".join([f"- {name}" for name in failures])
if len(to_clean) > 0:
error_message += (
"The following objects docstrings contain templates you need to fix: search for `<fill_type>` or "
"`<fill_docstring>`."
)
error_message += "\n" + "\n".join([f"- {name}" for name in to_clean])
if len(error_message) > 0:
error_message = "There was at least one problem when checking docstrings of public objects.\n" + error_message
raise ValueError(error_message)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
check_docstrings(overwrite=args.fix_and_overwrite)
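# Typical invocations (illustrative): `python utils/check_docstrings.py` reports mismatches, while
# `python utils/check_docstrings.py --fix_and_overwrite` rewrites the offending docstrings in place.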
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/extract_warnings.py | import argparse
import json
import os
import time
import zipfile
from get_ci_error_statistics import download_artifact, get_artifacts_links
from transformers import logging
logger = logging.get_logger(__name__)
def extract_warnings_from_single_artifact(artifact_path, targets):
"""Extract warnings from a downloaded artifact (in .zip format)"""
selected_warnings = set()
buffer = []
def parse_line(fp):
for line in fp:
if isinstance(line, bytes):
line = line.decode("UTF-8")
if "warnings summary (final)" in line:
continue
# This means we are outside the body of a warning
elif not line.startswith(" "):
# process a single warning and move it to `selected_warnings`.
if len(buffer) > 0:
warning = "\n".join(buffer)
# Only keep the warnings specified in `targets`
if any(f": {x}: " in warning for x in targets):
selected_warnings.add(warning)
buffer.clear()
continue
else:
line = line.strip()
buffer.append(line)
if from_gh:
for filename in os.listdir(artifact_path):
file_path = os.path.join(artifact_path, filename)
if not os.path.isdir(file_path):
# read the file
if filename != "warnings.txt":
continue
with open(file_path) as fp:
parse_line(fp)
else:
try:
with zipfile.ZipFile(artifact_path) as z:
for filename in z.namelist():
if not os.path.isdir(filename):
# read the file
if filename != "warnings.txt":
continue
with z.open(filename) as fp:
parse_line(fp)
except Exception:
logger.warning(
f"{artifact_path} is either an invalid zip file or something else wrong. This file is skipped."
)
return selected_warnings
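# Illustrative sketch: with targets=["DeprecationWarning"], a buffered warning whose joined text looks like
#   "src/transformers/modeling_utils.py:123: DeprecationWarning: this argument is deprecated"
# is kept, because it contains ": DeprecationWarning: " (the file path and line number are made up here).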
def extract_warnings(artifact_dir, targets):
"""Extract warnings from all artifact files"""
selected_warnings = set()
paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if (p.endswith(".zip") or from_gh)]
for p in paths:
selected_warnings.update(extract_warnings_from_single_artifact(p, targets))
return selected_warnings
if __name__ == "__main__":
def list_str(values):
return values.split(",")
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.")
parser.add_argument(
"--output_dir",
type=str,
required=True,
help="Where to store the downloaded artifacts and other result files.",
)
parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.")
# optional parameters
parser.add_argument(
"--targets",
default="DeprecationWarning,UserWarning,FutureWarning",
type=list_str,
help="Comma-separated list of target warning(s) which we want to extract.",
)
parser.add_argument(
"--from_gh",
action="store_true",
help="If running from a GitHub action workflow and collecting warnings from its artifacts.",
)
args = parser.parse_args()
from_gh = args.from_gh
if from_gh:
# The artifacts have to be downloaded using `actions/download-artifact@v4`
pass
else:
os.makedirs(args.output_dir, exist_ok=True)
# get download links
artifacts = get_artifacts_links(args.workflow_run_id, token=args.token)
with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp:
json.dump(artifacts, fp, ensure_ascii=False, indent=4)
# download artifacts
for idx, (name, url) in enumerate(artifacts.items()):
print(name)
print(url)
print("=" * 80)
download_artifact(name, url, args.output_dir, args.token)
# Be gentle to GitHub
time.sleep(1)
# extract warnings from artifacts
selected_warnings = extract_warnings(args.output_dir, args.targets)
selected_warnings = sorted(selected_warnings)
with open(os.path.join(args.output_dir, "selected_warnings.json"), "w", encoding="UTF-8") as fp:
json.dump(selected_warnings, fp, ensure_ascii=False, indent=4)
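# Example invocation (illustrative, values are placeholders):
#   python utils/extract_warnings.py --workflow_run_id <run_id> --output_dir warnings_out \
#       --token <token> --targets DeprecationWarning,UserWarning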
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/past_ci_versions.py | import argparse
import os
past_versions_testing = {
"pytorch": {
"1.13": {
"torch": "1.13.1",
"torchvision": "0.14.1",
"torchaudio": "0.13.1",
"python": 3.9,
"cuda": "cu116",
"install": (
"python3 -m pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1"
" --extra-index-url https://download.pytorch.org/whl/cu116"
),
"base_image": "nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04",
},
"1.12": {
"torch": "1.12.1",
"torchvision": "0.13.1",
"torchaudio": "0.12.1",
"python": 3.9,
"cuda": "cu113",
"install": (
"python3 -m pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1"
" --extra-index-url https://download.pytorch.org/whl/cu113"
),
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"1.11": {
"torch": "1.11.0",
"torchvision": "0.12.0",
"torchaudio": "0.11.0",
"python": 3.9,
"cuda": "cu113",
"install": (
"python3 -m pip install --no-cache-dir -U torch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0"
" --extra-index-url https://download.pytorch.org/whl/cu113"
),
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"1.10": {
"torch": "1.10.2",
"torchvision": "0.11.3",
"torchaudio": "0.10.2",
"python": 3.9,
"cuda": "cu113",
"install": (
"python3 -m pip install --no-cache-dir -U torch==1.10.2 torchvision==0.11.3 torchaudio==0.10.2"
" --extra-index-url https://download.pytorch.org/whl/cu113"
),
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
# torchaudio < 0.10 has no CUDA-enabled binary distributions
"1.9": {
"torch": "1.9.1",
"torchvision": "0.10.1",
"torchaudio": "0.9.1",
"python": 3.9,
"cuda": "cu111",
"install": (
"python3 -m pip install --no-cache-dir -U torch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1"
" --extra-index-url https://download.pytorch.org/whl/cu111"
),
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
},
"tensorflow": {
"2.11": {
"tensorflow": "2.11.1",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.11.1",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.10": {
"tensorflow": "2.10.1",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.10.1",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.9": {
"tensorflow": "2.9.3",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.9.3",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.8": {
"tensorflow": "2.8.2",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.8.2",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.7": {
"tensorflow": "2.7.3",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.7.3",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.6": {
"tensorflow": "2.6.5",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.6.5",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
"2.5": {
"tensorflow": "2.5.3",
"install": "python3 -m pip install --no-cache-dir -U tensorflow==2.5.3",
"base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04",
},
},
}
if __name__ == "__main__":
parser = argparse.ArgumentParser("Choose the framework and version to install")
parser.add_argument(
"--framework", help="The framework to install. Should be `torch` or `tensorflow`", type=str, required=True
)
parser.add_argument("--version", help="The version of the framework to install.", type=str, required=True)
args = parser.parse_args()
info = past_versions_testing[args.framework][args.version]
os.system(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile')
print(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile')
cuda = ""
if args.framework == "pytorch":
cuda = info["cuda"]
os.system(f"echo \"export CUDA='{cuda}'\" >> ~/.profile")
print(f"echo \"export CUDA='{cuda}'\" >> ~/.profile")
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_copies.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks whether the copies defined in the library match the original or not. This includes:
- All code commented with `# Copied from` comments,
- The list of models in the main README.md matches the ones in the localized READMEs,
- Files that are registered as full copies of one another in the `FULL_COPIES` constant of this script.
This also checks that the list of models in the README is complete (has all models) and adds a line to complete it if a
model is missing.
Use from the root of the repo with:
```bash
python utils/check_copies.py
```
for a check that will error in case of inconsistencies (used by `make repo-consistency`) or
```bash
python utils/check_copies.py --fix_and_overwrite
```
for a check that will fix all inconsistencies automatically (used by `make fix-copies`).
"""
import argparse
import glob
import os
import re
import subprocess
from collections import OrderedDict
from typing import List, Optional, Tuple, Union
from transformers.utils import direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_copies.py
TRANSFORMERS_PATH = "src/transformers"
MODEL_TEST_PATH = "tests/models"
PATH_TO_DOCS = "docs/source/en"
REPO_PATH = "."
# Mapping for files that are full copies of others (keys are copies, values the file to keep them up to date with)
FULL_COPIES = {
"examples/tensorflow/question-answering/utils_qa.py": "examples/pytorch/question-answering/utils_qa.py",
"examples/flax/question-answering/utils_qa.py": "examples/pytorch/question-answering/utils_qa.py",
}
LOCALIZED_READMES = {
# If the introduction or the conclusion of the list change, the prompts may need to be updated.
"README.md": {
"start_prompt": "๐ค Transformers currently provides the following architectures",
"end_prompt": "1. Want to contribute a new model?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_zh-hans.md": {
"start_prompt": "๐ค Transformers ็ฎๅๆฏๆๅฆไธ็ๆถๆ",
"end_prompt": "1. ๆณ่ฆ่ดก็ฎๆฐ็ๆจกๅ๏ผ",
"format_model_list": (
"**[{title}]({model_link})** (ๆฅ่ช {paper_affiliations}) ไผด้่ฎบๆ {paper_title_link} ็ฑ {paper_authors}"
" ๅๅธใ{supplements}"
),
},
"README_zh-hant.md": {
"start_prompt": "๐ค Transformers ็ฎๅๆฏๆดไปฅไธ็ๆถๆง",
"end_prompt": "1. ๆณ่ฆ่ฒข็ปๆฐ็ๆจกๅ๏ผ",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_ko.md": {
"start_prompt": "๐ค Transformers๋ ๋ค์ ๋ชจ๋ธ๋ค์ ์ ๊ณตํฉ๋๋ค",
"end_prompt": "1. ์๋ก์ด ๋ชจ๋ธ์ ์ฌ๋ฆฌ๊ณ ์ถ๋์?",
"format_model_list": (
"**[{title}]({model_link})** ({paper_affiliations} ์์ ์ ๊ณต)์ {paper_authors}.{supplements}์"
" {paper_title_link}๋
ผ๋ฌธ๊ณผ ํจ๊ป ๋ฐํํ์ต๋๋ค."
),
},
"README_es.md": {
"start_prompt": "๐ค Transformers actualmente proporciona las siguientes arquitecturas",
"end_prompt": "1. ยฟQuieres aportar un nuevo modelo?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_ja.md": {
"start_prompt": "๐คTransformersใฏ็พๅจใไปฅไธใฎใขใผใญใใฏใใฃใๆไพใใฆใใพใ",
"end_prompt": "1. ๆฐใใใขใใซใๆ็จฟใใใใงใใ๏ผ",
"format_model_list": (
"**[{title}]({model_link})** ({paper_affiliations} ใใ) {paper_authors}.{supplements} ใใๅ
ฌ้ใใใ็ ็ฉถ่ซๆ"
" {paper_title_link}"
),
},
"README_hd.md": {
"start_prompt": "๐ค เคเฅเคฐเคพเคเคธเคซเฅเคฐเฅเคฎเคฐ เคตเคฐเฅเคคเคฎเคพเคจ เคฎเฅเค เคจเคฟเคฎเฅเคจเคฒเคฟเคเคฟเคค เคเคฐเฅเคเคฟเคเฅเคเฅเคเคฐ เคเคพ เคธเคฎเคฐเฅเคฅเคจ เคเคฐเคคเฅ เคนเฅเค",
"end_prompt": "1. เคเค เคจเค เคฎเฅเคกเคฒ เคฎเฅเค เคฏเฅเคเคฆเคพเคจ เคฆเฅเคจเคพ เคเคพเคนเคคเฅ เคนเฅเค?",
"format_model_list": (
"**[{title}]({model_link})** ({paper_affiliations} เคธเฅ) {paper_authors}.{supplements} เคฆเฅเคตเคพเคฐเคพ"
"เค
เคจเฅเคธเคเคงเคพเคจ เคชเคคเฅเคฐ {paper_title_link} เคเฅ เคธเคพเคฅ เคเคพเคฐเฅ เคเคฟเคฏเคพ เคเคฏเคพ"
),
},
"README_ru.md": {
"start_prompt": "๐ค ะ ะฝะฐััะพััะตะต ะฒัะตะผั Transformers ะฟัะตะดะพััะฐะฒะปัะตั ัะปะตะดัััะธะต ะฐัั
ะธัะตะบัััั",
"end_prompt": "1. ะฅะพัะธัะต ะฒะฝะตััะธ ะฝะพะฒัั ะผะพะดะตะปั?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_pt-br.md": {
"start_prompt": "๐ค Transformers atualmente fornece as seguintes arquiteturas",
"end_prompt": "1. Quer contribuir com um novo modelo?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_te.md": {
"start_prompt": "๐ค เฐเฑเฐฐเฐพเฐจเฑเฐธเฑโเฐซเฐพเฐฐเฑเฐฎเฐฐเฑเฐฒเฑ เฐชเฑเฐฐเฐธเฑเฐคเฑเฐคเฐ เฐเฐฟเฐเฐฆเฐฟ เฐเฐฐเฑเฐเฐฟเฐเฑเฐเฑเฐเฐฐเฑโเฐฒเฐจเฑ เฐ
เฐเฐฆเฐเฑเฐธเฑเฐคเฑเฐจเฑเฐจเฐพเฐฏเฐฟ",
"end_prompt": "1. เฐเฑเฐคเฑเฐค เฐฎเฑเฐกเฐฒเฑโเฐจเฑ เฐ
เฐเฐฆเฐฟเฐเฐเฐพเฐฒเฐจเฑเฐเฑเฐเฐเฑเฐจเฑเฐจเฐพเฐฐเฐพ?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_fr.md": {
"start_prompt": "๐ค Transformers fournit actuellement les architectures suivantes",
"end_prompt": "1. Vous souhaitez contribuer avec un nouveau modรจle ?",
"format_model_list": (
"**[{title}]({model_link})** (de {paper_affiliations}) publiรฉ dans l'article {paper_title_link} par"
"{paper_authors}.{supplements}"
),
},
"README_de.md": {
"start_prompt": "๐ค Transformers bietet derzeit die folgenden Architekturen an",
"end_prompt": "1. Mรถchten Sie ein neues Modell beitragen?",
"format_model_list": (
"**[{title}]({model_link})** (from {paper_affiliations}) released with the paper {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
"README_vi.md": {
"start_prompt": "๐ค Transformers hiแปn ฤang cung cแบฅp cรกc kiแบฟn trรบc sau ฤรขy",
"end_prompt": "1. Muแปn ฤรณng gรณp mแปt mรด hรฌnh mแปi?",
"format_model_list": (
"**[{title}]({model_link})** (tแปซ {paper_affiliations}) ฤฦฐแปฃc phรกt hร nh vแปi bร i bรกo {paper_title_link} by"
" {paper_authors}.{supplements}"
),
},
}
# This is to make sure the transformers module imported is the one in the repo.
transformers_module = direct_transformers_import(TRANSFORMERS_PATH)
def _is_definition_header_ending_line(line: str) -> bool:
# Helper function. Returns `True` if `line` is the end parenthesis of a class/function definition
return re.search(r"^\s*\)(\s*->.*:|:)\s*$", line) is not None
def _should_continue(line: str, indent: str) -> bool:
# Helper function. Returns `True` if `line` is empty, starts with the `indent` or is the end parenthesis of a
# class/function definition
return line.startswith(indent) or len(line.strip()) == 0 or _is_definition_header_ending_line(line)
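# Illustrative behaviour of the two helpers above (sketch, not part of the original utility):
#   _is_definition_header_ending_line(") -> BertModel:")  # -> True
#   _is_definition_header_ending_line("    ):")           # -> True
#   _should_continue("        x = 1", "    ")             # -> True (still inside the indented body)
#   _should_continue("def next_fn():", "    ")            # -> False (a new top-level definition)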
def _sanity_check_splits(splits_1, splits_2, is_class):
"""Check the two (inner) block structures of the corresponding code block given by `split_code_into_blocks` match.
For the case of `class`, they must be of one of the following 3 cases:
- a single block without name:
class foo:
a = 1
- a consecutive sequence of (1 or more) blocks with name
class foo:
def f(x):
return x
- a block without name, followed by a consecutive sequence of (1 or more) blocks with name
class foo:
a = 1
def f(x):
return x
def g(x):
return None
The 2 code snippets that give `splits_1` and `splits_2` have to be in the same case to pass this check, but the
number of blocks with name in the consecutive sequence is not taken into account.
For the case of `function or method`, we don't require it to be in one of the above 3 cases. However, the structure
    of `splits_1` and `splits_2` has to match exactly. In particular, the number of blocks with name in a consecutive
sequence is taken into account.
"""
block_names_1 = []
block_names_2 = []
for block in splits_1[1:]:
if block[0].startswith("_block_without_name_"):
block_names_1.append("block_without_name")
elif not block[0].startswith("_empty_block_") and (
not is_class or len(block_names_1) == 0 or block_names_1[-1].startswith("block_without_name")
):
block_names_1.append("block_with_name")
for block in splits_2[1:]:
if block[0].startswith("_block_without_name_"):
block_names_2.append("block_without_name")
elif not block[0].startswith("_empty_block_") and (
not is_class or len(block_names_2) == 0 or block_names_2[-1].startswith("block_without_name")
):
block_names_2.append("block_with_name")
if is_class:
if block_names_1 not in [
["block_without_name"],
["block_with_name"],
["block_without_name", "block_with_name"],
]:
raise ValueError(
"For a class, it must have a specific structure. See the docstring of `_sanity_check_splits` in the file `utils/check_copies.py`"
)
if block_names_1 != block_names_2:
raise ValueError("The structures in the 2 code blocks differ.")
def find_block_end(lines: List[str], start_index: int, indent: int) -> int:
"""
Find the end of the class/func block starting at `start_index` in a source code (defined by `lines`).
Args:
lines (`List[str]`):
The source code, represented by a list of lines.
start_index (`int`):
The starting index of the target class/func block.
indent (`int`):
The indent of the class/func body.
Returns:
        `int`: The index of the block's ending line plus one (i.e. exclusive).
"""
indent = " " * indent
# enter the block body
line_index = start_index + 1
while line_index < len(lines) and _should_continue(lines[line_index], indent):
line_index += 1
# Clean up empty lines at the end (if any).
while len(lines[line_index - 1]) <= 1:
line_index -= 1
return line_index
def split_code_into_blocks(
lines: List[str], start_index: int, end_index: int, indent: int, backtrace: bool = False
) -> List[Tuple[str, int, int]]:
"""
Split the class/func block starting at `start_index` in a source code (defined by `lines`) into *inner blocks*.
The block's header is included as the first element. The contiguous regions (without empty lines) that are not
inside any inner block are included as blocks. The contiguous regions of empty lines that are not inside any inner
block are also included as (dummy) blocks.
Args:
lines (`List[str]`):
The source code, represented by a list of lines.
start_index (`int`):
The starting index of the target class/func block.
end_index (`int`):
The ending index of the target class/func block.
indent (`int`):
The indent of the class/func body.
backtrace (`bool`, *optional*, defaults to `False`):
Whether or not to include the lines before the inner class/func block's header (e.g. comments, decorators,
etc.) until an empty line is encountered.
Returns:
`List[Tuple[str, int, int]]`: A list of elements with the form `(block_name, start_index, end_index)`.
"""
splits = []
# `indent - 4` is the indent level of the target class/func header
try:
target_block_name = re.search(
rf"^{' ' * (indent - 4)}((class|def)\s+\S+)(\(|\:)", lines[start_index]
).groups()[0]
except Exception:
start_context = min(start_index - 10, 0)
end_context = min(end_index + 10, len(lines))
raise ValueError(
f"Tried to split a class or function. It did not work. Error comes from line {start_index}: \n```\n"
+ "".join(lines[start_context:end_context])
+ "```\n"
)
# from now on, the `block` means inner blocks unless explicitly specified
indent_str = " " * indent
block_without_name_idx = 0
empty_block_idx = 0
# Find the lines for the definition header
index = start_index
if "(" in lines[start_index] and "):" not in lines[start_index] in lines[start_index]:
while index < end_index:
if _is_definition_header_ending_line(lines[index]):
break
index += 1
# the first line outside the definition header
index += 1
splits.append((target_block_name, start_index, index))
block_start_index, prev_block_end_index = index, index
while index < end_index:
# if found, it will be an inner block
block_found = re.search(rf"^{indent_str}((class|def)\s+\S+)(\(|\:)", lines[index])
if block_found:
name = block_found.groups()[0]
block_end_index = find_block_end(lines, index, indent + 4)
# backtrace to include the lines before the found block's definition header (e.g. comments, decorators,
# etc.) until an empty line is encountered.
block_start_index = index
if index > prev_block_end_index and backtrace:
idx = index - 1
for idx in range(index - 1, prev_block_end_index - 2, -1):
if not (len(lines[idx].strip()) > 0 and lines[idx].startswith(indent_str)):
break
idx += 1
if idx < index:
block_start_index = idx
# between the current found block and the previous found block
if block_start_index > prev_block_end_index:
# give it a dummy name
if len("".join(lines[prev_block_end_index:block_start_index]).strip()) == 0:
prev_block_name = f"_empty_block_{empty_block_idx}"
empty_block_idx += 1
else:
prev_block_name = f"_block_without_name_{block_without_name_idx}"
block_without_name_idx += 1
# Add it as a block
splits.append((prev_block_name, prev_block_end_index, block_start_index))
# Add the current found block
splits.append((name, block_start_index, block_end_index))
prev_block_end_index = block_end_index
index = block_end_index - 1
index += 1
if index > prev_block_end_index:
if len("".join(lines[prev_block_end_index:index]).strip()) == 0:
prev_block_name = f"_empty_block_{empty_block_idx}"
else:
prev_block_name = f"_block_without_name_{block_without_name_idx}"
splits.append((prev_block_name, prev_block_end_index, index))
return splits
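# Minimal sketch (illustrative line indices) of the splits produced for a small class:
#   class Foo:            -> ("class Foo", 0, 1)
#       a = 1             -> ("_block_without_name_0", 1, 2)
#       def f(self):      -> ("def f", 2, 4)
#           return self.a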
def find_code_in_transformers(
object_name: str, base_path: str = None, return_indices: bool = False
) -> Union[str, Tuple[List[str], int, int]]:
"""
Find and return the source code of an object.
Args:
object_name (`str`):
The name of the object we want the source code of.
base_path (`str`, *optional*):
The path to the base folder where files are checked. If not set, it will be set to `TRANSFORMERS_PATH`.
return_indices(`bool`, *optional*, defaults to `False`):
If `False`, will only return the code (as a string), otherwise it will also return the whole lines of the
            file where the object specified by `object_name` is defined, together with the start/end indices of the block in
the file that defines the object.
Returns:
`Union[str, Tuple[List[str], int, int]]`: If `return_indices=False`, only the source code of the object will be
returned. Otherwise, it also returns the whole lines of the file where the object specified by `object_name` is
        defined, together with the start/end indices of the block in the file that defines the object.
"""
parts = object_name.split(".")
i = 0
# We can't set this as the default value in the argument, otherwise `CopyCheckTester` will fail, as it uses a
# patched temp directory.
if base_path is None:
base_path = TRANSFORMERS_PATH
# Detail: the `Copied from` statement is originally designed to work with the last part of `TRANSFORMERS_PATH`,
# (which is `transformers`). The same should be applied for `MODEL_TEST_PATH`. However, its last part is `models`
    # (to only check and search in it) which is a bit confusing. So we keep the copied statement starting with
# `tests.models.` and change it to `tests` here.
if base_path == MODEL_TEST_PATH:
base_path = "tests"
# First let's find the module where our object lives.
module = parts[i]
while i < len(parts) and not os.path.isfile(os.path.join(base_path, f"{module}.py")):
i += 1
if i < len(parts):
module = os.path.join(module, parts[i])
if i >= len(parts):
raise ValueError(
f"`object_name` should begin with the name of a module of transformers but got {object_name}."
)
with open(os.path.join(base_path, f"{module}.py"), "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Now let's find the class / func in the code!
indent = ""
line_index = 0
for name in parts[i + 1 :]:
while (
line_index < len(lines) and re.search(rf"^{indent}(class|def)\s+{name}(\(|\:)", lines[line_index]) is None
):
line_index += 1
# find the target specified in the current level in `parts` -> increase `indent` so we can search the next
indent += " "
# the index of the first line in the (currently found) block *body*
line_index += 1
if line_index >= len(lines):
raise ValueError(f" {object_name} does not match any function or class in {module}.")
# `indent` is already one level deeper than the (found) class/func block's definition header
# We found the beginning of the class / func, now let's find the end (when the indent diminishes).
# `start_index` is the index of the class/func block's definition header
start_index = line_index - 1
end_index = find_block_end(lines, start_index, len(indent))
code = "".join(lines[start_index:end_index])
return (code, (lines, start_index, end_index)) if return_indices else code
def replace_code(code: str, replace_pattern: str) -> str:
"""Replace `code` by a pattern of the form `with X1->X2,Y1->Y2,Z1->Z2`.
Args:
code (`str`): The code to be modified.
replace_pattern (`str`): The pattern used to modify `code`.
Returns:
`str`: The modified code.
"""
if len(replace_pattern) > 0:
patterns = replace_pattern.replace("with", "").split(",")
patterns = [_re_replace_pattern.search(p) for p in patterns]
for pattern in patterns:
if pattern is None:
continue
obj1, obj2, option = pattern.groups()
code = re.sub(obj1, obj2, code)
if option.strip() == "all-casing":
code = re.sub(obj1.lower(), obj2.lower(), code)
code = re.sub(obj1.upper(), obj2.upper(), code)
return code
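# Illustrative sketch of the replacement pattern handled above, as used in `# Copied from` comments:
#   replace_code("class BertSelfAttention(nn.Module):", "with Bert->Roberta")
#       -> "class RobertaSelfAttention(nn.Module):"
# With the `all-casing` option, lower/upper-case variants (`bert`/`BERT`) are rewritten as well.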
def find_code_and_splits(object_name: str, base_path: str, buffer: dict = None):
"""Find the code of an object (specified by `object_name`) and split it into blocks.
Args:
object_name (`str`):
The name of the object, e.g. `transformers.models.bert.modeling_bert.BertAttention` or
`tests.models.llama.test_modeling_llama.LlamaModelTest.test_config`.
base_path (`str`):
The path to the base directory within which the search will be performed. It could be either
`TRANSFORMERS_PATH` or `MODEL_TEST_PATH`.
buffer (`dict`, *optional*):
The buffer used to store the previous results in order to speed up the process.
Returns:
lines (`List[str]`):
The lines of the whole file where the object is defined.
code (`str`):
The object's code.
code_splits (`List[Tuple[str, int, int]]`):
`code` splitted into blocks. See `split_code_into_blocks`.
"""
if buffer is None:
buffer = {}
if (object_name, base_path) in buffer:
lines, code, code_splits = buffer[(object_name, base_path)]
else:
code, (lines, target_start_index, target_end_index) = find_code_in_transformers(
object_name, base_path=base_path, return_indices=True
)
indent = get_indent(code)
# Split the code into blocks
# `indent` is the indent of the class/func definition header, but `code_splits` expects the indent level of the
# block body.
code_splits = split_code_into_blocks(
lines, target_start_index, target_end_index, len(indent) + 4, backtrace=True
)
buffer[(object_name, base_path)] = lines, code, code_splits
return lines, code, code_splits
_re_copy_warning = re.compile(r"^(\s*)#\s*Copied from\s+transformers\.(\S+\.\S+)\s*($|\S.*$)")
_re_copy_warning_for_test_file = re.compile(r"^(\s*)#\s*Copied from\s+tests\.(\S+\.\S+)\s*($|\S.*$)")
_re_replace_pattern = re.compile(r"^\s*(\S+)->(\S+)(\s+.*|$)")
_re_fill_pattern = re.compile(r"<FILL\s+[^>]*>")
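# For reference (illustrative), a comment the first pattern above matches looks like:
#   # Copied from transformers.models.bert.modeling_bert.BertSelfOutput with Bert->Roberta
# where group 2 captures the dotted path of the original object and group 3 the optional
# `with X1->X2` replacement pattern.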
def get_indent(code: str) -> str:
"""
Find the indent in the first non empty line in a code sample.
Args:
code (`str`): The code to inspect.
Returns:
`str`: The indent looked at (as string).
"""
lines = code.split("\n")
idx = 0
while idx < len(lines) and len(lines[idx]) == 0:
idx += 1
if idx < len(lines):
return re.search(r"^(\s*)\S", lines[idx]).groups()[0]
return ""
def run_ruff(code):
command = ["ruff", "format", "-", "--config", "pyproject.toml", "--silent"]
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
stdout, _ = process.communicate(input=code.encode())
return stdout.decode()
def stylify(code: str) -> str:
"""
Applies the ruff part of our `make style` command to some code. This formats the code using `ruff format`.
As `ruff` does not provide a python api this cannot be done on the fly.
Args:
code (`str`): The code to format.
Returns:
`str`: The formatted code.
"""
has_indent = len(get_indent(code)) > 0
if has_indent:
code = f"class Bla:\n{code}"
formatted_code = run_ruff(code)
return formatted_code[len("class Bla:\n") :] if has_indent else formatted_code
def check_codes_match(observed_code: str, theoretical_code: str) -> Optional[int]:
"""
Checks if two version of a code match with the exception of the class/function name.
Args:
observed_code (`str`): The code found.
theoretical_code (`str`): The code to match.
Returns:
`Optional[int]`: The index of the first line where there is a difference (if any) and `None` if the codes
match.
"""
observed_code_header = observed_code.split("\n")[0]
theoretical_code_header = theoretical_code.split("\n")[0]
# Catch the function/class name: it is expected that those do not match.
_re_class_match = re.compile(r"class\s+([^\(:]+)(?:\(|:)")
_re_func_match = re.compile(r"def\s+([^\(]+)\(")
for re_pattern in [_re_class_match, _re_func_match]:
if re_pattern.match(observed_code_header) is not None:
try:
observed_obj_name = re_pattern.search(observed_code_header).groups()[0]
except Exception:
raise ValueError(
"Tried to split a class or function. It did not work. Error comes from: \n```\n"
+ observed_code_header
+ "\n```\n"
)
try:
theoretical_name = re_pattern.search(theoretical_code_header).groups()[0]
except Exception:
raise ValueError(
"Tried to split a class or function. It did not work. Error comes from: \n```\n"
+ theoretical_code_header
+ "\n```\n"
)
theoretical_code_header = theoretical_code_header.replace(theoretical_name, observed_obj_name)
# Find the first diff. Line 0 is special since we need to compare with the function/class names ignored.
diff_index = 0
if theoretical_code_header != observed_code_header:
return 0
diff_index = 1
for observed_line, theoretical_line in zip(observed_code.split("\n")[1:], theoretical_code.split("\n")[1:]):
if observed_line != theoretical_line:
return diff_index
diff_index += 1
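# Illustrative sketch: the first line is compared with the class/function name masked out, so
#   check_codes_match("class RobertaOutput(nn.Module):\n    x = 1",
#                     "class BertOutput(nn.Module):\n    x = 1")    # -> None (codes match)
#   check_codes_match("class RobertaOutput(nn.Module):\n    x = 1",
#                     "class BertOutput(nn.Module):\n    x = 2")    # -> 1 (first diff on line 1)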
def is_copy_consistent(filename: str, overwrite: bool = False, buffer: dict = None) -> Optional[List[Tuple[str, int]]]:
"""
Check if the code commented as a copy in a file matches the original.
Args:
filename (`str`):
The name of the file to check.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the copies when they don't match.
buffer (`dict`, *optional*):
The buffer used to store the previous results in order to speed up the process.
Returns:
`Optional[List[Tuple[str, int]]]`: If `overwrite=False`, returns the list of differences as tuples `(str, int)`
        with the name of the object having a diff and the line number where there is the first diff.
"""
base_path = TRANSFORMERS_PATH if not filename.startswith("tests") else MODEL_TEST_PATH
with open(filename, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
diffs = []
line_index = 0
# Not a for loop cause `lines` is going to change (if `overwrite=True`).
while line_index < len(lines):
search_re = _re_copy_warning
if filename.startswith("tests"):
search_re = _re_copy_warning_for_test_file
search = search_re.search(lines[line_index])
if search is None:
line_index += 1
continue
# There is some copied code here, let's retrieve the original.
indent, object_name, replace_pattern = search.groups()
# Find the file lines, the object's code, and its blocks
target_lines, theoretical_code, theoretical_code_splits = find_code_and_splits(
object_name, base_path, buffer=buffer
)
# code replaced by the patterns
theoretical_code_blocks = OrderedDict()
for name, start, end in theoretical_code_splits:
name = replace_code(name, replace_pattern)
code = "".join(target_lines[start:end])
code = replace_code(code, replace_pattern)
theoretical_code_blocks[name] = code
theoretical_indent = get_indent(theoretical_code)
# `start_index` is the index of the first line (the definition header) after `# Copied from`.
# (`indent != theoretical_indent` doesn't seem to occur so far, not sure what this case is for.)
start_index = line_index + 1 if indent == theoretical_indent else line_index
# enter the block body
line_index = start_index + 1
subcode = "\n".join(theoretical_code.split("\n")[1:])
indent = get_indent(subcode)
# Loop to check the observed code, stop when indentation diminishes or if we see a End copy comment.
        # We can't call `find_block_end` directly as there is something special (`# End copy`) here.
should_continue = True
while line_index < len(lines) and should_continue:
line_index += 1
if line_index >= len(lines):
break
line = lines[line_index]
            # There is a special pattern `# End copy` to stop early. It's not documented because it shouldn't really be
# used.
should_continue = _should_continue(line, indent) and re.search(f"^{indent}# End copy", line) is None
# `line_index` is outside the block
# Clean up empty lines at the end (if any).
while len(lines[line_index - 1]) <= 1:
line_index -= 1
# Split the observed code into blocks
observed_code_splits = split_code_into_blocks(lines, start_index, line_index, len(indent), backtrace=True)
is_class = lines[start_index].startswith(f"{' ' * (len(indent) - 4)}class ")
# sanity check
_sanity_check_splits(theoretical_code_splits, observed_code_splits, is_class=is_class)
# observed code in a structured way (a dict mapping block names to blocks' code)
observed_code_blocks = OrderedDict()
for name, start, end in observed_code_splits:
code = "".join(lines[start:end])
observed_code_blocks[name] = code
# Below, we change some names in `theoretical_code_blocks` and `observed_code_blocks`. These mappings map the
# original names to the modified names: this is used to restore the original order of the code blocks.
name_mappings_1 = {k: k for k in theoretical_code_blocks.keys()}
name_mappings_2 = {k: k for k in observed_code_blocks.keys()}
# Update code blocks' name and content:
# If `"# Ignore copy"` is found in a block of the observed code:
# 1. if it's a block only in the observed code --> add it to the theoretical code.
# 2. if it's also in the theoretical code () --> put its content (body) to the corresponding block under the
# same name in the theoretical code.
# In both cases, we change the name to have a prefix `_ignored_` so we know if we can discard them during the
# comparison.
ignored_existing_block_index = 0
ignored_new_block_index = 0
for name in list(observed_code_blocks.keys()):
code = observed_code_blocks[name]
if "# Ignore copy" in code:
if name in theoretical_code_blocks:
# in the target --> just copy the content
del theoretical_code_blocks[name]
theoretical_code_blocks[f"_ignored_existing_block_{ignored_existing_block_index}"] = code
name_mappings_1[name] = f"_ignored_existing_block_{ignored_existing_block_index}"
del observed_code_blocks[name]
observed_code_blocks[f"_ignored_existing_block_{ignored_existing_block_index}"] = code
name_mappings_2[name] = f"_ignored_existing_block_{ignored_existing_block_index}"
ignored_existing_block_index += 1
else:
# not in the target --> add it
theoretical_code_blocks[f"_ignored_new_block_{ignored_new_block_index}"] = code
name_mappings_1[
f"_ignored_new_block_{ignored_new_block_index}"
] = f"_ignored_new_block_{ignored_new_block_index}"
del observed_code_blocks[name]
observed_code_blocks[f"_ignored_new_block_{ignored_new_block_index}"] = code
name_mappings_2[name] = f"_ignored_new_block_{ignored_new_block_index}"
ignored_new_block_index += 1
# Respect the original block order:
# 1. in `theoretical_code_blocks`: the new blocks will follow the existing ones
# 2. in `observed_code_blocks`: the original order are kept with names modified potentially. This is necessary
# to compute the correct `diff_index` if `overwrite=True` and there is a diff.
theoretical_code_blocks = {
name_mappings_1[orig_name]: theoretical_code_blocks[name_mappings_1[orig_name]]
for orig_name in name_mappings_1
}
observed_code_blocks = {
name_mappings_2[orig_name]: observed_code_blocks[name_mappings_2[orig_name]]
for orig_name in name_mappings_2
}
# Ignore the blocks specified to be ignored. This is the version used to check if there is a mismatch
theoretical_code_blocks_clean = {
k: v
for k, v in theoretical_code_blocks.items()
if not (k.startswith(("_ignored_existing_block_", "_ignored_new_block_")))
}
theoretical_code = "".join(list(theoretical_code_blocks_clean.values()))
# stylify `theoretical_code` before compare (this is needed only when `replace_pattern` is not empty)
if replace_pattern:
theoretical_code = stylify(theoretical_code)
# Remove `\n\n` in `theoretical_code` before compare (so no empty line)
while "\n\n" in theoretical_code:
theoretical_code = theoretical_code.replace("\n\n", "\n")
# Compute `observed_code` where we don't include any empty line + keep track the line index between the
# original/processed `observed_code` so we can have the correct `diff_index`.
idx_to_orig_idx_mapping_for_observed_code_lines = {}
idx = -1
orig_idx = -1
observed_code = ""
for name, code in observed_code_blocks.items():
if code.endswith("\n"):
code = code[:-1]
for code_line in code.split("\n"):
orig_idx += 1
if code_line.strip() and not name.startswith(("_ignored_existing_block_", "_ignored_new_block_")):
idx += 1
observed_code += code_line + "\n"
idx_to_orig_idx_mapping_for_observed_code_lines[idx] = orig_idx
# Test for a diff and act accordingly.
diff_index = check_codes_match(observed_code, theoretical_code)
if diff_index is not None:
# switch to the index in the original `observed_code` (i.e. before removing empty lines)
diff_index = idx_to_orig_idx_mapping_for_observed_code_lines[diff_index]
diffs.append([object_name, diff_index + start_index + 1])
if overwrite:
# `theoretical_code_to_write` is a single string but may have several lines.
theoretical_code_to_write = stylify("".join(list(theoretical_code_blocks.values())))
lines = lines[:start_index] + [theoretical_code_to_write] + lines[line_index:]
# Here we treat it as a single entry in `lines`.
line_index = start_index + 1
if overwrite and len(diffs) > 0:
# Warn the user a file has been modified.
print(f"Detected changes, rewriting {filename}.")
with open(filename, "w", encoding="utf-8", newline="\n") as f:
f.writelines(lines)
return diffs
def check_copies(overwrite: bool = False, file: str = None):
"""
Check every file is copy-consistent with the original. Also check the model list in the main README and other
READMEs are consistent.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the copies when they don't match.
file (`bool`, *optional*):
The path to a specific file to check and/or fix.
"""
buffer = {}
if file is None:
all_files = glob.glob(os.path.join(TRANSFORMERS_PATH, "**/*.py"), recursive=True)
all_test_files = glob.glob(os.path.join(MODEL_TEST_PATH, "**/*.py"), recursive=True)
all_files = list(all_files) + list(all_test_files)
else:
all_files = [file]
diffs = []
for filename in all_files:
new_diffs = is_copy_consistent(filename, overwrite, buffer)
diffs += [f"- {filename}: copy does not match {d[0]} at line {d[1]}" for d in new_diffs]
if not overwrite and len(diffs) > 0:
diff = "\n".join(diffs)
raise Exception(
"Found the following copy inconsistencies:\n"
+ diff
+ "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them."
)
def check_full_copies(overwrite: bool = False):
"""
Check the files that are full copies of others (as indicated in `FULL_COPIES`) are copy-consistent.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the copies when they don't match.
"""
diffs = []
for target, source in FULL_COPIES.items():
with open(source, "r", encoding="utf-8") as f:
source_code = f.read()
with open(target, "r", encoding="utf-8") as f:
target_code = f.read()
if source_code != target_code:
if overwrite:
with open(target, "w", encoding="utf-8") as f:
print(f"Replacing the content of {target} by the one of {source}.")
f.write(source_code)
else:
diffs.append(f"- {target}: copy does not match {source}.")
if not overwrite and len(diffs) > 0:
diff = "\n".join(diffs)
raise Exception(
"Found the following copy inconsistencies:\n"
+ diff
+ "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them."
)
def get_model_list(filename: str, start_prompt: str, end_prompt: str) -> str:
"""
Extracts the model list from a README.
Args:
filename (`str`): The name of the README file to check.
start_prompt (`str`): The string to look for that introduces the model list.
end_prompt (`str`): The string to look for that ends the model list.
Returns:
`str`: The model list.
"""
with open(os.path.join(REPO_PATH, filename), "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Find the start of the list.
start_index = 0
while not lines[start_index].startswith(start_prompt):
start_index += 1
start_index += 1
result = []
current_line = ""
end_index = start_index
# Keep going until the end of the list.
while not lines[end_index].startswith(end_prompt):
if lines[end_index].startswith("1."):
if len(current_line) > 1:
result.append(current_line)
current_line = lines[end_index]
elif len(lines[end_index]) > 1:
current_line = f"{current_line[:-1]} {lines[end_index].lstrip()}"
end_index += 1
if len(current_line) > 1:
result.append(current_line)
return "".join(result)
def convert_to_localized_md(model_list: str, localized_model_list: str, format_str: str) -> Tuple[bool, str]:
"""
Compare the model list from the main README to the one in a localized README.
Args:
model_list (`str`): The model list in the main README.
localized_model_list (`str`): The model list in one of the localized README.
format_str (`str`):
The template for a model entry in the localized README (look at the `format_model_list` in the entries of
`LOCALIZED_READMES` for examples).
Returns:
`Tuple[bool, str]`: A tuple where the first value indicates if the READMEs match or not, and the second value
is the correct localized README.
"""
def _rep(match):
title, model_link, paper_affiliations, paper_title_link, paper_authors, supplements = match.groups()
return format_str.format(
title=title,
model_link=model_link,
paper_affiliations=paper_affiliations,
paper_title_link=paper_title_link,
paper_authors=paper_authors,
supplements=" " + supplements.strip() if len(supplements) != 0 else "",
)
# This regex captures metadata from an English model description, including model title, model link,
# affiliations of the paper, title of the paper, authors of the paper, and supplemental data (see DistilBERT for
# example).
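    # An entry typically looks like (illustrative):
    # 1. **[Model](model link)** (from Institution) released with the paper [Paper title](paper link) by Authors. Notes.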
_re_capture_meta = re.compile(
r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\* \(from ([^)]*)\)[^\[]*([^\)]*\)).*?by (.*?[A-Za-z\*]{2,}?)\. (.*)$"
)
# This regex is used to synchronize title link.
_re_capture_title_link = re.compile(r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\*")
# This regex is used to synchronize paper title and link.
_re_capture_paper_link = re.compile(r" \[([^\]]*)\]\(([^\)]*)\)")
if len(localized_model_list) == 0:
localized_model_index = {}
else:
try:
localized_model_index = {
re.search(r"\*\*\[([^\]]*)", line).groups()[0]: line
for line in localized_model_list.strip().split("\n")
}
except AttributeError:
raise AttributeError("A model name in localized READMEs cannot be recognized.")
model_keys = [re.search(r"\*\*\[([^\]]*)", line).groups()[0] for line in model_list.strip().split("\n")]
# We exclude keys in localized README not in the main one.
readmes_match = not any(k not in model_keys for k in localized_model_index)
localized_model_index = {k: v for k, v in localized_model_index.items() if k in model_keys}
for model in model_list.strip().split("\n"):
title, model_link = _re_capture_title_link.search(model).groups()
if title not in localized_model_index:
readmes_match = False
            # Append a trailing space to the model description so the regex can match its end.
            # If the metadata cannot be captured, the English version is copied over directly.
localized_model_index[title] = _re_capture_meta.sub(_rep, model + " ")
elif _re_fill_pattern.search(localized_model_index[title]) is not None:
update = _re_capture_meta.sub(_rep, model + " ")
if update != localized_model_index[title]:
readmes_match = False
localized_model_index[title] = update
else:
# Synchronize title link
converted_model = _re_capture_title_link.sub(
f"**[{title}]({model_link})**", localized_model_index[title], count=1
)
# Synchronize paper title and its link (if found)
paper_title_link = _re_capture_paper_link.search(model)
if paper_title_link is not None:
paper_title, paper_link = paper_title_link.groups()
converted_model = _re_capture_paper_link.sub(
f" [{paper_title}]({paper_link})", converted_model, count=1
)
if converted_model != localized_model_index[title]:
readmes_match = False
localized_model_index[title] = converted_model
sorted_index = sorted(localized_model_index.items(), key=lambda x: x[0].lower())
return readmes_match, "\n".join((x[1] for x in sorted_index)) + "\n"
def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> Tuple[str, int, int, List[str]]:
"""
Find the text in a file between two prompts.
Args:
filename (`str`): The name of the file to look into.
start_prompt (`str`): The string to look for that introduces the content looked for.
end_prompt (`str`): The string to look for that ends the content looked for.
Returns:
Tuple[str, int, int, List[str]]: The content between the two prompts, the index of the start line in the
original file, the index of the end line in the original file and the list of lines of that file.
"""
with open(filename, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Find the start prompt.
start_index = 0
while not lines[start_index].startswith(start_prompt):
start_index += 1
start_index += 1
end_index = start_index
while not lines[end_index].startswith(end_prompt):
end_index += 1
end_index -= 1
while len(lines[start_index]) <= 1:
start_index += 1
while len(lines[end_index]) <= 1:
end_index -= 1
end_index += 1
return "".join(lines[start_index:end_index]), start_index, end_index, lines
# Map a model name to the name it has in the README for the check_readme check.
SPECIAL_MODEL_NAMES = {
"Bert Generation": "BERT For Sequence Generation",
"BigBird": "BigBird-RoBERTa",
"Data2VecAudio": "Data2Vec",
"Data2VecText": "Data2Vec",
"Data2VecVision": "Data2Vec",
"DonutSwin": "Swin Transformer",
"Marian": "MarianMT",
"MaskFormerSwin": "Swin Transformer",
"OpenAI GPT-2": "GPT-2",
"OpenAI GPT": "GPT",
"Perceiver": "Perceiver IO",
"SAM": "Segment Anything",
"ViT": "Vision Transformer (ViT)",
}
# Update this list with the models that shouldn't be in the README. This only concerns modular models or those that do
# not have an associated paper.
MODELS_NOT_IN_README = [
"BertJapanese",
"Encoder decoder",
"FairSeq Machine-Translation",
"HerBERT",
"RetriBERT",
"Speech Encoder decoder",
"Speech2Text",
"Speech2Text2",
"TimmBackbone",
"Vision Encoder decoder",
"VisionTextDualEncoder",
"CLIPVisionModel",
"SiglipVisionModel",
"ChineseCLIPVisionModel",
]
# Template for new entries to add in the main README when we have missing models.
README_TEMPLATE = (
    "1. **[{model_name}](https://huggingface.co/docs/main/transformers/model_doc/{model_type})** (from "
    "<FILL INSTITUTION>) released with the paper [<FILL PAPER TITLE>](<FILL ARXIV LINK>) by <FILL AUTHORS>."
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--file", type=str, default=None, help="A specific file to check and/or fix")
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
check_copies(args.fix_and_overwrite, args.file)
check_full_copies(args.fix_and_overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/update_tiny_models.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A script running `create_dummy_models.py` with a pre-defined set of arguments.
This file is intended to be used in a CI workflow file without needing to specify arguments. It creates and uploads
tiny models for all model classes (if their tiny versions are not on the Hub yet) and produces an updated version
of `tests/utils/tiny_model_summary.json`. That updated file should be merged into the `main` branch of
`transformers` so that the pipeline tests use the latest created/updated tiny models.
"""
import argparse
import copy
import json
import multiprocessing
import os
import time
from create_dummy_models import COMPOSITE_MODELS, create_tiny_models
from huggingface_hub import ModelFilter, hf_api
import transformers
from transformers import AutoFeatureExtractor, AutoImageProcessor, AutoTokenizer
from transformers.image_processing_utils import BaseImageProcessor
def get_all_model_names():
model_names = set()
    # Each auto modeling file contains multiple mappings. Let's get them in a dynamic way.
for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]:
module = getattr(transformers.models.auto, module_name, None)
if module is None:
continue
# all mappings in a single auto modeling file
mapping_names = [
x
for x in dir(module)
if x.endswith("_MAPPING_NAMES")
and (x.startswith("MODEL_") or x.startswith("TF_MODEL_") or x.startswith("FLAX_MODEL_"))
]
for name in mapping_names:
mapping = getattr(module, name)
if mapping is not None:
for v in mapping.values():
if isinstance(v, (list, tuple)):
model_names.update(v)
elif isinstance(v, str):
model_names.add(v)
return sorted(model_names)
def get_tiny_model_names_from_repo():
# All model names defined in auto mappings
model_names = set(get_all_model_names())
with open("tests/utils/tiny_model_summary.json") as fp:
tiny_model_info = json.load(fp)
tiny_models_names = set()
for model_base_name in tiny_model_info:
tiny_models_names.update(tiny_model_info[model_base_name]["model_classes"])
    # Remove a tiny model name if one of its framework implementations doesn't have a tiny version on the Hub yet.
not_on_hub = model_names.difference(tiny_models_names)
for model_name in copy.copy(tiny_models_names):
if not model_name.startswith("TF") and f"TF{model_name}" in not_on_hub:
tiny_models_names.remove(model_name)
elif model_name.startswith("TF") and model_name[2:] in not_on_hub:
tiny_models_names.remove(model_name)
return sorted(tiny_models_names)
def get_tiny_model_summary_from_hub(output_path):
special_models = COMPOSITE_MODELS.values()
# All tiny model base names on Hub
model_names = get_all_model_names()
models = hf_api.list_models(
filter=ModelFilter(
author="hf-internal-testing",
)
)
_models = set()
for x in models:
model = x.modelId
org, model = model.split("/")
if not model.startswith("tiny-random-"):
continue
model = model.replace("tiny-random-", "")
if not model[0].isupper():
continue
if model not in model_names and model not in special_models:
continue
_models.add(model)
models = sorted(_models)
# All tiny model names on Hub
summary = {}
for model in models:
repo_id = f"hf-internal-testing/tiny-random-{model}"
model = model.split("-")[0]
try:
repo_info = hf_api.repo_info(repo_id)
content = {
"tokenizer_classes": set(),
"processor_classes": set(),
"model_classes": set(),
"sha": repo_info.sha,
}
except Exception:
continue
try:
time.sleep(1)
tokenizer_fast = AutoTokenizer.from_pretrained(repo_id)
content["tokenizer_classes"].add(tokenizer_fast.__class__.__name__)
except Exception:
pass
try:
time.sleep(1)
tokenizer_slow = AutoTokenizer.from_pretrained(repo_id, use_fast=False)
content["tokenizer_classes"].add(tokenizer_slow.__class__.__name__)
except Exception:
pass
try:
time.sleep(1)
img_p = AutoImageProcessor.from_pretrained(repo_id)
content["processor_classes"].add(img_p.__class__.__name__)
except Exception:
pass
try:
time.sleep(1)
feat_p = AutoFeatureExtractor.from_pretrained(repo_id)
if not isinstance(feat_p, BaseImageProcessor):
content["processor_classes"].add(feat_p.__class__.__name__)
except Exception:
pass
try:
time.sleep(1)
model_class = getattr(transformers, model)
m = model_class.from_pretrained(repo_id)
content["model_classes"].add(m.__class__.__name__)
except Exception:
pass
try:
time.sleep(1)
model_class = getattr(transformers, f"TF{model}")
m = model_class.from_pretrained(repo_id)
content["model_classes"].add(m.__class__.__name__)
except Exception:
pass
content["tokenizer_classes"] = sorted(content["tokenizer_classes"])
content["processor_classes"] = sorted(content["processor_classes"])
content["model_classes"] = sorted(content["model_classes"])
summary[model] = content
with open(os.path.join(output_path, "hub_tiny_model_summary.json"), "w") as fp:
json.dump(summary, fp, ensure_ascii=False, indent=4)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.")
args = parser.parse_args()
# This has to be `spawn` to avoid hanging forever!
multiprocessing.set_start_method("spawn")
output_path = "tiny_models"
    all_models = True
model_types = None
models_to_skip = get_tiny_model_names_from_repo()
no_check = True
upload = True
organization = "hf-internal-testing"
create_tiny_models(
output_path,
        all_models,
model_types,
models_to_skip,
no_check,
upload,
organization,
token=os.environ.get("TOKEN", None),
num_workers=args.num_workers,
)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/notification_service.py | # Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ast
import collections
import functools
import json
import operator
import os
import re
import sys
import time
from typing import Dict, List, Optional, Union
import requests
from get_ci_error_statistics import get_jobs
from get_previous_daily_ci import get_last_daily_ci_reports
from slack_sdk import WebClient
client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"])
NON_MODEL_TEST_MODULES = [
"benchmark",
"deepspeed",
"extended",
"fixtures",
"generation",
"onnx",
"optimization",
"pipelines",
"sagemaker",
"trainer",
"utils",
]
def handle_test_results(test_results):
expressions = test_results.split(" ")
failed = 0
success = 0
# When the output is short enough, the output is surrounded by = signs: "== OUTPUT =="
# When it is too long, those signs are not present.
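    # e.g. a stats line like "2 failed, 140 passed in 3723.45s (1:02:03)" gives failed=2, success=140 and
    # time_spent="(1:02:03)" (illustrative values).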
time_spent = expressions[-2] if "=" in expressions[-1] else expressions[-1]
for i, expression in enumerate(expressions):
if "failed" in expression:
failed += int(expressions[i - 1])
if "passed" in expression:
success += int(expressions[i - 1])
return failed, success, time_spent
def handle_stacktraces(test_results):
    # These files should have the following structure:
# === FAILURES ===
# <path>:<line>: Error ...
# <path>:<line>: Error ...
# <empty line>
total_stacktraces = test_results.split("\n")[1:-1]
stacktraces = []
for stacktrace in total_stacktraces:
try:
line = stacktrace[: stacktrace.index(" ")].split(":")[-2]
error_message = stacktrace[stacktrace.index(" ") :]
stacktraces.append(f"(line {line}) {error_message}")
except Exception:
stacktraces.append("Cannot retrieve error message.")
return stacktraces
def dicts_to_sum(objects: Union[Dict[str, Dict], List[dict]]):
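    # Sum the dictionaries key-wise, e.g. {"single": 1, "multi": 2} and {"single": 3} give
    # Counter({"single": 4, "multi": 2}).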
if isinstance(objects, dict):
lists = objects.values()
else:
lists = objects
# Convert each dictionary to counter
counters = map(collections.Counter, lists)
# Sum all the counters
return functools.reduce(operator.add, counters)
class Message:
def __init__(
self,
title: str,
ci_title: str,
model_results: Dict,
additional_results: Dict,
selected_warnings: List = None,
prev_ci_artifacts=None,
):
self.title = title
self.ci_title = ci_title
# Failures and success of the modeling tests
self.n_model_success = sum(r["success"] for r in model_results.values())
self.n_model_single_gpu_failures = sum(dicts_to_sum(r["failed"])["single"] for r in model_results.values())
self.n_model_multi_gpu_failures = sum(dicts_to_sum(r["failed"])["multi"] for r in model_results.values())
# Some suites do not have a distinction between single and multi GPU.
self.n_model_unknown_failures = sum(dicts_to_sum(r["failed"])["unclassified"] for r in model_results.values())
self.n_model_failures = (
self.n_model_single_gpu_failures + self.n_model_multi_gpu_failures + self.n_model_unknown_failures
)
# Failures and success of the additional tests
self.n_additional_success = sum(r["success"] for r in additional_results.values())
if len(additional_results) > 0:
            # `dicts_to_sum` relies on `functools.reduce`, which needs a non-empty iterable, so only aggregate when
            # there are additional results.
all_additional_failures = dicts_to_sum([r["failed"] for r in additional_results.values()])
self.n_additional_single_gpu_failures = all_additional_failures["single"]
self.n_additional_multi_gpu_failures = all_additional_failures["multi"]
self.n_additional_unknown_gpu_failures = all_additional_failures["unclassified"]
else:
self.n_additional_single_gpu_failures = 0
self.n_additional_multi_gpu_failures = 0
self.n_additional_unknown_gpu_failures = 0
self.n_additional_failures = (
self.n_additional_single_gpu_failures
+ self.n_additional_multi_gpu_failures
+ self.n_additional_unknown_gpu_failures
)
# Results
self.n_failures = self.n_model_failures + self.n_additional_failures
self.n_success = self.n_model_success + self.n_additional_success
self.n_tests = self.n_failures + self.n_success
self.model_results = model_results
self.additional_results = additional_results
self.thread_ts = None
if selected_warnings is None:
selected_warnings = []
self.selected_warnings = selected_warnings
self.prev_ci_artifacts = prev_ci_artifacts
@property
def time(self) -> str:
all_results = [*self.model_results.values(), *self.additional_results.values()]
time_spent = [r["time_spent"].split(", ")[0] for r in all_results if len(r["time_spent"])]
total_secs = 0
for time in time_spent:
time_parts = time.split(":")
# Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute.
if len(time_parts) == 1:
time_parts = [0, 0, time_parts[0]]
hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2])
total_secs += hours * 3600 + minutes * 60 + seconds
hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60
return f"{int(hours)}h{int(minutes)}m{int(seconds)}s"
@property
def header(self) -> Dict:
return {"type": "header", "text": {"type": "plain_text", "text": self.title}}
@property
def ci_title_section(self) -> Dict:
return {"type": "section", "text": {"type": "mrkdwn", "text": self.ci_title}}
@property
def no_failures(self) -> Dict:
return {
"type": "section",
"text": {
"type": "plain_text",
"text": f"๐ There were no failures: all {self.n_tests} tests passed. The suite ran in {self.time}.",
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
@property
def failures(self) -> Dict:
return {
"type": "section",
"text": {
"type": "plain_text",
"text": (
f"There were {self.n_failures} failures, out of {self.n_tests} tests.\n"
f"Number of model failures: {self.n_model_failures}.\n"
f"The suite ran in {self.time}."
),
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
@property
def warnings(self) -> Dict:
        # If something goes wrong, make sure the CI report can still be sent.
button_text = "Check warnings (Link not found)"
# Use the workflow run link
job_link = f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}"
for job in github_actions_jobs:
if "Extract warnings in CI artifacts" in job["name"] and job["conclusion"] == "success":
button_text = "Check warnings"
# Use the actual job link
job_link = job["html_url"]
break
huggingface_hub_warnings = [x for x in self.selected_warnings if "huggingface_hub" in x]
        text = f"There are {len(self.selected_warnings)} selected warnings."
text += f"\n{len(huggingface_hub_warnings)} of them are from `huggingface_hub`."
return {
"type": "section",
"text": {
"type": "plain_text",
"text": text,
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": button_text, "emoji": True},
"url": job_link,
},
}
@staticmethod
def get_device_report(report, rjust=6):
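        # e.g. {"single": 3, "multi": 0} -> "     3 |      0 | " (counts right-justified to `rjust` characters).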
if "single" in report and "multi" in report:
return f"{str(report['single']).rjust(rjust)} | {str(report['multi']).rjust(rjust)} | "
elif "single" in report:
return f"{str(report['single']).rjust(rjust)} | {'0'.rjust(rjust)} | "
elif "multi" in report:
return f"{'0'.rjust(rjust)} | {str(report['multi']).rjust(rjust)} | "
@property
def category_failures(self) -> Dict:
model_failures = [v["failed"] for v in self.model_results.values()]
category_failures = {}
for model_failure in model_failures:
for key, value in model_failure.items():
if key not in category_failures:
category_failures[key] = dict(value)
else:
category_failures[key]["unclassified"] += value["unclassified"]
category_failures[key]["single"] += value["single"]
category_failures[key]["multi"] += value["multi"]
individual_reports = []
for key, value in category_failures.items():
device_report = self.get_device_report(value)
if sum(value.values()):
if device_report:
individual_reports.append(f"{device_report}{key}")
else:
individual_reports.append(key)
header = "Single | Multi | Category\n"
category_failures_report = prepare_reports(
title="The following modeling categories had failures", header=header, reports=individual_reports
)
return {"type": "section", "text": {"type": "mrkdwn", "text": category_failures_report}}
def compute_diff_for_failure_reports(self, curr_failure_report, prev_failure_report): # noqa
        # Remove the leading and trailing parts that don't contain failure count information.
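        # Each remaining row has the form "<count> | <count> | ... | <category name>" (counts right-justified).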
model_failures = curr_failure_report.split("\n")[3:-2]
prev_model_failures = prev_failure_report.split("\n")[3:-2]
entries_changed = set(model_failures).difference(prev_model_failures)
prev_map = {}
for f in prev_model_failures:
items = [x.strip() for x in f.split("| ")]
prev_map[items[-1]] = [int(x) for x in items[:-1]]
curr_map = {}
for f in entries_changed:
items = [x.strip() for x in f.split("| ")]
curr_map[items[-1]] = [int(x) for x in items[:-1]]
diff_map = {}
for k, v in curr_map.items():
if k not in prev_map:
diff_map[k] = v
else:
diff = [x - y for x, y in zip(v, prev_map[k])]
if max(diff) > 0:
diff_map[k] = diff
entries_changed = []
for model_name, diff_values in diff_map.items():
diff = [str(x) for x in diff_values]
diff = [f"+{x}" if (x != "0" and not x.startswith("-")) else x for x in diff]
diff = [x.rjust(9) for x in diff]
device_report = " | ".join(diff) + " | "
report = f"{device_report}{model_name}"
entries_changed.append(report)
entries_changed = sorted(entries_changed, key=lambda s: s.split("| ")[-1])
return entries_changed
@property
def model_failures(self) -> List[Dict]:
# Obtain per-model failures
def per_model_sum(model_category_dict):
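            # Total failures for one model, summed over all test categories, keyed by device ("single"/"multi"/"unclassified").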
return dicts_to_sum(model_category_dict["failed"].values())
failures = {}
non_model_failures = {
k: per_model_sum(v) for k, v in self.model_results.items() if sum(per_model_sum(v).values())
}
for k, v in self.model_results.items():
if k in NON_MODEL_TEST_MODULES:
pass
if sum(per_model_sum(v).values()):
dict_failed = dict(v["failed"])
pytorch_specific_failures = dict_failed.pop("PyTorch")
tensorflow_specific_failures = dict_failed.pop("TensorFlow")
other_failures = dicts_to_sum(dict_failed.values())
failures[k] = {
"PyTorch": pytorch_specific_failures,
"TensorFlow": tensorflow_specific_failures,
"other": other_failures,
}
model_reports = []
other_module_reports = []
for key, value in non_model_failures.items():
if key in NON_MODEL_TEST_MODULES:
device_report = self.get_device_report(value)
if sum(value.values()):
if device_report:
report = f"{device_report}{key}"
else:
report = key
other_module_reports.append(report)
for key, value in failures.items():
device_report_values = [
value["PyTorch"]["single"],
value["PyTorch"]["multi"],
value["TensorFlow"]["single"],
value["TensorFlow"]["multi"],
sum(value["other"].values()),
]
if sum(device_report_values):
device_report = " | ".join([str(x).rjust(9) for x in device_report_values]) + " | "
report = f"{device_report}{key}"
model_reports.append(report)
# (Possibly truncated) reports for the current workflow run - to be sent to Slack channels
model_header = "Single PT | Multi PT | Single TF | Multi TF | Other | Category\n"
sorted_model_reports = sorted(model_reports, key=lambda s: s.split("| ")[-1])
model_failures_report = prepare_reports(
            title="The following model modules had failures", header=model_header, reports=sorted_model_reports
)
module_header = "Single | Multi | Category\n"
sorted_module_reports = sorted(other_module_reports, key=lambda s: s.split("| ")[-1])
module_failures_report = prepare_reports(
title="The following non-model modules had failures", header=module_header, reports=sorted_module_reports
)
# To be sent to Slack channels
model_failure_sections = [
{"type": "section", "text": {"type": "mrkdwn", "text": model_failures_report}},
{"type": "section", "text": {"type": "mrkdwn", "text": module_failures_report}},
]
# Save the complete (i.e. no truncation) failure tables (of the current workflow run)
# (to be uploaded as artifacts)
model_failures_report = prepare_reports(
            title="The following model modules had failures",
header=model_header,
reports=sorted_model_reports,
to_truncate=False,
)
file_path = os.path.join(os.getcwd(), "prev_ci_results/model_failures_report.txt")
with open(file_path, "w", encoding="UTF-8") as fp:
fp.write(model_failures_report)
module_failures_report = prepare_reports(
title="The following non-model modules had failures",
header=module_header,
reports=sorted_module_reports,
to_truncate=False,
)
file_path = os.path.join(os.getcwd(), "prev_ci_results/module_failures_report.txt")
with open(file_path, "w", encoding="UTF-8") as fp:
fp.write(module_failures_report)
if self.prev_ci_artifacts is not None:
            # if the last run produced an artifact named `prev_ci_results`
if (
"prev_ci_results" in self.prev_ci_artifacts
and "model_failures_report.txt" in self.prev_ci_artifacts["prev_ci_results"]
):
# Compute the difference of the previous/current (model failure) table
prev_model_failures = self.prev_ci_artifacts["prev_ci_results"]["model_failures_report.txt"]
entries_changed = self.compute_diff_for_failure_reports(model_failures_report, prev_model_failures)
if len(entries_changed) > 0:
# Save the complete difference
diff_report = prepare_reports(
title="Changed model modules failures",
header=model_header,
reports=entries_changed,
to_truncate=False,
)
file_path = os.path.join(os.getcwd(), "prev_ci_results/changed_model_failures_report.txt")
with open(file_path, "w", encoding="UTF-8") as fp:
fp.write(diff_report)
# To be sent to Slack channels
diff_report = prepare_reports(
title="*Changed model modules failures*",
header=model_header,
reports=entries_changed,
)
model_failure_sections.append(
{"type": "section", "text": {"type": "mrkdwn", "text": diff_report}},
)
return model_failure_sections
@property
def additional_failures(self) -> Dict:
failures = {k: v["failed"] for k, v in self.additional_results.items()}
errors = {k: v["error"] for k, v in self.additional_results.items()}
individual_reports = []
for key, value in failures.items():
device_report = self.get_device_report(value)
if sum(value.values()) or errors[key]:
report = f"{key}"
if errors[key]:
report = f"[Errored out] {report}"
if device_report:
report = f"{device_report}{report}"
individual_reports.append(report)
header = "Single | Multi | Category\n"
failures_report = prepare_reports(
title="The following non-modeling tests had failures", header=header, reports=individual_reports
)
return {"type": "section", "text": {"type": "mrkdwn", "text": failures_report}}
@property
def payload(self) -> str:
blocks = [self.header]
if self.ci_title:
blocks.append(self.ci_title_section)
if self.n_model_failures > 0 or self.n_additional_failures > 0:
blocks.append(self.failures)
if self.n_model_failures > 0:
blocks.append(self.category_failures)
for block in self.model_failures:
if block["text"]["text"]:
blocks.append(block)
if self.n_additional_failures > 0:
blocks.append(self.additional_failures)
if self.n_model_failures == 0 and self.n_additional_failures == 0:
blocks.append(self.no_failures)
if len(self.selected_warnings) > 0:
blocks.append(self.warnings)
new_failure_blocks = self.get_new_model_failure_blocks(with_header=False)
if len(new_failure_blocks) > 0:
blocks.extend(new_failure_blocks)
return json.dumps(blocks)
@staticmethod
def error_out(title, ci_title="", runner_not_available=False, runner_failed=False, setup_failed=False):
blocks = []
title_block = {"type": "header", "text": {"type": "plain_text", "text": title}}
blocks.append(title_block)
if ci_title:
ci_title_block = {"type": "section", "text": {"type": "mrkdwn", "text": ci_title}}
blocks.append(ci_title_block)
offline_runners = []
if runner_not_available:
text = "๐ CI runners are not available! Tests are not run. ๐ญ"
result = os.environ.get("OFFLINE_RUNNERS")
if result is not None:
offline_runners = json.loads(result)
elif runner_failed:
text = "๐ CI runners have problems! Tests are not run. ๐ญ"
elif setup_failed:
text = "๐ Setup job failed. Tests are not run. ๐ญ"
else:
text = "๐ There was an issue running the tests. ๐ญ"
error_block_1 = {
"type": "header",
"text": {
"type": "plain_text",
"text": text,
},
}
text = ""
if len(offline_runners) > 0:
text = "\n โข " + "\n โข ".join(offline_runners)
text = f"The following runners are offline:\n{text}\n\n"
text += "๐ Let's fix it ASAP! ๐"
error_block_2 = {
"type": "section",
"text": {
"type": "plain_text",
"text": text,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
blocks.extend([error_block_1, error_block_2])
payload = json.dumps(blocks)
print("Sending the following payload")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text=text,
blocks=payload,
)
def post(self):
payload = self.payload
print("Sending the following payload")
print(json.dumps({"blocks": json.loads(payload)}))
text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed."
self.thread_ts = client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
blocks=payload,
text=text,
)
def get_reply_blocks(self, job_name, job_result, failures, device, text):
"""
failures: A list with elements of the form {"line": full test name, "trace": error trace}
"""
# `text` must be less than 3001 characters in Slack SDK
# keep some room for adding "[Truncated]" when necessary
MAX_ERROR_TEXT = 3000 - len("[Truncated]")
failure_text = ""
for idx, error in enumerate(failures):
new_text = failure_text + f'*{error["line"]}*\n_{error["trace"]}_\n\n'
if len(new_text) > MAX_ERROR_TEXT:
# `failure_text` here has length <= 3000
failure_text = failure_text + "[Truncated]"
break
# `failure_text` here has length <= MAX_ERROR_TEXT
failure_text = new_text
title = job_name
if device is not None:
title += f" ({device}-gpu)"
content = {"type": "section", "text": {"type": "mrkdwn", "text": text}}
# TODO: Make sure we always have a valid job link (or at least a way not to break the report sending)
# Currently we get the device from a job's artifact name.
# If a device is found, the job name should contain the device type, for example, `XXX (single-gpu)`.
# This could be done by adding `machine_type` in a job's `strategy`.
# (If `job_result["job_link"][device]` is `None`, we get an error: `... [ERROR] must provide a string ...`)
if job_result["job_link"] is not None and job_result["job_link"][device] is not None:
content["accessory"] = {
"type": "button",
"text": {"type": "plain_text", "text": "GitHub Action job", "emoji": True},
"url": job_result["job_link"][device],
}
return [
{"type": "header", "text": {"type": "plain_text", "text": title.upper(), "emoji": True}},
content,
{"type": "section", "text": {"type": "mrkdwn", "text": failure_text}},
]
def get_new_model_failure_blocks(self, with_header=True):
if self.prev_ci_artifacts is None:
            return []
sorted_dict = sorted(self.model_results.items(), key=lambda t: t[0])
prev_model_results = {}
if (
"prev_ci_results" in self.prev_ci_artifacts
and "model_results.json" in self.prev_ci_artifacts["prev_ci_results"]
):
prev_model_results = json.loads(self.prev_ci_artifacts["prev_ci_results"]["model_results.json"])
all_failure_lines = {}
for job, job_result in sorted_dict:
if len(job_result["failures"]):
devices = sorted(job_result["failures"].keys(), reverse=True)
for device in devices:
failures = job_result["failures"][device]
prev_error_lines = {}
if job in prev_model_results and device in prev_model_results[job]["failures"]:
prev_error_lines = {error["line"] for error in prev_model_results[job]["failures"][device]}
url = None
if job_result["job_link"] is not None and job_result["job_link"][device] is not None:
url = job_result["job_link"][device]
for idx, error in enumerate(failures):
if error["line"] in prev_error_lines:
continue
new_text = f'{error["line"]}\n\n'
if new_text not in all_failure_lines:
all_failure_lines[new_text] = []
all_failure_lines[new_text].append(f"<{url}|{device}>" if url is not None else device)
MAX_ERROR_TEXT = 3000 - len("[Truncated]") - len("```New model failures```\n\n")
failure_text = ""
for line, devices in all_failure_lines.items():
new_text = failure_text + f"{'|'.join(devices)} gpu\n{line}"
if len(new_text) > MAX_ERROR_TEXT:
# `failure_text` here has length <= 3000
failure_text = failure_text + "[Truncated]"
break
# `failure_text` here has length <= MAX_ERROR_TEXT
failure_text = new_text
blocks = []
if failure_text:
if with_header:
blocks.append(
{"type": "header", "text": {"type": "plain_text", "text": "New model failures", "emoji": True}}
)
else:
failure_text = f"*New model failures*\n\n{failure_text}"
blocks.append({"type": "section", "text": {"type": "mrkdwn", "text": failure_text}})
return blocks
def post_reply(self):
if self.thread_ts is None:
raise ValueError("Can only post reply if a post has been made.")
sorted_dict = sorted(self.model_results.items(), key=lambda t: t[0])
for job, job_result in sorted_dict:
if len(job_result["failures"]):
for device, failures in job_result["failures"].items():
text = "\n".join(
sorted([f"*{k}*: {v[device]}" for k, v in job_result["failed"].items() if v[device]])
)
blocks = self.get_reply_blocks(job, job_result, failures, device, text=text)
print("Sending the following reply")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text=f"Results for {job}",
blocks=blocks,
thread_ts=self.thread_ts["ts"],
)
time.sleep(1)
for job, job_result in self.additional_results.items():
if len(job_result["failures"]):
for device, failures in job_result["failures"].items():
blocks = self.get_reply_blocks(
job,
job_result,
failures,
device,
text=f'Number of failures: {job_result["failed"][device]}',
)
print("Sending the following reply")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text=f"Results for {job}",
blocks=blocks,
thread_ts=self.thread_ts["ts"],
)
time.sleep(1)
blocks = self.get_new_model_failure_blocks()
if blocks:
print("Sending the following reply")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text="Results for new failures",
blocks=blocks,
thread_ts=self.thread_ts["ts"],
)
time.sleep(1)
def retrieve_artifact(artifact_path: str, gpu: Optional[str]):
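    # Read every file in `artifact_path` and return a dict mapping the file name (up to the first ".") to its content.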
if gpu not in [None, "single", "multi"]:
raise ValueError(f"Invalid GPU for artifact. Passed GPU: `{gpu}`.")
_artifact = {}
if os.path.exists(artifact_path):
files = os.listdir(artifact_path)
for file in files:
try:
with open(os.path.join(artifact_path, file)) as f:
_artifact[file.split(".")[0]] = f.read()
except UnicodeDecodeError as e:
raise ValueError(f"Could not open {os.path.join(artifact_path, file)}.") from e
return _artifact
def retrieve_available_artifacts():
class Artifact:
def __init__(self, name: str, single_gpu: bool = False, multi_gpu: bool = False):
self.name = name
self.single_gpu = single_gpu
self.multi_gpu = multi_gpu
self.paths = []
def __str__(self):
return self.name
def add_path(self, path: str, gpu: str = None):
self.paths.append({"name": self.name, "path": path, "gpu": gpu})
_available_artifacts: Dict[str, Artifact] = {}
directories = filter(os.path.isdir, os.listdir())
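    # Artifact directories prefixed with `single-gpu` or `multi-gpu` (optionally carrying a `_postfix_<suffix>`) are
    # grouped under the name that follows the prefix; any other directory is treated as a device-agnostic artifact.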
for directory in directories:
artifact_name = directory
name_parts = artifact_name.split("_postfix_")
if len(name_parts) > 1:
artifact_name = name_parts[0]
if artifact_name.startswith("single-gpu"):
artifact_name = artifact_name[len("single-gpu") + 1 :]
if artifact_name in _available_artifacts:
_available_artifacts[artifact_name].single_gpu = True
else:
_available_artifacts[artifact_name] = Artifact(artifact_name, single_gpu=True)
_available_artifacts[artifact_name].add_path(directory, gpu="single")
elif artifact_name.startswith("multi-gpu"):
artifact_name = artifact_name[len("multi-gpu") + 1 :]
if artifact_name in _available_artifacts:
_available_artifacts[artifact_name].multi_gpu = True
else:
_available_artifacts[artifact_name] = Artifact(artifact_name, multi_gpu=True)
_available_artifacts[artifact_name].add_path(directory, gpu="multi")
else:
if artifact_name not in _available_artifacts:
_available_artifacts[artifact_name] = Artifact(artifact_name)
_available_artifacts[artifact_name].add_path(directory)
return _available_artifacts
def prepare_reports(title, header, reports, to_truncate=True):
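    # Build a Slack report of the form "<title>:" followed by the header and rows wrapped in a triple-backtick code
    # block, adding rows one at a time and stopping with "[Truncated]" before exceeding Slack's ~3000-character limit.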
report = ""
MAX_ERROR_TEXT = 3000 - len("[Truncated]")
if not to_truncate:
MAX_ERROR_TEXT = float("inf")
if len(reports) > 0:
# `text` must be less than 3001 characters in Slack SDK
# keep some room for adding "[Truncated]" when necessary
for idx in range(len(reports)):
_report = header + "\n".join(reports[: idx + 1])
new_report = f"{title}:\n```\n{_report}\n```\n"
if len(new_report) > MAX_ERROR_TEXT:
# `report` here has length <= 3000
report = report + "[Truncated]"
break
report = new_report
return report
if __name__ == "__main__":
SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"]
# runner_status = os.environ.get("RUNNER_STATUS")
# runner_env_status = os.environ.get("RUNNER_ENV_STATUS")
setup_status = os.environ.get("SETUP_STATUS")
# runner_not_available = True if runner_status is not None and runner_status != "success" else False
# runner_failed = True if runner_env_status is not None and runner_env_status != "success" else False
    # Let's keep the lines regarding runners' status (we might be able to use them again in the future)
runner_not_available = False
runner_failed = False
# Some jobs don't depend (`needs`) on the job `setup`: in this case, the status of the job `setup` is `skipped`.
setup_failed = False if setup_status in ["skipped", "success"] else True
org = "huggingface"
repo = "transformers"
repository_full_name = f"{org}/{repo}"
# This env. variable is set in workflow file (under the job `send_results`).
ci_event = os.environ["CI_EVENT"]
# To find the PR number in a commit title, for example, `Add AwesomeFormer model (#99999)`
pr_number_re = re.compile(r"\(#(\d+)\)$")
title = f"๐ค Results of the {ci_event} tests."
# Add Commit/PR title with a link for push CI
# (check the title in 2 env. variables - depending on the CI is triggered via `push` or `workflow_run` event)
ci_title_push = os.environ.get("CI_TITLE_PUSH")
ci_title_workflow_run = os.environ.get("CI_TITLE_WORKFLOW_RUN")
ci_title = ci_title_push if ci_title_push else ci_title_workflow_run
ci_sha = os.environ.get("CI_SHA")
ci_url = None
if ci_sha:
ci_url = f"https://github.com/{repository_full_name}/commit/{ci_sha}"
if ci_title is not None:
if ci_url is None:
            raise ValueError(
                "When a title is found (`ci_title`), it means a `push` event or a `workflow_run` event (triggered by "
"another `push` event), and the commit SHA has to be provided in order to create the URL to the "
"commit page."
)
ci_title = ci_title.strip().split("\n")[0].strip()
# Retrieve the PR title and author login to complete the report
commit_number = ci_url.split("/")[-1]
ci_detail_url = f"https://api.github.com/repos/{repository_full_name}/commits/{commit_number}"
ci_details = requests.get(ci_detail_url).json()
ci_author = ci_details["author"]["login"]
merged_by = None
# Find the PR number (if any) and change the url to the actual PR page.
numbers = pr_number_re.findall(ci_title)
if len(numbers) > 0:
pr_number = numbers[0]
ci_detail_url = f"https://api.github.com/repos/{repository_full_name}/pulls/{pr_number}"
ci_details = requests.get(ci_detail_url).json()
ci_author = ci_details["user"]["login"]
ci_url = f"https://github.com/{repository_full_name}/pull/{pr_number}"
merged_by = ci_details["merged_by"]["login"]
if merged_by is None:
ci_title = f"<{ci_url}|{ci_title}>\nAuthor: {ci_author}"
else:
ci_title = f"<{ci_url}|{ci_title}>\nAuthor: {ci_author} | Merged by: {merged_by}"
elif ci_sha:
ci_title = f"<{ci_url}|commit: {ci_sha}>"
else:
ci_title = ""
if runner_not_available or runner_failed or setup_failed:
Message.error_out(title, ci_title, runner_not_available, runner_failed, setup_failed)
exit(0)
# sys.argv[0] is always `utils/notification_service.py`.
arguments = sys.argv[1:]
# In our usage in `.github/workflows/slack-report.yml`, we always pass an argument when calling this script.
# The argument could be an empty string `""` if a job doesn't depend on the job `setup`.
if arguments[0] == "":
models = []
else:
model_list_as_str = arguments[0]
try:
folder_slices = ast.literal_eval(model_list_as_str)
# Need to change from elements like `models/bert` to `models_bert` (the ones used as artifact names).
models = [x.replace("models/", "models_") for folders in folder_slices for x in folders]
except Exception:
Message.error_out(title, ci_title)
raise ValueError("Errored out.")
github_actions_jobs = get_jobs(
workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"]
)
github_actions_job_links = {job["name"]: job["html_url"] for job in github_actions_jobs}
artifact_name_to_job_map = {}
for job in github_actions_jobs:
for step in job["steps"]:
if step["name"].startswith("Test suite reports artifacts: "):
artifact_name = step["name"][len("Test suite reports artifacts: ") :]
artifact_name_to_job_map[artifact_name] = job
break
available_artifacts = retrieve_available_artifacts()
modeling_categories = [
"PyTorch",
"TensorFlow",
"Flax",
"Tokenizers",
"Pipelines",
"Trainer",
"ONNX",
"Auto",
"Unclassified",
]
# This dict will contain all the information relative to each model:
# - Failures: the total, as well as the number of failures per-category defined above
# - Success: total
# - Time spent: as a comma-separated list of elapsed time
# - Failures: as a line-break separated list of errors
model_results = {
model: {
"failed": {m: {"unclassified": 0, "single": 0, "multi": 0} for m in modeling_categories},
"success": 0,
"time_spent": "",
"failures": {},
"job_link": {},
}
for model in models
if f"run_models_gpu_{model}_test_reports" in available_artifacts
}
unclassified_model_failures = []
for model in model_results.keys():
for artifact_path in available_artifacts[f"run_models_gpu_{model}_test_reports"].paths:
artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"])
if "stats" in artifact:
# Link to the GitHub Action job
job = artifact_name_to_job_map[artifact_path["path"]]
model_results[model]["job_link"][artifact_path["gpu"]] = job["html_url"]
failed, success, time_spent = handle_test_results(artifact["stats"])
model_results[model]["success"] += success
model_results[model]["time_spent"] += time_spent[1:-1] + ", "
stacktraces = handle_stacktraces(artifact["failures_line"])
for line in artifact["summary_short"].split("\n"):
if line.startswith("FAILED "):
line = line[len("FAILED ") :]
line = line.split()[0].replace("\n", "")
if artifact_path["gpu"] not in model_results[model]["failures"]:
model_results[model]["failures"][artifact_path["gpu"]] = []
model_results[model]["failures"][artifact_path["gpu"]].append(
{"line": line, "trace": stacktraces.pop(0)}
)
if re.search("test_modeling_tf_", line):
model_results[model]["failed"]["TensorFlow"][artifact_path["gpu"]] += 1
elif re.search("test_modeling_flax_", line):
model_results[model]["failed"]["Flax"][artifact_path["gpu"]] += 1
elif re.search("test_modeling", line):
model_results[model]["failed"]["PyTorch"][artifact_path["gpu"]] += 1
elif re.search("test_tokenization", line):
model_results[model]["failed"]["Tokenizers"][artifact_path["gpu"]] += 1
elif re.search("test_pipelines", line):
model_results[model]["failed"]["Pipelines"][artifact_path["gpu"]] += 1
elif re.search("test_trainer", line):
model_results[model]["failed"]["Trainer"][artifact_path["gpu"]] += 1
elif re.search("onnx", line):
model_results[model]["failed"]["ONNX"][artifact_path["gpu"]] += 1
elif re.search("auto", line):
model_results[model]["failed"]["Auto"][artifact_path["gpu"]] += 1
else:
model_results[model]["failed"]["Unclassified"][artifact_path["gpu"]] += 1
unclassified_model_failures.append(line)
# Additional runs
additional_files = {
"PyTorch pipelines": "run_pipelines_torch_gpu_test_reports",
"TensorFlow pipelines": "run_pipelines_tf_gpu_test_reports",
"Examples directory": "run_examples_gpu_test_reports",
"Torch CUDA extension tests": "run_torch_cuda_extensions_gpu_test_reports",
}
if ci_event in ["push", "Nightly CI"] or ci_event.startswith("Past CI"):
del additional_files["Examples directory"]
del additional_files["PyTorch pipelines"]
del additional_files["TensorFlow pipelines"]
elif ci_event.startswith("Scheduled CI (AMD)"):
del additional_files["TensorFlow pipelines"]
del additional_files["Torch CUDA extension tests"]
elif ci_event.startswith("Push CI (AMD)"):
additional_files = {}
# A map associating the job names (specified by `inputs.job` in a workflow file) with the keys of
    # `additional_files`. This is used to remove entries in `additional_files` that are not relevant to a specific
    # job. See below.
job_to_test_map = {
"run_pipelines_torch_gpu": "PyTorch pipelines",
"run_pipelines_tf_gpu": "TensorFlow pipelines",
"run_examples_gpu": "Examples directory",
"run_torch_cuda_extensions_gpu": "Torch CUDA extension tests",
}
    # Remove entries in `additional_files` that are not relevant to the current job.
test_name = None
job_name = os.getenv("CI_TEST_JOB")
if job_name in job_to_test_map:
test_name = job_to_test_map[job_name]
additional_files = {k: v for k, v in additional_files.items() if k == test_name}
additional_results = {
key: {
"failed": {"unclassified": 0, "single": 0, "multi": 0},
"success": 0,
"time_spent": "",
"error": False,
"failures": {},
"job_link": {},
}
for key in additional_files.keys()
}
for key in additional_results.keys():
        # If a whole suite of tests fails, the artifact isn't available.
if additional_files[key] not in available_artifacts:
additional_results[key]["error"] = True
continue
for artifact_path in available_artifacts[additional_files[key]].paths:
# Link to the GitHub Action job
job = artifact_name_to_job_map[artifact_path["path"]]
additional_results[key]["job_link"][artifact_path["gpu"]] = job["html_url"]
artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"])
stacktraces = handle_stacktraces(artifact["failures_line"])
failed, success, time_spent = handle_test_results(artifact["stats"])
additional_results[key]["failed"][artifact_path["gpu"] or "unclassified"] += failed
additional_results[key]["success"] += success
additional_results[key]["time_spent"] += time_spent[1:-1] + ", "
if len(artifact["errors"]):
additional_results[key]["error"] = True
if failed:
for line in artifact["summary_short"].split("\n"):
if line.startswith("FAILED "):
line = line[len("FAILED ") :]
line = line.split()[0].replace("\n", "")
if artifact_path["gpu"] not in additional_results[key]["failures"]:
additional_results[key]["failures"][artifact_path["gpu"]] = []
additional_results[key]["failures"][artifact_path["gpu"]].append(
{"line": line, "trace": stacktraces.pop(0)}
)
# Let's only check the warning for the model testing job. Currently, the job `run_extract_warnings` is only run
# when `inputs.job` (in the workflow file) is `run_models_gpu`. The reason is: otherwise we need to save several
# artifacts with different names which complicates the logic for an insignificant part of the CI workflow reporting.
selected_warnings = []
if job_name == "run_models_gpu":
if "warnings_in_ci" in available_artifacts:
directory = available_artifacts["warnings_in_ci"].paths[0]["path"]
with open(os.path.join(directory, "selected_warnings.json")) as fp:
selected_warnings = json.load(fp)
if not os.path.isdir(os.path.join(os.getcwd(), "prev_ci_results")):
os.makedirs(os.path.join(os.getcwd(), "prev_ci_results"))
    # Only the model testing job is concerned: this condition avoids other jobs uploading an empty list as results.
if job_name == "run_models_gpu":
with open("prev_ci_results/model_results.json", "w", encoding="UTF-8") as fp:
json.dump(model_results, fp, indent=4, ensure_ascii=False)
prev_ci_artifacts = None
target_workflow = "huggingface/transformers/.github/workflows/self-scheduled.yml@refs/heads/main"
if os.environ.get("CI_WORKFLOW_REF") == target_workflow:
# Get the last previously completed CI's failure tables
artifact_names = ["prev_ci_results"]
output_dir = os.path.join(os.getcwd(), "previous_reports")
os.makedirs(output_dir, exist_ok=True)
prev_ci_artifacts = get_last_daily_ci_reports(
artifact_names=artifact_names, output_dir=output_dir, token=os.environ["ACCESS_REPO_INFO_TOKEN"]
)
message = Message(
title,
ci_title,
model_results,
additional_results,
selected_warnings=selected_warnings,
prev_ci_artifacts=prev_ci_artifacts,
)
# send report only if there is any failure (for push CI)
if message.n_failures or (ci_event != "push" and not ci_event.startswith("Push CI (AMD)")):
message.post()
message.post_reply()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/notification_service_doc_tests.py | # Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import re
import time
from typing import Dict, List
from get_ci_error_statistics import get_jobs
from slack_sdk import WebClient
client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"])
def handle_test_results(test_results):
expressions = test_results.split(" ")
failed = 0
success = 0
# When the output is short enough, the output is surrounded by = signs: "== OUTPUT =="
# When it is too long, those signs are not present.
time_spent = expressions[-2] if "=" in expressions[-1] else expressions[-1]
for i, expression in enumerate(expressions):
if "failed" in expression:
failed += int(expressions[i - 1])
if "passed" in expression:
success += int(expressions[i - 1])
return failed, success, time_spent
def extract_first_line_failure(failures_short_lines):
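    # Parse pytest's `failures_short` output: a header line containing "_ [doctest]" names the failing file, and
    # the first following line whose first token is not a number is kept as that file's error message.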
failures = {}
file = None
in_error = False
for line in failures_short_lines.split("\n"):
if re.search(r"_ \[doctest\]", line):
in_error = True
file = line.split(" ")[2]
elif in_error and not line.split(" ")[0].isdigit():
failures[file] = line
in_error = False
return failures
class Message:
def __init__(self, title: str, doc_test_results: Dict):
self.title = title
self.n_success = sum(job_result["n_success"] for job_result in doc_test_results.values())
self.n_failures = sum(job_result["n_failures"] for job_result in doc_test_results.values())
self.n_tests = self.n_success + self.n_failures
        # Failures and successes of the doc tests
self.doc_test_results = doc_test_results
@property
def time(self) -> str:
all_results = [*self.doc_test_results.values()]
time_spent = [r["time_spent"].split(", ")[0] for r in all_results if len(r["time_spent"])]
total_secs = 0
for time in time_spent:
time_parts = time.split(":")
# Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute.
if len(time_parts) == 1:
time_parts = [0, 0, time_parts[0]]
hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2])
total_secs += hours * 3600 + minutes * 60 + seconds
hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60
return f"{int(hours)}h{int(minutes)}m{int(seconds)}s"
@property
def header(self) -> Dict:
return {"type": "header", "text": {"type": "plain_text", "text": self.title}}
@property
def no_failures(self) -> Dict:
return {
"type": "section",
"text": {
"type": "plain_text",
"text": f"๐ There were no failures: all {self.n_tests} tests passed. The suite ran in {self.time}.",
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
@property
def failures(self) -> Dict:
return {
"type": "section",
"text": {
"type": "plain_text",
"text": (
f"There were {self.n_failures} failures, out of {self.n_tests} tests.\nThe suite ran in"
f" {self.time}."
),
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
@property
def category_failures(self) -> List[Dict]:
failure_blocks = []
MAX_ERROR_TEXT = 3000 - len("The following examples had failures:\n\n\n\n") - len("[Truncated]\n")
line_length = 40
        category_failures = {k: v["failed"] for k, v in self.doc_test_results.items() if isinstance(v, dict)}
def single_category_failures(category, failures):
text = ""
if len(failures) == 0:
return ""
text += f"*{category} failures*:".ljust(line_length // 2).rjust(line_length // 2) + "\n"
for idx, failure in enumerate(failures):
new_text = text + f"`{failure}`\n"
if len(new_text) > MAX_ERROR_TEXT:
text = text + "[Truncated]\n"
break
text = new_text
return text
for category, failures in category_failures.items():
report = single_category_failures(category, failures)
if len(report) == 0:
continue
block = {
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"The following examples had failures:\n\n\n{report}\n",
},
}
failure_blocks.append(block)
return failure_blocks
@property
def payload(self) -> str:
blocks = [self.header]
if self.n_failures > 0:
blocks.append(self.failures)
if self.n_failures > 0:
blocks.extend(self.category_failures)
if self.n_failures == 0:
blocks.append(self.no_failures)
return json.dumps(blocks)
@staticmethod
def error_out():
payload = [
{
"type": "section",
"text": {
"type": "plain_text",
"text": "There was an issue running the tests.",
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
]
print("Sending the following payload")
        print(json.dumps({"blocks": payload}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text="There was an issue running the tests.",
blocks=payload,
)
def post(self):
print("Sending the following payload")
print(json.dumps({"blocks": json.loads(self.payload)}))
text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed."
self.thread_ts = client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
blocks=self.payload,
text=text,
)
def get_reply_blocks(self, job_name, job_link, failures, text):
# `text` must be less than 3001 characters in Slack SDK
# keep some room for adding "[Truncated]" when necessary
MAX_ERROR_TEXT = 3000 - len("[Truncated]")
failure_text = ""
for key, value in failures.items():
new_text = failure_text + f"*{key}*\n_{value}_\n\n"
if len(new_text) > MAX_ERROR_TEXT:
# `failure_text` here has length <= 3000
failure_text = failure_text + "[Truncated]"
break
# `failure_text` here has length <= MAX_ERROR_TEXT
failure_text = new_text
title = job_name
content = {"type": "section", "text": {"type": "mrkdwn", "text": text}}
if job_link is not None:
content["accessory"] = {
"type": "button",
"text": {"type": "plain_text", "text": "GitHub Action job", "emoji": True},
"url": job_link,
}
return [
{"type": "header", "text": {"type": "plain_text", "text": title, "emoji": True}},
content,
{"type": "section", "text": {"type": "mrkdwn", "text": failure_text}},
]
def post_reply(self):
if self.thread_ts is None:
raise ValueError("Can only post reply if a post has been made.")
sorted_dict = sorted(self.doc_test_results.items(), key=lambda t: t[0])
for job_name, job_result in sorted_dict:
if len(job_result["failures"]) > 0:
                text = f"*Num failures*: {len(job_result['failed'])}\n"
failures = job_result["failures"]
blocks = self.get_reply_blocks(job_name, job_result["job_link"], failures, text=text)
print("Sending the following reply")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
text=f"Results for {job_name}",
blocks=blocks,
thread_ts=self.thread_ts["ts"],
)
time.sleep(1)
def retrieve_artifact(name: str):
_artifact = {}
if os.path.exists(name):
files = os.listdir(name)
for file in files:
try:
with open(os.path.join(name, file), encoding="utf-8") as f:
_artifact[file.split(".")[0]] = f.read()
except UnicodeDecodeError as e:
raise ValueError(f"Could not open {os.path.join(name, file)}.") from e
return _artifact
def retrieve_available_artifacts():
class Artifact:
def __init__(self, name: str):
self.name = name
self.paths = []
def __str__(self):
return self.name
def add_path(self, path: str):
self.paths.append({"name": self.name, "path": path})
_available_artifacts: Dict[str, Artifact] = {}
directories = filter(os.path.isdir, os.listdir())
for directory in directories:
artifact_name = directory
if artifact_name not in _available_artifacts:
_available_artifacts[artifact_name] = Artifact(artifact_name)
_available_artifacts[artifact_name].add_path(directory)
return _available_artifacts
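# Illustrative sketch (not executed): each artifact directory found in the working directory is wrapped
# in an `Artifact` object, e.g.
#   {"doc_tests_gpu_test_reports_src_transformers":
#       Artifact with paths == [{"name": "doc_tests_gpu_test_reports_src_transformers",
#                                "path": "doc_tests_gpu_test_reports_src_transformers"}]}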
if __name__ == "__main__":
SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"]
github_actions_jobs = get_jobs(
workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"]
)
artifact_name_to_job_map = {}
for job in github_actions_jobs:
for step in job["steps"]:
if step["name"].startswith("Test suite reports artifacts: "):
artifact_name = step["name"][len("Test suite reports artifacts: ") :]
artifact_name_to_job_map[artifact_name] = job
break
available_artifacts = retrieve_available_artifacts()
doc_test_results = {}
# `artifact_key` is the artifact path
for artifact_key, artifact_obj in available_artifacts.items():
artifact_path = artifact_obj.paths[0]
if not artifact_path["path"].startswith("doc_tests_gpu_test_reports_"):
continue
# change "_" back to "/" (to show the job name as path)
job_name = artifact_path["path"].replace("doc_tests_gpu_test_reports_", "").replace("_", "/")
# This dict (for each job) will contain all the information relative to each doc test job, in particular:
# - failed: list of failed tests
# - failures: dict in the format 'test': 'error_message'
job_result = {}
doc_test_results[job_name] = job_result
job = artifact_name_to_job_map[artifact_path["path"]]
job_result["job_link"] = job["html_url"]
job_result["category"] = "Python Examples" if job_name.startswith("src/") else "MD Examples"
artifact = retrieve_artifact(artifact_path["path"])
if "stats" in artifact:
failed, success, time_spent = handle_test_results(artifact["stats"])
job_result["n_failures"] = failed
job_result["n_success"] = success
job_result["time_spent"] = time_spent[1:-1] + ", "
job_result["failed"] = []
job_result["failures"] = {}
all_failures = extract_first_line_failure(artifact["failures_short"])
for line in artifact["summary_short"].split("\n"):
if re.search("FAILED", line):
line = line.replace("FAILED ", "")
line = line.split()[0].replace("\n", "")
if "::" in line:
file_path, test = line.split("::")
else:
file_path, test = line, line
job_result["failed"].append(test)
failure = all_failures[test] if test in all_failures else "N/A"
job_result["failures"][test] = failure
    # Save the results; they will be uploaded later as an artifact
os.makedirs("doc_test_results", exist_ok=True)
with open("doc_test_results/doc_test_results.json", "w", encoding="UTF-8") as fp:
json.dump(doc_test_results, fp, ensure_ascii=False, indent=4)
message = Message("๐ค Results of the doc tests.", doc_test_results)
message.post()
message.post_reply()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/get_previous_daily_ci.py | import os
import zipfile
import requests
from get_ci_error_statistics import download_artifact, get_artifacts_links
def get_daily_ci_runs(token, num_runs=7):
"""Get the workflow runs of the scheduled (daily) CI.
This only selects the runs triggered by the `schedule` event on the `main` branch.
"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
# The id of a workflow (not of a workflow run)
workflow_id = "636036"
url = f"https://api.github.com/repos/huggingface/transformers/actions/workflows/{workflow_id}/runs"
# On `main` branch + event being `schedule` + not returning PRs + only `num_runs` results
url += f"?branch=main&event=schedule&exclude_pull_requests=true&per_page={num_runs}"
result = requests.get(url, headers=headers).json()
return result["workflow_runs"]
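# With the defaults above, the request URL resolves to (illustrative, wrapped here for readability):
#   https://api.github.com/repos/huggingface/transformers/actions/workflows/636036/runs
#       ?branch=main&event=schedule&exclude_pull_requests=true&per_page=7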
def get_last_daily_ci_runs(token):
"""Get the last completed workflow run id of the scheduled (daily) CI."""
workflow_runs = get_daily_ci_runs(token)
workflow_run_id = None
for workflow_run in workflow_runs:
if workflow_run["status"] == "completed":
workflow_run_id = workflow_run["id"]
break
return workflow_run_id
def get_last_daily_ci_artifacts(artifact_names, output_dir, token):
"""Get the artifacts of last completed workflow run id of the scheduled (daily) CI."""
workflow_run_id = get_last_daily_ci_runs(token)
if workflow_run_id is not None:
        artifacts_links = get_artifacts_links(workflow_run_id=workflow_run_id, token=token)
for artifact_name in artifact_names:
if artifact_name in artifacts_links:
artifact_url = artifacts_links[artifact_name]
download_artifact(
artifact_name=artifact_name, artifact_url=artifact_url, output_dir=output_dir, token=token
)
def get_last_daily_ci_reports(artifact_names, output_dir, token):
"""Get the artifacts' content of the last completed workflow run id of the scheduled (daily) CI."""
get_last_daily_ci_artifacts(artifact_names, output_dir, token)
results = {}
for artifact_name in artifact_names:
artifact_zip_path = os.path.join(output_dir, f"{artifact_name}.zip")
if os.path.isfile(artifact_zip_path):
results[artifact_name] = {}
with zipfile.ZipFile(artifact_zip_path) as z:
for filename in z.namelist():
if not os.path.isdir(filename):
# read the file
with z.open(filename) as f:
results[artifact_name][filename] = f.read().decode("UTF-8")
return results
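# Usage sketch (assuming a token with `actions: read` permission and an artifact name that actually exists
# on the last completed daily CI run; the artifact name below is hypothetical):
#   reports = get_last_daily_ci_reports(["prev_ci_results"], output_dir="prev_ci", token=token)
#   # -> {"prev_ci_results": {"<file inside the zip>": "<decoded content>", ...}}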
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_support_list.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks that support for third-party libraries is listed in the documentation file. Currently, this includes:
- flash attention support
- SDPA support
Use from the root of the repo with (as used in `make repo-consistency`):
```bash
python utils/check_support_list.py
```
It has no auto-fix mode.
"""
import os
from glob import glob
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_support_list.py
REPO_PATH = "."
def check_flash_support_list():
with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f:
doctext = f.read()
doctext = doctext.split("FlashAttention-2 is currently supported for the following architectures:")[1]
doctext = doctext.split("You can request to add FlashAttention-2 support")[0]
patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py"))
patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py"))
patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py"))
patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax))
archs_supporting_fa2 = []
for filename in patterns:
with open(filename, "r") as f:
text = f.read()
if "_supports_flash_attn_2 = True" in text:
model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "")
archs_supporting_fa2.append(model_name)
for arch in archs_supporting_fa2:
if arch not in doctext:
            raise ValueError(
                f"{arch} should be listed in the flash attention documentation but is not. Please update the documentation."
            )
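# Illustrative example: if `src/transformers/models/llama/modeling_llama.py` sets
# `_supports_flash_attn_2 = True`, the derived architecture name is "llama", and the check above
# requires "llama" to appear in the FlashAttention-2 section of `docs/source/en/perf_infer_gpu_one.md`.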
def check_sdpa_support_list():
with open(os.path.join(REPO_PATH, "docs/source/en/perf_infer_gpu_one.md"), "r") as f:
doctext = f.read()
doctext = doctext.split(
"For now, Transformers supports SDPA inference and training for the following architectures:"
)[1]
doctext = doctext.split("Note that FlashAttention can only be used for models using the")[0]
patterns = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_*.py"))
patterns_tf = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_tf_*.py"))
patterns_flax = glob(os.path.join(REPO_PATH, "src/transformers/models/**/modeling_flax_*.py"))
patterns = list(set(patterns) - set(patterns_tf) - set(patterns_flax))
archs_supporting_sdpa = []
for filename in patterns:
with open(filename, "r") as f:
text = f.read()
if "_supports_sdpa = True" in text:
model_name = os.path.basename(filename).replace(".py", "").replace("modeling_", "")
archs_supporting_sdpa.append(model_name)
for arch in archs_supporting_sdpa:
if arch not in doctext:
            raise ValueError(
                f"{arch} should be listed in the SDPA documentation but is not. Please update the documentation."
            )
if __name__ == "__main__":
check_flash_support_list()
check_sdpa_support_list()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/pr_slow_ci_models.py | # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is used to get the models for which to run slow CI.
A new model added in a pull request will be included, as well as models specified in a commit message with a prefix
`[run-slow]`, `[run_slow]` or `[run slow]`. For example, the commit message `[run_slow]bert, gpt2` will give `bert` and
`gpt2`.
Usage:
```bash
python utils/pr_slow_ci_models.py
```
"""
import argparse
import re
from pathlib import Path
from typing import List
from git import Repo
PATH_TO_REPO = Path(__file__).parent.parent.resolve()
def get_new_python_files_between_commits(base_commit: str, commits: List[str]) -> List[str]:
"""
Get the list of added python files between a base commit and one or several commits.
Args:
base_commit (`str`):
The commit reference of where to compare for the diff. This is the current commit, not the branching point!
commits (`List[str]`):
The list of commits with which to compare the repo at `base_commit` (so the branching point).
Returns:
`List[str]`: The list of python files added between a base commit and one or several commits.
"""
code_diff = []
for commit in commits:
for diff_obj in commit.diff(base_commit):
# We always add new python files
if diff_obj.change_type == "A" and diff_obj.b_path.endswith(".py"):
code_diff.append(diff_obj.b_path)
return code_diff
def get_new_python_files() -> List[str]:
"""
Return a list of python files that have been added between the current head and the main branch.
Returns:
`List[str]`: The list of python files added.
"""
repo = Repo(PATH_TO_REPO)
try:
# For the cases where the main branch exists locally
main = repo.refs.main
except AttributeError:
# On GitHub Actions runners, it doesn't have local main branch
main = repo.remotes.origin.refs.main
print(f"main is at {main.commit}")
print(f"Current head is at {repo.head.commit}")
branching_commits = repo.merge_base(main, repo.head)
for commit in branching_commits:
print(f"Branching commit: {commit}")
return get_new_python_files_between_commits(repo.head.commit, branching_commits)
def get_new_model():
new_files = get_new_python_files()
reg = re.compile(r"src/transformers/(models/.*)/modeling_.*\.py")
new_model = ""
for x in new_files:
find_new_model = reg.findall(x)
if len(find_new_model) > 0:
new_model = find_new_model[0]
# It's unlikely we have 2 new modeling files in a pull request.
break
return new_model
def parse_commit_message(commit_message: str) -> str:
"""
Parses the commit message to find the models specified in it to run slow CI.
Args:
commit_message (`str`): The commit message of the current commit.
Returns:
        `str`: The substring in `commit_message` after `[run-slow]`, `[run_slow]` or `[run slow]`. If no such prefix is
found, the empty string is returned.
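    Example (illustrative):
        >>> parse_commit_message("[run-slow] bert, gpt2")
        'bert, gpt2'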
"""
if commit_message is None:
return ""
command_search = re.search(r"\[([^\]]*)\](.*)", commit_message)
if command_search is None:
return ""
command = command_search.groups()[0]
command = command.lower().replace("-", " ").replace("_", " ")
run_slow = command == "run slow"
if run_slow:
models = command_search.groups()[1].strip()
return models
else:
return ""
def get_models(commit_message: str):
models = parse_commit_message(commit_message)
return [f"models/{x}" for x in models.replace(",", " ").split()]
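# Illustrative example: get_models("[run-slow] bert, gpt2") -> ["models/bert", "models/gpt2"]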
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--commit_message", type=str, default="", help="The commit message.")
args = parser.parse_args()
new_model = get_new_model()
specified_models = get_models(args.commit_message)
models = ([] if new_model == "" else [new_model]) + specified_models
print(sorted(set(models)))
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_self_hosted_runner.py | import argparse
import json
import subprocess
def get_runner_status(target_runners, token):
offline_runners = []
cmd = (
f'curl -H "Accept: application/vnd.github+json" -H "Authorization: Bearer {token}"'
" https://api.github.com/repos/huggingface/transformers/actions/runners"
)
output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
o = output.stdout.decode("utf-8")
status = json.loads(o)
runners = status["runners"]
for runner in runners:
if runner["name"] in target_runners:
if runner["status"] == "offline":
offline_runners.append(runner)
# save the result so we can report them on Slack
with open("offline_runners.txt", "w") as fp:
fp.write(json.dumps(offline_runners))
if len(offline_runners) > 0:
failed = "\n".join([x["name"] for x in offline_runners])
raise ValueError(f"The following runners are offline:\n{failed}")
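# The GitHub REST response parsed above is expected to look roughly like
#   {"total_count": 2, "runners": [{"name": "<runner name>", "status": "online", ...}, ...]}
# only the "name" and "status" fields are used by this script.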
if __name__ == "__main__":
def list_str(values):
return values.split(",")
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument(
"--target_runners",
default=None,
type=list_str,
required=True,
help="Comma-separated list of runners to check status.",
)
parser.add_argument(
"--token", default=None, type=str, required=True, help="A token that has actions:read permission."
)
args = parser.parse_args()
get_runner_status(args.target_runners, args.token)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/important_models.txt | models/llama
models/mistral
models/mixtral
models/gemma | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/print_env.py | #!/usr/bin/env python3
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# this script dumps information about the environment
import os
import sys
import transformers
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
print("Python version:", sys.version)
print("transformers version:", transformers.__version__)
try:
import torch
print("Torch version:", torch.__version__)
print("Cuda available:", torch.cuda.is_available())
print("Cuda version:", torch.version.cuda)
print("CuDNN version:", torch.backends.cudnn.version())
print("Number of GPUs available:", torch.cuda.device_count())
print("NCCL version:", torch.cuda.nccl.version())
except ImportError:
print("Torch version:", None)
try:
import deepspeed
print("DeepSpeed version:", deepspeed.__version__)
except ImportError:
print("DeepSpeed version:", None)
try:
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("TF GPUs available:", bool(tf.config.list_physical_devices("GPU")))
print("Number of TF GPUs available:", len(tf.config.list_physical_devices("GPU")))
except ImportError:
print("TensorFlow version:", None)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_doc_toc.py | # coding=utf-8
# Copyright 2022 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is responsible for cleaning the model section of the table of content by removing duplicates and sorting
the entries in alphabetical order.
Usage (from the root of the repo):
Check that the table of content is properly sorted (used in `make quality`):
```bash
python utils/check_doc_toc.py
```
Auto-sort the table of content if it is not properly sorted (used in `make style`):
```bash
python utils/check_doc_toc.py --fix_and_overwrite
```
"""
import argparse
from collections import defaultdict
from typing import List
import yaml
PATH_TO_TOC = "docs/source/en/_toctree.yml"
def clean_model_doc_toc(model_doc: List[dict]) -> List[dict]:
"""
Cleans a section of the table of content of the model documentation (one specific modality) by removing duplicates
and sorting models alphabetically.
Args:
model_doc (`List[dict]`):
The list of dictionaries extracted from the `_toctree.yml` file for this specific modality.
Returns:
`List[dict]`: List of dictionaries like the input, but cleaned up and sorted.
"""
counts = defaultdict(int)
for doc in model_doc:
counts[doc["local"]] += 1
duplicates = [key for key, value in counts.items() if value > 1]
new_doc = []
for duplicate_key in duplicates:
titles = list({doc["title"] for doc in model_doc if doc["local"] == duplicate_key})
if len(titles) > 1:
raise ValueError(
f"{duplicate_key} is present several times in the documentation table of content at "
"`docs/source/en/_toctree.yml` with different *Title* values. Choose one of those and remove the "
"others."
)
# Only add this once
new_doc.append({"local": duplicate_key, "title": titles[0]})
    # Add the non-duplicate keys
new_doc.extend([doc for doc in model_doc if counts[doc["local"]] == 1])
# Sort
return sorted(new_doc, key=lambda s: s["title"].lower())
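# Illustrative example (toctree entries are dicts with "local" and "title" keys):
#   clean_model_doc_toc([
#       {"local": "model_doc/bert", "title": "BERT"},
#       {"local": "model_doc/albert", "title": "ALBERT"},
#       {"local": "model_doc/bert", "title": "BERT"},
#   ])
#   -> [{"local": "model_doc/albert", "title": "ALBERT"}, {"local": "model_doc/bert", "title": "BERT"}]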
def check_model_doc(overwrite: bool = False):
"""
Check that the content of the table of content in `_toctree.yml` is clean (no duplicates and sorted for the model
API doc) and potentially auto-cleans it.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether to just check if the TOC is clean or to auto-clean it (when `overwrite=True`).
"""
with open(PATH_TO_TOC, encoding="utf-8") as f:
content = yaml.safe_load(f.read())
# Get to the API doc
api_idx = 0
while content[api_idx]["title"] != "API":
api_idx += 1
api_doc = content[api_idx]["sections"]
# Then to the model doc
model_idx = 0
while api_doc[model_idx]["title"] != "Models":
model_idx += 1
model_doc = api_doc[model_idx]["sections"]
# Extract the modalities and clean them one by one.
modalities_docs = [(idx, section) for idx, section in enumerate(model_doc) if "sections" in section]
diff = False
for idx, modality_doc in modalities_docs:
old_modality_doc = modality_doc["sections"]
new_modality_doc = clean_model_doc_toc(old_modality_doc)
if old_modality_doc != new_modality_doc:
diff = True
if overwrite:
model_doc[idx]["sections"] = new_modality_doc
if diff:
if overwrite:
api_doc[model_idx]["sections"] = model_doc
content[api_idx]["sections"] = api_doc
with open(PATH_TO_TOC, "w", encoding="utf-8") as f:
f.write(yaml.dump(content, allow_unicode=True))
else:
raise ValueError(
"The model doc part of the table of content is not properly sorted, run `make style` to fix this."
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
check_model_doc(args.fix_and_overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/get_ci_error_statistics.py | import argparse
import json
import math
import os
import time
import traceback
import zipfile
from collections import Counter
import requests
def get_jobs(workflow_run_id, token=None):
"""Extract jobs in a GitHub Actions workflow run"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
result = requests.get(url, headers=headers).json()
jobs = []
try:
jobs.extend(result["jobs"])
pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)
for i in range(pages_to_iterate_over):
result = requests.get(url + f"&page={i + 2}", headers=headers).json()
jobs.extend(result["jobs"])
return jobs
except Exception:
print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")
return []
def get_job_links(workflow_run_id, token=None):
"""Extract job names and their job links in a GitHub Actions workflow run"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
result = requests.get(url, headers=headers).json()
job_links = {}
try:
job_links.update({job["name"]: job["html_url"] for job in result["jobs"]})
pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)
for i in range(pages_to_iterate_over):
result = requests.get(url + f"&page={i + 2}", headers=headers).json()
job_links.update({job["name"]: job["html_url"] for job in result["jobs"]})
return job_links
except Exception:
print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")
return {}
def get_artifacts_links(workflow_run_id, token=None):
"""Get all artifact links from a workflow run"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
    url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/artifacts?per_page=100"
result = requests.get(url, headers=headers).json()
artifacts = {}
try:
artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]})
pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)
for i in range(pages_to_iterate_over):
result = requests.get(url + f"&page={i + 2}", headers=headers).json()
artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]})
return artifacts
except Exception:
print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")
return {}
def download_artifact(artifact_name, artifact_url, output_dir, token):
"""Download a GitHub Action artifact from a URL.
The URL is of the form `https://api.github.com/repos/huggingface/transformers/actions/artifacts/{ARTIFACT_ID}/zip`,
but it can't be used to download directly. We need to get a redirect URL first.
See https://docs.github.com/en/rest/actions/artifacts#download-an-artifact
"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
result = requests.get(artifact_url, headers=headers, allow_redirects=False)
download_url = result.headers["Location"]
response = requests.get(download_url, allow_redirects=True)
file_path = os.path.join(output_dir, f"{artifact_name}.zip")
with open(file_path, "wb") as fp:
fp.write(response.content)
def get_errors_from_single_artifact(artifact_zip_path, job_links=None):
"""Extract errors from a downloaded artifact (in .zip format)"""
errors = []
failed_tests = []
job_name = None
with zipfile.ZipFile(artifact_zip_path) as z:
for filename in z.namelist():
if not os.path.isdir(filename):
# read the file
if filename in ["failures_line.txt", "summary_short.txt", "job_name.txt"]:
with z.open(filename) as f:
for line in f:
line = line.decode("UTF-8").strip()
if filename == "failures_line.txt":
try:
# `error_line` is the place where `error` occurs
error_line = line[: line.index(": ")]
error = line[line.index(": ") + len(": ") :]
errors.append([error_line, error])
except Exception:
# skip un-related lines
pass
elif filename == "summary_short.txt" and line.startswith("FAILED "):
# `test` is the test method that failed
test = line[len("FAILED ") :]
failed_tests.append(test)
elif filename == "job_name.txt":
job_name = line
if len(errors) != len(failed_tests):
raise ValueError(
f"`errors` and `failed_tests` should have the same number of elements. Got {len(errors)} for `errors` "
f"and {len(failed_tests)} for `failed_tests` instead. The test reports in {artifact_zip_path} have some"
" problem."
)
job_link = None
if job_name and job_links:
job_link = job_links.get(job_name, None)
    # A list with elements of the form [line of error, error, failed test, job link]
result = [x + [y] + [job_link] for x, y in zip(errors, failed_tests)]
return result
def get_all_errors(artifact_dir, job_links=None):
"""Extract errors from all artifact files"""
errors = []
paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if p.endswith(".zip")]
for p in paths:
errors.extend(get_errors_from_single_artifact(p, job_links=job_links))
return errors
def reduce_by_error(logs, error_filter=None):
"""count each error"""
counter = Counter()
counter.update([x[1] for x in logs])
counts = counter.most_common()
r = {}
for error, count in counts:
if error_filter is None or error not in error_filter:
r[error] = {"count": count, "failed_tests": [(x[2], x[0]) for x in logs if x[1] == error]}
r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True))
return r
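# The returned mapping has the shape (most frequent error first):
#   {"<error message>": {"count": N, "failed_tests": [(<failed test>, <line where the error occurred>), ...]}, ...}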
def get_model(test):
"""Get the model name from a test method"""
test = test.split("::")[0]
if test.startswith("tests/models/"):
test = test.split("/")[2]
else:
test = None
return test
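# Illustrative example (test ids are made up):
#   get_model("tests/models/bert/test_modeling_bert.py::BertModelTest::test_forward") -> "bert"
#   get_model("tests/pipelines/test_pipelines_common.py::CommonTest::test_x") -> None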
def reduce_by_model(logs, error_filter=None):
"""count each error per model"""
logs = [(x[0], x[1], get_model(x[2])) for x in logs]
logs = [x for x in logs if x[2] is not None]
tests = {x[2] for x in logs}
r = {}
for test in tests:
counter = Counter()
# count by errors in `test`
counter.update([x[1] for x in logs if x[2] == test])
counts = counter.most_common()
error_counts = {error: count for error, count in counts if (error_filter is None or error not in error_filter)}
n_errors = sum(error_counts.values())
if n_errors > 0:
r[test] = {"count": n_errors, "errors": error_counts}
r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True))
return r
def make_github_table(reduced_by_error):
header = "| no. | error | status |"
sep = "|-:|:-|:-|"
lines = [header, sep]
for error in reduced_by_error:
count = reduced_by_error[error]["count"]
line = f"| {count} | {error[:100]} | |"
lines.append(line)
return "\n".join(lines)
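# A sketch of the produced Markdown table (values are made up):
#   | no. | error | status |
#   |-:|:-|:-|
#   | 42 | RuntimeError: CUDA out of memory. ... | |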
def make_github_table_per_model(reduced_by_model):
header = "| model | no. of errors | major error | count |"
sep = "|-:|-:|-:|-:|"
lines = [header, sep]
for model in reduced_by_model:
count = reduced_by_model[model]["count"]
error, _count = list(reduced_by_model[model]["errors"].items())[0]
line = f"| {model} | {count} | {error[:60]} | {_count} |"
lines.append(line)
return "\n".join(lines)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.")
parser.add_argument(
"--output_dir",
type=str,
required=True,
help="Where to store the downloaded artifacts and other result files.",
)
parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.")
args = parser.parse_args()
os.makedirs(args.output_dir, exist_ok=True)
_job_links = get_job_links(args.workflow_run_id, token=args.token)
job_links = {}
# To deal with `workflow_call` event, where a job name is the combination of the job names in the caller and callee.
# For example, `PyTorch 1.11 / Model tests (models/albert, single-gpu)`.
if _job_links:
for k, v in _job_links.items():
# This is how GitHub actions combine job names.
if " / " in k:
index = k.find(" / ")
k = k[index + len(" / ") :]
job_links[k] = v
with open(os.path.join(args.output_dir, "job_links.json"), "w", encoding="UTF-8") as fp:
json.dump(job_links, fp, ensure_ascii=False, indent=4)
artifacts = get_artifacts_links(args.workflow_run_id, token=args.token)
with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp:
json.dump(artifacts, fp, ensure_ascii=False, indent=4)
for idx, (name, url) in enumerate(artifacts.items()):
download_artifact(name, url, args.output_dir, args.token)
# Be gentle to GitHub
time.sleep(1)
errors = get_all_errors(args.output_dir, job_links=job_links)
# `e[1]` is the error
counter = Counter()
counter.update([e[1] for e in errors])
# print the top 30 most common test errors
most_common = counter.most_common(30)
for item in most_common:
print(item)
with open(os.path.join(args.output_dir, "errors.json"), "w", encoding="UTF-8") as fp:
json.dump(errors, fp, ensure_ascii=False, indent=4)
reduced_by_error = reduce_by_error(errors)
reduced_by_model = reduce_by_model(errors)
s1 = make_github_table(reduced_by_error)
s2 = make_github_table_per_model(reduced_by_model)
with open(os.path.join(args.output_dir, "reduced_by_error.txt"), "w", encoding="UTF-8") as fp:
fp.write(s1)
with open(os.path.join(args.output_dir, "reduced_by_model.txt"), "w", encoding="UTF-8") as fp:
fp.write(s2)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/create_dummy_models.py | # coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import collections.abc
import copy
import inspect
import json
import multiprocessing
import os
import shutil
import tempfile
import traceback
from pathlib import Path
from check_config_docstrings import get_checkpoint_from_config_class
from datasets import load_dataset
from get_test_info import get_model_to_tester_mapping, get_tester_classes_for_model
from huggingface_hub import Repository, create_repo, hf_api, upload_folder
from transformers import (
CONFIG_MAPPING,
FEATURE_EXTRACTOR_MAPPING,
IMAGE_PROCESSOR_MAPPING,
PROCESSOR_MAPPING,
TOKENIZER_MAPPING,
AutoTokenizer,
LayoutLMv3TokenizerFast,
PreTrainedTokenizer,
PreTrainedTokenizerFast,
logging,
)
from transformers.feature_extraction_utils import FeatureExtractionMixin
from transformers.file_utils import is_tf_available, is_torch_available
from transformers.image_processing_utils import BaseImageProcessor
from transformers.models.auto.configuration_auto import AutoConfig, model_type_to_module_name
from transformers.models.fsmt import configuration_fsmt
from transformers.processing_utils import ProcessorMixin, transformers_module
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
# make sure tokenizer plays nice with multiprocessing
os.environ["TOKENIZERS_PARALLELISM"] = "false"
logging.set_verbosity_error()
logging.disable_progress_bar()
logger = logging.get_logger(__name__)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
if not is_torch_available():
raise ValueError("Please install PyTorch.")
if not is_tf_available():
raise ValueError("Please install TensorFlow.")
FRAMEWORKS = ["pytorch", "tensorflow"]
INVALID_ARCH = []
TARGET_VOCAB_SIZE = 1024
data = {"training_ds": None, "testing_ds": None}
COMPOSITE_MODELS = {
"EncoderDecoderModel": "EncoderDecoderModel-bert-bert",
"SpeechEncoderDecoderModel": "SpeechEncoderDecoderModel-wav2vec2-bert",
"VisionEncoderDecoderModel": "VisionEncoderDecoderModel-vit-gpt2",
"VisionTextDualEncoderModel": "VisionTextDualEncoderModel-vit-bert",
}
# This list contains the model architectures for which a tiny version could not be created.
# Avoid to add new architectures here - unless we have verified carefully that it's (almost) impossible to create them.
# One such case is: no model tester class is implemented for a model type (like `MT5`) because its architecture is
# identical to another one (`MT5` is based on `T5`), but trained on different datasets or with different techniques.
UNCONVERTIBLE_MODEL_ARCHITECTURES = {
"BertGenerationEncoder",
"BertGenerationDecoder",
"CamembertForSequenceClassification",
"CamembertForMultipleChoice",
"CamembertForMaskedLM",
"CamembertForCausalLM",
"CamembertForTokenClassification",
"CamembertForQuestionAnswering",
"CamembertModel",
"TFCamembertForMultipleChoice",
"TFCamembertForTokenClassification",
"TFCamembertForQuestionAnswering",
"TFCamembertForSequenceClassification",
"TFCamembertForMaskedLM",
"TFCamembertModel",
"TFCamembertForCausalLM",
"DecisionTransformerModel",
"GraphormerModel",
"InformerModel",
"JukeboxModel",
"MarianForCausalLM",
"MaskFormerSwinModel",
"MaskFormerSwinBackbone",
"MT5Model",
"MT5ForConditionalGeneration",
"UMT5ForConditionalGeneration",
"TFMT5ForConditionalGeneration",
"TFMT5Model",
"QDQBertForSequenceClassification",
"QDQBertForMaskedLM",
"QDQBertModel",
"QDQBertForTokenClassification",
"QDQBertLMHeadModel",
"QDQBertForMultipleChoice",
"QDQBertForQuestionAnswering",
"QDQBertForNextSentencePrediction",
"ReformerModelWithLMHead",
"RetriBertModel",
"Speech2Text2ForCausalLM",
"TimeSeriesTransformerModel",
"TrajectoryTransformerModel",
"TrOCRForCausalLM",
"XLMProphetNetForConditionalGeneration",
"XLMProphetNetForCausalLM",
"XLMProphetNetModel",
"XLMRobertaModel",
"XLMRobertaForTokenClassification",
"XLMRobertaForMultipleChoice",
"XLMRobertaForMaskedLM",
"XLMRobertaForCausalLM",
"XLMRobertaForSequenceClassification",
"XLMRobertaForQuestionAnswering",
"TFXLMRobertaForSequenceClassification",
"TFXLMRobertaForMaskedLM",
"TFXLMRobertaForCausalLM",
"TFXLMRobertaForQuestionAnswering",
"TFXLMRobertaModel",
"TFXLMRobertaForMultipleChoice",
"TFXLMRobertaForTokenClassification",
}
def get_processor_types_from_config_class(config_class, allowed_mappings=None):
"""Return a tuple of processors for `config_class`.
We use `tuple` here to include (potentially) both slow & fast tokenizers.
"""
# To make a uniform return type
def _to_tuple(x):
if not isinstance(x, collections.abc.Sequence):
x = (x,)
else:
x = tuple(x)
return x
if allowed_mappings is None:
allowed_mappings = ["processor", "tokenizer", "image_processor", "feature_extractor"]
processor_types = ()
# Check first if a model has `ProcessorMixin`. Otherwise, check if it has tokenizers, and/or an image processor or
# a feature extractor
if config_class in PROCESSOR_MAPPING and "processor" in allowed_mappings:
processor_types = _to_tuple(PROCESSOR_MAPPING[config_class])
else:
if config_class in TOKENIZER_MAPPING and "tokenizer" in allowed_mappings:
processor_types = TOKENIZER_MAPPING[config_class]
if config_class in IMAGE_PROCESSOR_MAPPING and "image_processor" in allowed_mappings:
processor_types += _to_tuple(IMAGE_PROCESSOR_MAPPING[config_class])
elif config_class in FEATURE_EXTRACTOR_MAPPING and "feature_extractor" in allowed_mappings:
processor_types += _to_tuple(FEATURE_EXTRACTOR_MAPPING[config_class])
    # Remark: some configurations have no processor at all. For example, a generic composite model like
    # `EncoderDecoderModel` is used with any (compatible) text models. Also, `DecisionTransformer` doesn't
# require any processor.
# We might get `None` for some tokenizers - remove them here.
processor_types = tuple(p for p in processor_types if p is not None)
return processor_types
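# Illustrative example (assuming the usual auto mappings): for `BertConfig`, which has no `ProcessorMixin`,
# this returns the tokenizer classes, e.g. (BertTokenizer, BertTokenizerFast); for a vision-only config it
# would instead return an image processor (or feature extractor) class.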
def get_architectures_from_config_class(config_class, arch_mappings, models_to_skip=None):
"""Return a tuple of all possible architectures attributed to a configuration class `config_class`.
For example, BertConfig -> [BertModel, BertForMaskedLM, ..., BertForQuestionAnswering].
"""
# A model architecture could appear in several mappings. For example, `BartForConditionalGeneration` is in
# - MODEL_FOR_PRETRAINING_MAPPING_NAMES
# - MODEL_WITH_LM_HEAD_MAPPING_NAMES
# - MODEL_FOR_MASKED_LM_MAPPING_NAMES
# - MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES
# We avoid the duplication.
architectures = set()
if models_to_skip is None:
models_to_skip = []
models_to_skip = UNCONVERTIBLE_MODEL_ARCHITECTURES.union(models_to_skip)
for mapping in arch_mappings:
if config_class in mapping:
models = mapping[config_class]
models = tuple(models) if isinstance(models, collections.abc.Sequence) else (models,)
for model in models:
if model.__name__ not in models_to_skip:
architectures.add(model)
architectures = tuple(architectures)
return architectures
def get_config_class_from_processor_class(processor_class):
"""Get the config class from a processor class.
Some config/model classes use tokenizers/feature_extractors from other models. For example, `GPT-J` uses
`GPT2Tokenizer`. If no checkpoint is found for a config class, or a checkpoint is found without necessary file(s) to
create the processor for `processor_class`, we get the config class that corresponds to `processor_class` and use it
to find a checkpoint in order to create the processor.
"""
processor_prefix = processor_class.__name__
for postfix in ["TokenizerFast", "Tokenizer", "ImageProcessor", "FeatureExtractor", "Processor"]:
processor_prefix = processor_prefix.replace(postfix, "")
# `Wav2Vec2CTCTokenizer` -> `Wav2Vec2Config`
if processor_prefix == "Wav2Vec2CTC":
processor_prefix = "Wav2Vec2"
# Find the new configuration class
new_config_name = f"{processor_prefix}Config"
new_config_class = getattr(transformers_module, new_config_name)
return new_config_class
def build_processor(config_class, processor_class, allow_no_checkpoint=False):
"""Create a processor for `processor_class`.
If a processor is not able to be built with the original arguments, this method tries to change the arguments and
call itself recursively, by inferring a new `config_class` or a new `processor_class` from another one, in order to
find a checkpoint containing the necessary files to build a processor.
The processor is not saved here. Instead, it will be saved in `convert_processors` after further changes in
    `convert_processors`. For each model architecture, a copy will be created and saved alongside the built model.
"""
# Currently, this solely uses the docstring in the source file of `config_class` to find a checkpoint.
checkpoint = get_checkpoint_from_config_class(config_class)
if checkpoint is None:
# try to get the checkpoint from the config class for `processor_class`.
# This helps cases like `XCLIPConfig` and `VideoMAEFeatureExtractor` to find a checkpoint from `VideoMAEConfig`.
config_class_from_processor_class = get_config_class_from_processor_class(processor_class)
checkpoint = get_checkpoint_from_config_class(config_class_from_processor_class)
processor = None
try:
processor = processor_class.from_pretrained(checkpoint)
except Exception as e:
logger.error(f"{e.__class__.__name__}: {e}")
# Try to get a new processor class from checkpoint. This is helpful for a checkpoint without necessary file to load
# processor while `processor_class` is an Auto class. For example, `sew` has `Wav2Vec2Processor` in
# `PROCESSOR_MAPPING_NAMES`, its `tokenizer_class` is `AutoTokenizer`, and the checkpoint
# `https://huggingface.co/asapp/sew-tiny-100k` has no tokenizer file, but we can get
# `tokenizer_class: Wav2Vec2CTCTokenizer` from the config file. (The new processor class won't be able to load from
# `checkpoint`, but it helps this recursive method to find a way to build a processor).
if (
processor is None
and checkpoint is not None
and issubclass(processor_class, (PreTrainedTokenizerBase, AutoTokenizer))
):
try:
config = AutoConfig.from_pretrained(checkpoint)
except Exception as e:
logger.error(f"{e.__class__.__name__}: {e}")
config = None
if config is not None:
if not isinstance(config, config_class):
raise ValueError(
f"`config` (which is of type {config.__class__.__name__}) should be an instance of `config_class`"
f" ({config_class.__name__})!"
)
tokenizer_class = config.tokenizer_class
new_processor_class = None
if tokenizer_class is not None:
new_processor_class = getattr(transformers_module, tokenizer_class)
if new_processor_class != processor_class:
processor = build_processor(config_class, new_processor_class)
            # If `tokenizer_class` is not specified in `config`, let's use `config` to get the processor class via auto
            # mappings, but only allow the tokenizer mapping to be used. This is to make `Wav2Vec2Conformer` build a processor.
if processor is None:
new_processor_classes = get_processor_types_from_config_class(
config.__class__, allowed_mappings=["tokenizer"]
)
# Used to avoid infinite recursion between a pair of fast/slow tokenizer types
names = [
x.__name__.replace("Fast", "") for x in [processor_class, new_processor_class] if x is not None
]
new_processor_classes = [
x for x in new_processor_classes if x is not None and x.__name__.replace("Fast", "") not in names
]
if len(new_processor_classes) > 0:
new_processor_class = new_processor_classes[0]
# Let's use fast tokenizer if there is any
for x in new_processor_classes:
if x.__name__.endswith("Fast"):
new_processor_class = x
break
processor = build_processor(config_class, new_processor_class)
if processor is None:
# Try to build each component (tokenizer & feature extractor) of a `ProcessorMixin`.
if issubclass(processor_class, ProcessorMixin):
attrs = {}
for attr_name in processor_class.attributes:
attrs[attr_name] = []
# This could be a tuple (for tokenizers). For example, `CLIPProcessor` has
# - feature_extractor_class = "CLIPFeatureExtractor"
# - tokenizer_class = ("CLIPTokenizer", "CLIPTokenizerFast")
attr_class_names = getattr(processor_class, f"{attr_name}_class")
if not isinstance(attr_class_names, tuple):
attr_class_names = (attr_class_names,)
for name in attr_class_names:
attr_class = getattr(transformers_module, name)
attr = build_processor(config_class, attr_class)
if attr is not None:
attrs[attr_name].append(attr)
# try to build a `ProcessorMixin`, so we can return a single value
if all(len(v) > 0 for v in attrs.values()):
try:
processor = processor_class(**{k: v[0] for k, v in attrs.items()})
except Exception as e:
logger.error(f"{e.__class__.__name__}: {e}")
else:
# `checkpoint` might lack some file(s) to load a processor. For example, `facebook/hubert-base-ls960`
# has no tokenizer file to load `Wav2Vec2CTCTokenizer`. In this case, we try to build a processor
# with the configuration class (for example, `Wav2Vec2Config`) corresponding to `processor_class`.
config_class_from_processor_class = get_config_class_from_processor_class(processor_class)
if config_class_from_processor_class != config_class:
processor = build_processor(config_class_from_processor_class, processor_class)
# Try to create an image processor or a feature extractor without any checkpoint
if (
processor is None
and allow_no_checkpoint
and (issubclass(processor_class, BaseImageProcessor) or issubclass(processor_class, FeatureExtractionMixin))
):
try:
processor = processor_class()
except Exception as e:
logger.error(f"{e.__class__.__name__}: {e}")
# validation
if processor is not None:
if not (isinstance(processor, processor_class) or processor_class.__name__.startswith("Auto")):
raise ValueError(
f"`processor` (which is of type {processor.__class__.__name__}) should be an instance of"
f" {processor_class.__name__} or an Auto class!"
)
return processor
def get_tiny_config(config_class, model_class=None, **model_tester_kwargs):
"""Retrieve a tiny configuration from `config_class` using each model's `ModelTester`.
Args:
config_class: Subclass of `PreTrainedConfig`.
Returns:
An instance of `config_class` with tiny hyperparameters
"""
model_type = config_class.model_type
# For model type like `data2vec-vision` and `donut-swin`, we can't get the config/model file name directly via
    # `model_type` as it would be something like `configuration_data2vec_vision.py`.
# A simple way is to use `inspect.getsourcefile(config_class)`.
config_source_file = inspect.getsourcefile(config_class)
# The modeling file name without prefix (`modeling_`) and postfix (`.py`)
modeling_name = config_source_file.split(os.path.sep)[-1].replace("configuration_", "").replace(".py", "")
try:
print("Importing", model_type_to_module_name(model_type))
module_name = model_type_to_module_name(model_type)
if not modeling_name.startswith(module_name):
raise ValueError(f"{modeling_name} doesn't start with {module_name}!")
test_file = os.path.join("tests", "models", module_name, f"test_modeling_{modeling_name}.py")
models_to_model_testers = get_model_to_tester_mapping(test_file)
# Find the model tester class
model_tester_class = None
tester_classes = []
if model_class is not None:
tester_classes = get_tester_classes_for_model(test_file, model_class)
else:
for _tester_classes in models_to_model_testers.values():
tester_classes.extend(_tester_classes)
if len(tester_classes) > 0:
# sort with the length of the class names first, then the alphabetical order
# This is to avoid `T5EncoderOnlyModelTest` is used instead of `T5ModelTest`, which has
            # `is_encoder_decoder=False` and causes some pipeline tests to fail (also failures in `Optimum` CI).
# TODO: More fine grained control of the desired tester class.
model_tester_class = sorted(tester_classes, key=lambda x: (len(x.__name__), x.__name__))[0]
except ModuleNotFoundError:
error = f"Tiny config not created for {model_type} - cannot find the testing module from the model name."
raise ValueError(error)
if model_tester_class is None:
error = f"Tiny config not created for {model_type} - no model tester is found in the testing module."
raise ValueError(error)
# CLIP-like models have `text_model_tester` and `vision_model_tester`, and we need to pass `vocab_size` to
# `text_model_tester` via `text_kwargs`. The same trick is also necessary for `Flava`.
if "vocab_size" in model_tester_kwargs:
if "text_kwargs" in inspect.signature(model_tester_class.__init__).parameters.keys():
vocab_size = model_tester_kwargs.pop("vocab_size")
model_tester_kwargs["text_kwargs"] = {"vocab_size": vocab_size}
# `parent` is an instance of `unittest.TestCase`, but we don't need it here.
model_tester = model_tester_class(parent=None, **model_tester_kwargs)
if hasattr(model_tester, "get_pipeline_config"):
config = model_tester.get_pipeline_config()
elif hasattr(model_tester, "prepare_config_and_inputs"):
# `PoolFormer` has no `get_config` defined. Furthermore, it's better to use `prepare_config_and_inputs` even if
# `get_config` is defined, since there might be some extra changes in `prepare_config_and_inputs`.
config = model_tester.prepare_config_and_inputs()[0]
elif hasattr(model_tester, "get_config"):
config = model_tester.get_config()
else:
error = (
f"Tiny config not created for {model_type} - the model tester {model_tester_class.__name__} lacks"
" necessary method to create config."
)
raise ValueError(error)
# make sure this is long enough (some model tester has `20` for this attr.) to pass `text-generation`
# pipeline tests.
max_positions = []
for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]:
if getattr(config, key, 0) > 0:
max_positions.append(getattr(config, key))
if getattr(config, "text_config", None) is not None:
if getattr(config.text_config, key, None) is not None:
max_positions.append(getattr(config.text_config, key))
if len(max_positions) > 0:
max_position = max(200, min(max_positions))
for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]:
if getattr(config, key, 0) > 0:
setattr(config, key, max_position)
if getattr(config, "text_config", None) is not None:
if getattr(config.text_config, key, None) is not None:
setattr(config.text_config, key, max_position)
return config
def convert_tokenizer(tokenizer_fast: PreTrainedTokenizerFast):
new_tokenizer = tokenizer_fast.train_new_from_iterator(
data["training_ds"]["text"], TARGET_VOCAB_SIZE, show_progress=False
)
# Make sure it at least runs
if not isinstance(new_tokenizer, LayoutLMv3TokenizerFast):
new_tokenizer(data["testing_ds"]["text"])
return new_tokenizer
def convert_feature_extractor(feature_extractor, tiny_config):
to_convert = False
kwargs = {}
if hasattr(tiny_config, "image_size"):
kwargs["size"] = tiny_config.image_size
kwargs["crop_size"] = tiny_config.image_size
to_convert = True
elif (
hasattr(tiny_config, "vision_config")
and tiny_config.vision_config is not None
and hasattr(tiny_config.vision_config, "image_size")
):
kwargs["size"] = tiny_config.vision_config.image_size
kwargs["crop_size"] = tiny_config.vision_config.image_size
to_convert = True
# Speech2TextModel specific.
if hasattr(tiny_config, "input_feat_per_channel"):
kwargs["feature_size"] = tiny_config.input_feat_per_channel
kwargs["num_mel_bins"] = tiny_config.input_feat_per_channel
to_convert = True
if to_convert:
feature_extractor = feature_extractor.__class__(**kwargs)
return feature_extractor
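# Illustrative example: with `tiny_config.image_size = 30`, the image processor / feature extractor is
# re-instantiated with `size=30` and `crop_size=30`, so the dummy checkpoints only need tiny inputs.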
def convert_processors(processors, tiny_config, output_folder, result):
"""Change a processor to work with smaller inputs.
For tokenizers, we try to reduce their vocabulary size.
For feature extractor, we use smaller image size or change
other attributes using the values from `tiny_config`. See `convert_feature_extractor`.
This method should not fail: we catch the errors and put them in `result["warnings"]` with descriptive messages.
"""
def _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False):
"""Set tokenizer(s) to `None` if the fast/slow tokenizers have different values for `vocab_size` or `length`.
If `keep_fast_tokenizer=True`, the fast tokenizer will be kept.
"""
# sanity check 1: fast and slow tokenizers should be compatible (vocab_size)
if fast_tokenizer is not None and slow_tokenizer is not None:
if fast_tokenizer.vocab_size != slow_tokenizer.vocab_size:
                warning_message = (
"The fast/slow tokenizers "
f"({fast_tokenizer.__class__.__name__}/{slow_tokenizer.__class__.__name__}) have different "
"vocabulary size: "
f"fast_tokenizer.vocab_size = {fast_tokenizer.vocab_size} and "
f"slow_tokenizer.vocab_size = {slow_tokenizer.vocab_size}."
)
                result["warnings"].append(warning_message)
if not keep_fast_tokenizer:
fast_tokenizer = None
slow_tokenizer = None
# sanity check 2: fast and slow tokenizers should be compatible (length)
if fast_tokenizer is not None and slow_tokenizer is not None:
if len(fast_tokenizer) != len(slow_tokenizer):
                warning_message = (
                    "The fast/slow tokenizers "
                    f"({fast_tokenizer.__class__.__name__}/{slow_tokenizer.__class__.__name__}) have different "
                    f"lengths: len(fast_tokenizer) = {len(fast_tokenizer)} and "
                    f"len(slow_tokenizer) = {len(slow_tokenizer)}."
                )
                result["warnings"].append(warning_message)
if not keep_fast_tokenizer:
fast_tokenizer = None
slow_tokenizer = None
return fast_tokenizer, slow_tokenizer
tokenizers = []
feature_extractors = []
for processor in processors:
if isinstance(processor, PreTrainedTokenizerBase):
if processor.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}:
tokenizers.append(processor)
elif isinstance(processor, BaseImageProcessor):
if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}:
feature_extractors.append(processor)
elif isinstance(processor, FeatureExtractionMixin):
if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}:
feature_extractors.append(processor)
elif isinstance(processor, ProcessorMixin):
if hasattr(processor, "tokenizer"):
if processor.tokenizer.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}:
tokenizers.append(processor.tokenizer)
# Currently, we only have these 2 possibilities
if hasattr(processor, "image_processor"):
if processor.image_processor.__class__.__name__ not in {
x.__class__.__name__ for x in feature_extractors
}:
feature_extractors.append(processor.image_processor)
elif hasattr(processor, "feature_extractor"):
if processor.feature_extractor.__class__.__name__ not in {
x.__class__.__name__ for x in feature_extractors
}:
feature_extractors.append(processor.feature_extractor)
# check the built processors have the unique type
num_types = len({x.__class__.__name__ for x in feature_extractors})
if num_types >= 2:
raise ValueError(f"`feature_extractors` should contain at most 1 type, but it contains {num_types} types!")
num_types = len({x.__class__.__name__.replace("Fast", "") for x in tokenizers})
if num_types >= 2:
raise ValueError(f"`tokenizers` should contain at most 1 tokenizer type, but it contains {num_types} types!")
fast_tokenizer = None
slow_tokenizer = None
for tokenizer in tokenizers:
if isinstance(tokenizer, PreTrainedTokenizerFast):
fast_tokenizer = tokenizer
else:
slow_tokenizer = tokenizer
# If the (original) fast/slow tokenizers don't correspond, keep only the fast tokenizer.
    # This doesn't necessarily imply that the fast/slow tokenizers in a single Hub repo have issues.
# It's more of an issue in `build_processor` which tries to get a checkpoint with as much effort as possible.
# For `YosoModel` (which uses `AlbertTokenizer(Fast)`), its real (Hub) checkpoint doesn't contain valid files to
# load the slower tokenizer (`AlbertTokenizer`), and it ends up finding the (canonical) checkpoint of `AlbertModel`,
# which has different vocabulary.
# TODO: Try to improve `build_processor`'s definition and/or usage to avoid the above situation in the first place.
fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=True)
original_fast_tokenizer, original_slow_tokenizer = fast_tokenizer, slow_tokenizer
if fast_tokenizer:
try:
            # Wav2Vec2ForCTC, ByT5Tokenizer etc. are already small enough and have no fast version that can
# be retrained
if fast_tokenizer.vocab_size > TARGET_VOCAB_SIZE:
fast_tokenizer = convert_tokenizer(fast_tokenizer)
except Exception:
result["warnings"].append(
(
f"Failed to convert the fast tokenizer for {fast_tokenizer.__class__.__name__}.",
traceback.format_exc(),
)
)
# If `fast_tokenizer` exists, `slow_tokenizer` should correspond to it.
if fast_tokenizer:
# Make sure the fast tokenizer can be saved
try:
# We don't save it to `output_folder` at this moment - only at the end of this function.
with tempfile.TemporaryDirectory() as tmpdir:
fast_tokenizer.save_pretrained(tmpdir)
try:
slow_tokenizer = AutoTokenizer.from_pretrained(tmpdir, use_fast=False)
except Exception:
result["warnings"].append(
(
f"Failed to load the slow tokenizer saved from {fast_tokenizer.__class__.__name__}.",
traceback.format_exc(),
)
)
# Let's just keep the fast version
slow_tokenizer = None
except Exception:
result["warnings"].append(
(
f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.",
traceback.format_exc(),
)
)
fast_tokenizer = None
# If the (possibly converted) fast/slow tokenizers don't correspond, set them to `None`, and use the original
# tokenizers.
fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False)
# If there is any conversion failed, we keep the original tokenizers.
if (original_fast_tokenizer is not None and fast_tokenizer is None) or (
original_slow_tokenizer is not None and slow_tokenizer is None
):
        warning_message = (
            "There are some issues when converting the fast/slow tokenizers. The original tokenizers from the Hub"
            " will be used instead."
        )
        result["warnings"].append(warning_message)
# Let's use the original version at the end (`original_fast_tokenizer` and `original_slow_tokenizer`)
fast_tokenizer = original_fast_tokenizer
slow_tokenizer = original_slow_tokenizer
# Make sure the fast tokenizer can be saved
if fast_tokenizer:
# We don't save it to `output_folder` at this moment - only at the end of this function.
with tempfile.TemporaryDirectory() as tmpdir:
try:
fast_tokenizer.save_pretrained(tmpdir)
except Exception:
result["warnings"].append(
(
f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.",
traceback.format_exc(),
)
)
fast_tokenizer = None
# Make sure the slow tokenizer can be saved
if slow_tokenizer:
# We don't save it to `output_folder` at this moment - only at the end of this function.
with tempfile.TemporaryDirectory() as tmpdir:
try:
slow_tokenizer.save_pretrained(tmpdir)
except Exception:
result["warnings"].append(
(
f"Failed to save the slow tokenizer for {slow_tokenizer.__class__.__name__}.",
traceback.format_exc(),
)
)
slow_tokenizer = None
# update feature extractors using the tiny config
try:
feature_extractors = [convert_feature_extractor(p, tiny_config) for p in feature_extractors]
except Exception:
result["warnings"].append(
(
"Failed to convert feature extractors.",
traceback.format_exc(),
)
)
feature_extractors = []
if hasattr(tiny_config, "max_position_embeddings") and tiny_config.max_position_embeddings > 0:
if fast_tokenizer is not None:
if fast_tokenizer.__class__.__name__ in [
"RobertaTokenizerFast",
"XLMRobertaTokenizerFast",
"LongformerTokenizerFast",
"MPNetTokenizerFast",
]:
fast_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2
else:
fast_tokenizer.model_max_length = tiny_config.max_position_embeddings
if slow_tokenizer is not None:
if slow_tokenizer.__class__.__name__ in [
"RobertaTokenizer",
"XLMRobertaTokenizer",
"LongformerTokenizer",
"MPNetTokenizer",
]:
slow_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2
else:
slow_tokenizer.model_max_length = tiny_config.max_position_embeddings
processors = [fast_tokenizer, slow_tokenizer] + feature_extractors
processors = [p for p in processors if p is not None]
for p in processors:
p.save_pretrained(output_folder)
return processors
def get_checkpoint_dir(output_dir, model_arch):
"""Get framework-agnostic architecture name. Used to save all PT/TF/Flax models into the same directory."""
arch_name = model_arch.__name__
if arch_name.startswith("TF"):
arch_name = arch_name[2:]
elif arch_name.startswith("Flax"):
arch_name = arch_name[4:]
return os.path.join(output_dir, arch_name)
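# Illustrative usage (hypothetical output directory, not executed): `TFBertForMaskedLM` and
# `FlaxBertForMaskedLM` both map to the same folder as `BertForMaskedLM`, e.g.
# `get_checkpoint_dir("/tmp/tiny", TFBertForMaskedLM)` gives "/tmp/tiny/BertForMaskedLM" on a POSIX system.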
def build_model(model_arch, tiny_config, output_dir):
"""Create and save a model for `model_arch`.
Also copy the set of processors to each model (under the same model type) output folder.
"""
checkpoint_dir = get_checkpoint_dir(output_dir, model_arch)
processor_output_dir = os.path.join(output_dir, "processors")
# copy the (same set of) processors (for a model type) to the model arch. specific folder
if os.path.isdir(processor_output_dir):
shutil.copytree(processor_output_dir, checkpoint_dir, dirs_exist_ok=True)
tiny_config = copy.deepcopy(tiny_config)
if any(model_arch.__name__.endswith(x) for x in ["ForCausalLM", "LMHeadModel"]):
tiny_config.is_encoder_decoder = False
tiny_config.is_decoder = True
model = model_arch(config=tiny_config)
model.save_pretrained(checkpoint_dir)
model.from_pretrained(checkpoint_dir)
return model
def fill_result_with_error(result, error, trace, models_to_create):
"""Fill `result` with errors for all target model arch if we can't build processor"""
error = (error, trace)
result["error"] = error
for framework in FRAMEWORKS:
if framework in models_to_create:
result[framework] = {}
for model_arch in models_to_create[framework]:
result[framework][model_arch.__name__] = {"model": None, "checkpoint": None, "error": error}
result["processor"] = {p.__class__.__name__: p.__class__.__name__ for p in result["processor"].values()}
def upload_model(model_dir, organization, token):
"""Upload the tiny models"""
arch_name = model_dir.split(os.path.sep)[-1]
repo_name = f"tiny-random-{arch_name}"
repo_id = f"{organization}/{repo_name}"
repo_exist = False
error = None
try:
create_repo(repo_id=repo_id, exist_ok=False, repo_type="model", token=token)
except Exception as e:
error = e
if "You already created" in str(e):
error = None
logger.warning("Remote repository exists and will be cloned.")
repo_exist = True
try:
create_repo(repo_id=repo_id, exist_ok=True, repo_type="model", token=token)
except Exception as e:
error = e
if error is not None:
raise error
with tempfile.TemporaryDirectory() as tmpdir:
repo = Repository(local_dir=tmpdir, clone_from=repo_id, token=token)
repo.git_pull()
shutil.copytree(model_dir, tmpdir, dirs_exist_ok=True)
if repo_exist:
# Open a PR on the existing Hub repo.
hub_pr_url = upload_folder(
folder_path=model_dir,
repo_id=repo_id,
repo_type="model",
commit_message=f"Update tiny models for {arch_name}",
commit_description=f"Upload tiny models for {arch_name}",
create_pr=True,
token=token,
)
logger.warning(f"PR open in {hub_pr_url}.")
            # TODO: Do we need this information?
else:
# Push to Hub repo directly
repo.git_add(auto_lfs_track=True)
repo.git_commit(f"Upload tiny models for {arch_name}")
repo.git_push(blocking=True) # this prints a progress bar with the upload
logger.warning(f"Tiny models {arch_name} pushed to {repo_id}.")
def build_composite_models(config_class, output_dir):
import tempfile
from transformers import (
BertConfig,
BertLMHeadModel,
BertModel,
BertTokenizer,
BertTokenizerFast,
EncoderDecoderModel,
GPT2Config,
GPT2LMHeadModel,
GPT2Tokenizer,
GPT2TokenizerFast,
SpeechEncoderDecoderModel,
TFEncoderDecoderModel,
TFVisionEncoderDecoderModel,
TFVisionTextDualEncoderModel,
VisionEncoderDecoderModel,
VisionTextDualEncoderModel,
ViTConfig,
ViTFeatureExtractor,
ViTModel,
Wav2Vec2Config,
Wav2Vec2Model,
Wav2Vec2Processor,
)
# These will be removed at the end if they are empty
result = {"error": None, "warnings": []}
if config_class.model_type == "encoder-decoder":
encoder_config_class = BertConfig
decoder_config_class = BertConfig
encoder_processor = (BertTokenizerFast, BertTokenizer)
decoder_processor = (BertTokenizerFast, BertTokenizer)
encoder_class = BertModel
decoder_class = BertLMHeadModel
model_class = EncoderDecoderModel
tf_model_class = TFEncoderDecoderModel
elif config_class.model_type == "vision-encoder-decoder":
encoder_config_class = ViTConfig
decoder_config_class = GPT2Config
encoder_processor = (ViTFeatureExtractor,)
decoder_processor = (GPT2TokenizerFast, GPT2Tokenizer)
encoder_class = ViTModel
decoder_class = GPT2LMHeadModel
model_class = VisionEncoderDecoderModel
tf_model_class = TFVisionEncoderDecoderModel
elif config_class.model_type == "speech-encoder-decoder":
encoder_config_class = Wav2Vec2Config
decoder_config_class = BertConfig
encoder_processor = (Wav2Vec2Processor,)
decoder_processor = (BertTokenizerFast, BertTokenizer)
encoder_class = Wav2Vec2Model
decoder_class = BertLMHeadModel
model_class = SpeechEncoderDecoderModel
tf_model_class = None
elif config_class.model_type == "vision-text-dual-encoder":
# Not encoder-decoder, but encoder-encoder. We just keep the same name as above to make code easier
encoder_config_class = ViTConfig
decoder_config_class = BertConfig
encoder_processor = (ViTFeatureExtractor,)
decoder_processor = (BertTokenizerFast, BertTokenizer)
encoder_class = ViTModel
decoder_class = BertModel
model_class = VisionTextDualEncoderModel
tf_model_class = TFVisionTextDualEncoderModel
with tempfile.TemporaryDirectory() as tmpdir:
try:
# build encoder
models_to_create = {"processor": encoder_processor, "pytorch": (encoder_class,), "tensorflow": []}
encoder_output_dir = os.path.join(tmpdir, "encoder")
build(encoder_config_class, models_to_create, encoder_output_dir)
# build decoder
models_to_create = {"processor": decoder_processor, "pytorch": (decoder_class,), "tensorflow": []}
decoder_output_dir = os.path.join(tmpdir, "decoder")
build(decoder_config_class, models_to_create, decoder_output_dir)
# build encoder-decoder
encoder_path = os.path.join(encoder_output_dir, encoder_class.__name__)
decoder_path = os.path.join(decoder_output_dir, decoder_class.__name__)
if config_class.model_type != "vision-text-dual-encoder":
# Specify these explicitly for encoder-decoder like models, but not for `vision-text-dual-encoder` as it
# has no decoder.
decoder_config = decoder_config_class.from_pretrained(decoder_path)
decoder_config.is_decoder = True
decoder_config.add_cross_attention = True
model = model_class.from_encoder_decoder_pretrained(
encoder_path,
decoder_path,
decoder_config=decoder_config,
)
elif config_class.model_type == "vision-text-dual-encoder":
model = model_class.from_vision_text_pretrained(encoder_path, decoder_path)
model_path = os.path.join(
output_dir,
f"{model_class.__name__}-{encoder_config_class.model_type}-{decoder_config_class.model_type}",
)
model.save_pretrained(model_path)
if tf_model_class is not None:
model = tf_model_class.from_pretrained(model_path)
model.save_pretrained(model_path)
# copy the processors
encoder_processor_path = os.path.join(encoder_output_dir, "processors")
decoder_processor_path = os.path.join(decoder_output_dir, "processors")
if os.path.isdir(encoder_processor_path):
shutil.copytree(encoder_processor_path, model_path, dirs_exist_ok=True)
if os.path.isdir(decoder_processor_path):
shutil.copytree(decoder_processor_path, model_path, dirs_exist_ok=True)
# fill `result`
result["processor"] = {x.__name__: x.__name__ for x in encoder_processor + decoder_processor}
result["pytorch"] = {model_class.__name__: {"model": model_class.__name__, "checkpoint": model_path}}
result["tensorflow"] = {}
if tf_model_class is not None:
result["tensorflow"] = {
tf_model_class.__name__: {"model": tf_model_class.__name__, "checkpoint": model_path}
}
except Exception:
result["error"] = (
f"Failed to build models for {config_class.__name__}.",
traceback.format_exc(),
)
if not result["error"]:
del result["error"]
if not result["warnings"]:
del result["warnings"]
return result
def get_token_id_from_tokenizer(token_id_name, tokenizer, original_token_id):
"""Use `tokenizer` to get the values of `bos_token_id`, `eos_token_ids`, etc.
The argument `token_id_name` should be a string ending with `_token_id`, and `original_token_id` should be an
    integer that will be returned if `tokenizer` has no token corresponding to `token_id_name`.
"""
token_id = original_token_id
if not token_id_name.endswith("_token_id"):
raise ValueError(f"`token_id_name` is {token_id_name}, which doesn't end with `_token_id`!")
token = getattr(tokenizer, token_id_name.replace("_token_id", "_token"), None)
if token is not None:
if isinstance(tokenizer, PreTrainedTokenizerFast):
token_id = tokenizer._convert_token_to_id_with_added_voc(token)
else:
token_id = tokenizer._convert_token_to_id(token)
return token_id
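# Illustrative behavior (assumed tokenizer, not executed): with `token_id_name="eos_token_id"`, the helper reads
# `tokenizer.eos_token`, converts that token back to an id and returns it; if the tokenizer has no `eos_token`,
# the provided `original_token_id` is returned unchanged.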
def get_config_overrides(config_class, processors):
# `Bark` configuration is too special. Let's just not handle this for now.
if config_class.__name__ == "BarkConfig":
return {}
config_overrides = {}
# Check if there is any tokenizer (prefer fast version if any)
tokenizer = None
for processor in processors:
if isinstance(processor, PreTrainedTokenizerFast):
tokenizer = processor
break
elif isinstance(processor, PreTrainedTokenizer):
tokenizer = processor
if tokenizer is None:
return config_overrides
# Get some properties of the (already converted) tokenizer (smaller vocab size, special token ids, etc.)
# We use `len(tokenizer)` instead of `tokenizer.vocab_size` to avoid potential issues for tokenizers with non-empty
# `added_tokens_encoder`. One example is the `DebertaV2Tokenizer` where the mask token is the extra token.
vocab_size = len(tokenizer)
# The original checkpoint has length `35998`, but it doesn't have ids `30400` and `30514` but instead `35998` and
# `35999`.
if config_class.__name__ == "GPTSanJapaneseConfig":
vocab_size += 2
config_overrides["vocab_size"] = vocab_size
# Used to create a new model tester with `tokenizer.vocab_size` in order to get the (updated) special token ids.
model_tester_kwargs = {"vocab_size": vocab_size}
# `FSMTModelTester` accepts `src_vocab_size` and `tgt_vocab_size` but not `vocab_size`.
if config_class.__name__ == "FSMTConfig":
del model_tester_kwargs["vocab_size"]
model_tester_kwargs["src_vocab_size"] = tokenizer.src_vocab_size
model_tester_kwargs["tgt_vocab_size"] = tokenizer.tgt_vocab_size
_tiny_config = get_tiny_config(config_class, **model_tester_kwargs)
# handle the possibility of `text_config` inside `_tiny_config` for clip-like models (`owlvit`, `groupvit`, etc.)
if hasattr(_tiny_config, "text_config"):
_tiny_config = _tiny_config.text_config
# Collect values of some special token ids
for attr in dir(_tiny_config):
if attr.endswith("_token_id"):
token_id = getattr(_tiny_config, attr)
if token_id is not None:
# Using the token id values from `tokenizer` instead of from `_tiny_config`.
token_id = get_token_id_from_tokenizer(attr, tokenizer, original_token_id=token_id)
config_overrides[attr] = token_id
if config_class.__name__ == "FSMTConfig":
config_overrides["src_vocab_size"] = tokenizer.src_vocab_size
config_overrides["tgt_vocab_size"] = tokenizer.tgt_vocab_size
# `FSMTConfig` has `DecoderConfig` as `decoder` attribute.
config_overrides["decoder"] = configuration_fsmt.DecoderConfig(
vocab_size=tokenizer.tgt_vocab_size, bos_token_id=config_overrides["eos_token_id"]
)
return config_overrides
def build(config_class, models_to_create, output_dir):
"""Create all models for a certain model type.
Args:
config_class (`PretrainedConfig`):
A subclass of `PretrainedConfig` that is used to determine `models_to_create`.
models_to_create (`dict`):
A dictionary containing the processor/model classes that we want to create the instances. These models are
of the same model type which is associated to `config_class`.
output_dir (`str`):
The directory to save all the checkpoints. Each model architecture will be saved in a subdirectory under
it. Models in different frameworks with the same architecture will be saved in the same subdirectory.
"""
if data["training_ds"] is None or data["testing_ds"] is None:
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
data["training_ds"] = ds["train"]
data["testing_ds"] = ds["test"]
if config_class.model_type in [
"encoder-decoder",
"vision-encoder-decoder",
"speech-encoder-decoder",
"vision-text-dual-encoder",
]:
return build_composite_models(config_class, output_dir)
result = {k: {} for k in models_to_create}
# These will be removed at the end if they are empty
result["error"] = None
result["warnings"] = []
# Build processors
processor_classes = models_to_create["processor"]
if len(processor_classes) == 0:
error = f"No processor class could be found in {config_class.__name__}."
fill_result_with_error(result, error, None, models_to_create)
logger.error(result["error"][0])
return result
for processor_class in processor_classes:
try:
processor = build_processor(config_class, processor_class, allow_no_checkpoint=True)
if processor is not None:
result["processor"][processor_class] = processor
except Exception:
error = f"Failed to build processor for {processor_class.__name__}."
trace = traceback.format_exc()
fill_result_with_error(result, error, trace, models_to_create)
logger.error(result["error"][0])
return result
if len(result["processor"]) == 0:
error = f"No processor could be built for {config_class.__name__}."
fill_result_with_error(result, error, None, models_to_create)
logger.error(result["error"][0])
return result
try:
tiny_config = get_tiny_config(config_class)
except Exception as e:
error = f"Failed to get tiny config for {config_class.__name__}: {e}"
trace = traceback.format_exc()
fill_result_with_error(result, error, trace, models_to_create)
logger.error(result["error"][0])
return result
# Convert the processors (reduce vocabulary size, smaller image size, etc.)
processors = list(result["processor"].values())
processor_output_folder = os.path.join(output_dir, "processors")
try:
processors = convert_processors(processors, tiny_config, processor_output_folder, result)
except Exception:
error = "Failed to convert the processors."
trace = traceback.format_exc()
result["warnings"].append((error, trace))
if len(processors) == 0:
error = f"No processor is returned by `convert_processors` for {config_class.__name__}."
fill_result_with_error(result, error, None, models_to_create)
logger.error(result["error"][0])
return result
try:
config_overrides = get_config_overrides(config_class, processors)
except Exception as e:
error = f"Failure occurs while calling `get_config_overrides`: {e}"
trace = traceback.format_exc()
fill_result_with_error(result, error, trace, models_to_create)
logger.error(result["error"][0])
return result
# Just for us to see this easily in the report
if "vocab_size" in config_overrides:
result["vocab_size"] = config_overrides["vocab_size"]
# Update attributes that `vocab_size` involves
for k, v in config_overrides.items():
if hasattr(tiny_config, k):
setattr(tiny_config, k, v)
# So far, we only have to deal with `text_config`, as `config_overrides` contains text-related attributes only.
# `FuyuConfig` saves data under both FuyuConfig and its `text_config`. This is not good, but let's just update
# every involved fields to avoid potential failure.
if (
hasattr(tiny_config, "text_config")
and tiny_config.text_config is not None
and hasattr(tiny_config.text_config, k)
):
setattr(tiny_config.text_config, k, v)
        # If `text_config_dict` exists, we need to update its value here too in order to make
        # `save_pretrained -> from_pretrained` work.
if hasattr(tiny_config, "text_config_dict"):
tiny_config.text_config_dict[k] = v
if result["warnings"]:
logger.warning(result["warnings"][0][0])
# update `result["processor"]`
result["processor"] = {type(p).__name__: p.__class__.__name__ for p in processors}
for pytorch_arch in models_to_create["pytorch"]:
result["pytorch"][pytorch_arch.__name__] = {}
error = None
try:
model = build_model(pytorch_arch, tiny_config, output_dir=output_dir)
except Exception as e:
model = None
error = f"Failed to create the pytorch model for {pytorch_arch}: {e}"
trace = traceback.format_exc()
result["pytorch"][pytorch_arch.__name__]["model"] = model.__class__.__name__ if model is not None else None
result["pytorch"][pytorch_arch.__name__]["checkpoint"] = (
get_checkpoint_dir(output_dir, pytorch_arch) if model is not None else None
)
if error is not None:
result["pytorch"][pytorch_arch.__name__]["error"] = (error, trace)
logger.error(f"{pytorch_arch.__name__}: {error}")
for tensorflow_arch in models_to_create["tensorflow"]:
# Make PT/TF weights compatible
pt_arch_name = tensorflow_arch.__name__[2:] # Remove `TF`
pt_arch = getattr(transformers_module, pt_arch_name)
result["tensorflow"][tensorflow_arch.__name__] = {}
error = None
if pt_arch.__name__ in result["pytorch"] and result["pytorch"][pt_arch.__name__]["checkpoint"] is not None:
ckpt = get_checkpoint_dir(output_dir, pt_arch)
# Use the same weights from PyTorch.
try:
model = tensorflow_arch.from_pretrained(ckpt)
model.save_pretrained(ckpt)
except Exception as e:
# Conversion may fail. Let's not create a model with different weights to avoid confusion (for now).
model = None
error = f"Failed to convert the pytorch model to the tensorflow model for {pt_arch}: {e}"
trace = traceback.format_exc()
else:
try:
model = build_model(tensorflow_arch, tiny_config, output_dir=output_dir)
except Exception as e:
model = None
error = f"Failed to create the tensorflow model for {tensorflow_arch}: {e}"
trace = traceback.format_exc()
result["tensorflow"][tensorflow_arch.__name__]["model"] = (
model.__class__.__name__ if model is not None else None
)
result["tensorflow"][tensorflow_arch.__name__]["checkpoint"] = (
get_checkpoint_dir(output_dir, tensorflow_arch) if model is not None else None
)
if error is not None:
result["tensorflow"][tensorflow_arch.__name__]["error"] = (error, trace)
logger.error(f"{tensorflow_arch.__name__}: {error}")
if not result["error"]:
del result["error"]
if not result["warnings"]:
del result["warnings"]
return result
def build_tiny_model_summary(results, organization=None, token=None):
"""Build a summary: a dictionary of the form
{
model architecture name:
{
"tokenizer_classes": [...],
"processor_classes": [...],
"model_classes": [...],
}
..
}
"""
tiny_model_summary = {}
for config_name in results:
processors = [key for key, value in results[config_name]["processor"].items()]
tokenizer_classes = sorted([x for x in processors if x.endswith("TokenizerFast") or x.endswith("Tokenizer")])
processor_classes = sorted([x for x in processors if x not in tokenizer_classes])
for framework in FRAMEWORKS:
if framework not in results[config_name]:
continue
for arch_name in results[config_name][framework]:
model_classes = [arch_name]
base_arch_name = arch_name[2:] if arch_name.startswith("TF") else arch_name
# tiny model is not created for `arch_name`
if results[config_name][framework][arch_name]["model"] is None:
model_classes = []
if base_arch_name not in tiny_model_summary:
tiny_model_summary[base_arch_name] = {}
tiny_model_summary[base_arch_name].update(
{
"tokenizer_classes": tokenizer_classes,
"processor_classes": processor_classes,
}
)
tiny_model_summary[base_arch_name]["model_classes"] = sorted(
tiny_model_summary[base_arch_name].get("model_classes", []) + model_classes
)
if organization is not None:
repo_name = f"tiny-random-{base_arch_name}"
# composite models' checkpoints have more precise repo. names on the Hub.
if base_arch_name in COMPOSITE_MODELS:
repo_name = f"tiny-random-{COMPOSITE_MODELS[base_arch_name]}"
repo_id = f"{organization}/{repo_name}"
try:
commit_hash = hf_api.repo_info(repo_id, token=token).sha
except Exception:
# The directory is not created, but processor(s) is/are included in `results`.
logger.warning(f"Failed to get information for {repo_id}.\n{traceback.format_exc()}")
del tiny_model_summary[base_arch_name]
continue
tiny_model_summary[base_arch_name]["sha"] = commit_hash
return tiny_model_summary
def build_failed_report(results, include_warning=True):
failed_results = {}
for config_name in results:
if "error" in results[config_name]:
if config_name not in failed_results:
failed_results[config_name] = {}
failed_results[config_name] = {"error": results[config_name]["error"]}
if include_warning and "warnings" in results[config_name]:
if config_name not in failed_results:
failed_results[config_name] = {}
failed_results[config_name]["warnings"] = results[config_name]["warnings"]
for framework in FRAMEWORKS:
if framework not in results[config_name]:
continue
for arch_name in results[config_name][framework]:
if "error" in results[config_name][framework][arch_name]:
if config_name not in failed_results:
failed_results[config_name] = {}
if framework not in failed_results[config_name]:
failed_results[config_name][framework] = {}
if arch_name not in failed_results[config_name][framework]:
failed_results[config_name][framework][arch_name] = {}
error = results[config_name][framework][arch_name]["error"]
failed_results[config_name][framework][arch_name]["error"] = error
return failed_results
def build_simple_report(results):
text = ""
failed_text = ""
for config_name in results:
for framework in FRAMEWORKS:
if framework not in results[config_name]:
continue
for arch_name in results[config_name][framework]:
if "error" in results[config_name][framework][arch_name]:
result = results[config_name][framework][arch_name]["error"]
failed_text += f"{arch_name}: {result[0]}\n"
else:
result = ("OK",)
text += f"{arch_name}: {result[0]}\n"
return text, failed_text
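# Illustrative output (not executed): each line of the simple report looks like "BertModel: OK", while the failed
# report contains lines like "BertModel: <error message>" for architectures whose build raised an error.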
def update_tiny_model_summary_file(report_path):
with open(os.path.join(report_path, "tiny_model_summary.json")) as fp:
new_data = json.load(fp)
with open("tests/utils/tiny_model_summary.json") as fp:
data = json.load(fp)
for key, value in new_data.items():
if key not in data:
data[key] = value
else:
for attr in ["tokenizer_classes", "processor_classes", "model_classes"]:
# we might get duplication here. We will remove them below when creating `updated_data`.
data[key][attr].extend(value[attr])
new_sha = value.get("sha", None)
if new_sha is not None:
data[key]["sha"] = new_sha
updated_data = {}
for key in sorted(data.keys()):
updated_data[key] = {}
for attr, value in data[key].items():
# deduplication and sort
updated_data[key][attr] = sorted(set(value)) if attr != "sha" else value
with open(os.path.join(report_path, "updated_tiny_model_summary.json"), "w") as fp:
json.dump(updated_data, fp, indent=4, ensure_ascii=False)
def create_tiny_models(
output_path,
all,
model_types,
models_to_skip,
no_check,
upload,
organization,
token,
num_workers=1,
):
clone_path = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
if os.getcwd() != clone_path:
raise ValueError(f"This script should be run from the root of the clone of `transformers` {clone_path}")
report_path = os.path.join(output_path, "reports")
os.makedirs(report_path)
_pytorch_arch_mappings = [
x
for x in dir(transformers_module)
if x.startswith("MODEL_") and x.endswith("_MAPPING") and x != "MODEL_NAMES_MAPPING"
]
_tensorflow_arch_mappings = [
x for x in dir(transformers_module) if x.startswith("TF_MODEL_") and x.endswith("_MAPPING")
]
pytorch_arch_mappings = [getattr(transformers_module, x) for x in _pytorch_arch_mappings]
tensorflow_arch_mappings = [getattr(transformers_module, x) for x in _tensorflow_arch_mappings]
config_classes = CONFIG_MAPPING.values()
if not all:
config_classes = [CONFIG_MAPPING[model_type] for model_type in model_types]
# A map from config classes to tuples of processors (tokenizer, feature extractor, processor) classes
processor_type_map = {c: get_processor_types_from_config_class(c) for c in config_classes}
to_create = {}
for c in config_classes:
processors = processor_type_map[c]
models = get_architectures_from_config_class(c, pytorch_arch_mappings, models_to_skip)
tf_models = get_architectures_from_config_class(c, tensorflow_arch_mappings, models_to_skip)
if len(models) + len(tf_models) > 0:
to_create[c] = {"processor": processors, "pytorch": models, "tensorflow": tf_models}
results = {}
if num_workers <= 1:
for c, models_to_create in list(to_create.items()):
print(f"Create models for {c.__name__} ...")
result = build(c, models_to_create, output_dir=os.path.join(output_path, c.model_type))
results[c.__name__] = result
print("=" * 40)
else:
all_build_args = []
for c, models_to_create in list(to_create.items()):
all_build_args.append((c, models_to_create, os.path.join(output_path, c.model_type)))
with multiprocessing.Pool() as pool:
results = pool.starmap(build, all_build_args)
            results = {build_args[0].__name__: result for build_args, result in zip(all_build_args, results)}
if upload:
if organization is None:
raise ValueError("The argument `organization` could not be `None`. No model is uploaded")
to_upload = []
for model_type in os.listdir(output_path):
# This is the directory containing the reports
if model_type == "reports":
continue
for arch in os.listdir(os.path.join(output_path, model_type)):
if arch == "processors":
continue
to_upload.append(os.path.join(output_path, model_type, arch))
to_upload = sorted(to_upload)
upload_results = {}
if len(to_upload) > 0:
for model_dir in to_upload:
try:
upload_model(model_dir, organization, token)
except Exception as e:
error = f"Failed to upload {model_dir}. {e.__class__.__name__}: {e}"
logger.error(error)
upload_results[model_dir] = error
with open(os.path.join(report_path, "failed_uploads.json"), "w") as fp:
json.dump(upload_results, fp, indent=4)
# Build the tiny model summary file. The `tokenizer_classes` and `processor_classes` could be both empty lists.
# When using the items in this file to update the file `tests/utils/tiny_model_summary.json`, the model
# architectures with `tokenizer_classes` and `processor_classes` being both empty should **NOT** be added to
# `tests/utils/tiny_model_summary.json`.
tiny_model_summary = build_tiny_model_summary(results, organization=organization, token=token)
with open(os.path.join(report_path, "tiny_model_summary.json"), "w") as fp:
json.dump(tiny_model_summary, fp, indent=4)
with open(os.path.join(report_path, "tiny_model_creation_report.json"), "w") as fp:
json.dump(results, fp, indent=4)
# Build the warning/failure report (json format): same format as the complete `results` except this contains only
# warnings or errors.
failed_results = build_failed_report(results)
with open(os.path.join(report_path, "failed_report.json"), "w") as fp:
json.dump(failed_results, fp, indent=4)
simple_report, failed_report = build_simple_report(results)
# The simplified report: a .txt file with each line of format:
# {model architecture name}: {OK or error message}
with open(os.path.join(report_path, "simple_report.txt"), "w") as fp:
fp.write(simple_report)
# The simplified failure report: same above except this only contains line with errors
with open(os.path.join(report_path, "simple_failed_report.txt"), "w") as fp:
fp.write(failed_report)
update_tiny_model_summary_file(report_path=os.path.join(output_path, "reports"))
if __name__ == "__main__":
# This has to be `spawn` to avoid hanging forever!
multiprocessing.set_start_method("spawn")
def list_str(values):
return values.split(",")
parser = argparse.ArgumentParser()
parser.add_argument("--all", action="store_true", help="Will create all tiny models.")
parser.add_argument(
"--no_check",
action="store_true",
help="If set, will not check the validity of architectures. Use with caution.",
)
parser.add_argument(
"-m",
"--model_types",
type=list_str,
help="Comma-separated list of model type(s) from which the tiny models will be created.",
)
parser.add_argument(
"--models_to_skip",
type=list_str,
help=(
"Comma-separated list of model class names(s) from which the tiny models won't be created.\nThis is usually "
"the list of model classes that have their tiny versions already uploaded to the Hub."
),
)
parser.add_argument("--upload", action="store_true", help="If to upload the created tiny models to the Hub.")
parser.add_argument(
"--organization",
default=None,
type=str,
help="The organization on the Hub to which the tiny models will be uploaded.",
)
parser.add_argument(
"--token", default=None, type=str, help="A valid authentication token for HuggingFace Hub with write access."
)
parser.add_argument("output_path", type=Path, help="Path indicating where to store generated model.")
parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.")
args = parser.parse_args()
if not args.all and not args.model_types:
raise ValueError("Please provide at least one model type or pass `--all` to export all architectures.")
create_tiny_models(
args.output_path,
args.all,
args.model_types,
args.models_to_skip,
args.no_check,
args.upload,
args.organization,
args.token,
args.num_workers,
)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_model_tester.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
from get_test_info import get_tester_classes
if __name__ == "__main__":
failures = []
pattern = os.path.join("tests", "models", "**", "test_modeling_*.py")
test_files = glob.glob(pattern)
# TODO: deal with TF/Flax too
    test_files = [
        x for x in test_files if not os.path.basename(x).startswith(("test_modeling_tf_", "test_modeling_flax_"))
    ]
for test_file in test_files:
tester_classes = get_tester_classes(test_file)
for tester_class in tester_classes:
# A few tester classes don't have `parent` parameter in `__init__`.
            # TODO: deal with this better
try:
tester = tester_class(parent=None)
except Exception:
continue
if hasattr(tester, "get_config"):
config = tester.get_config()
for k, v in config.to_dict().items():
if isinstance(v, int):
target = None
if k in ["vocab_size"]:
target = 100
elif k in ["max_position_embeddings"]:
target = 128
elif k in ["hidden_size", "d_model"]:
target = 40
                        elif k in ["num_layers", "num_hidden_layers", "num_encoder_layers", "num_decoder_layers"]:
target = 5
if target is not None and v > target:
failures.append(
f"{tester_class.__name__} will produce a `config` of type `{config.__class__.__name__}`"
f' with config["{k}"] = {v} which is too large for testing! Set its value to be smaller'
f" than {target}."
)
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_table.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks the big table in the file docs/source/en/index.md and potentially updates it.
Use from the root of the repo with:
```bash
python utils/check_table.py
```
for a check that will error in case of inconsistencies (used by `make repo-consistency`).
To auto-fix issues run:
```bash
python utils/check_table.py
```
which is used by `make fix-copies`.
"""
import argparse
import collections
import os
import re
from typing import List, Tuple
from transformers.utils import direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_table.py
TRANSFORMERS_PATH = "src/transformers"
PATH_TO_DOCS = "docs/source/en"
REPO_PATH = "."
def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> Tuple[str, int, int, List[str]]:
"""
Find the text in filename between two prompts.
Args:
filename (`str`): The file to search into.
start_prompt (`str`): A string to look for at the start of the content searched.
end_prompt (`str`): A string that will mark the end of the content to look for.
Returns:
        `Tuple[str, int, int, List[str]]`: The content between the prompts, its start and end line indices, and all the lines of the file.
"""
with open(filename, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Find the start prompt.
start_index = 0
while not lines[start_index].startswith(start_prompt):
start_index += 1
start_index += 1
# Now go until the end prompt.
end_index = start_index
while not lines[end_index].startswith(end_prompt):
end_index += 1
end_index -= 1
while len(lines[start_index]) <= 1:
start_index += 1
while len(lines[end_index]) <= 1:
end_index -= 1
end_index += 1
return "".join(lines[start_index:end_index]), start_index, end_index, lines
# Regexes that match TF/Flax/PT model names. Add here suffixes that are used to identify models, separated by |
_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
_re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
# Will match any TF or Flax model too so need to be in an else branch after the two previous regexes.
_re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
# This is to make sure the transformers module imported is the one in the repo.
transformers_module = direct_transformers_import(TRANSFORMERS_PATH)
def camel_case_split(identifier: str) -> List[str]:
"""
Split a camel-cased name into words.
Args:
identifier (`str`): The camel-cased name to parse.
Returns:
        `List[str]`: The list of words in the identifier (as separated by capital letters).
Example:
```py
>>> camel_case_split("CamelCasedClass")
["Camel", "Cased", "Class"]
```
"""
# Regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python
matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier)
return [m.group(0) for m in matches]
def _center_text(text: str, width: int) -> str:
"""
Utility that will add spaces on the left and right of a text to make it centered for a given width.
Args:
text (`str`): The text to center.
width (`int`): The desired length of the result.
Returns:
`str`: A text of length `width` with the original `text` in the middle.
"""
    text_length = 2 if text == "✅" or text == "❌" else len(text)
left_indent = (width - text_length) // 2
right_indent = width - text_length - left_indent
return " " * left_indent + text + " " * right_indent
SPECIAL_MODEL_NAME_LINK_MAPPING = {
"Data2VecAudio": "[Data2VecAudio](model_doc/data2vec)",
"Data2VecText": "[Data2VecText](model_doc/data2vec)",
"Data2VecVision": "[Data2VecVision](model_doc/data2vec)",
"DonutSwin": "[DonutSwin](model_doc/donut)",
}
MODEL_NAMES_WITH_SAME_CONFIG = {
"BARThez": "BART",
"BARTpho": "BART",
"BertJapanese": "BERT",
"BERTweet": "BERT",
"BORT": "BERT",
"ByT5": "T5",
"CPM": "OpenAI GPT-2",
"DePlot": "Pix2Struct",
"DialoGPT": "OpenAI GPT-2",
"DiT": "BEiT",
"FLAN-T5": "T5",
"FLAN-UL2": "T5",
"HerBERT": "BERT",
"LayoutXLM": "LayoutLMv2",
"Llama2": "LLaMA",
"Llama3": "LLaMA",
"MADLAD-400": "T5",
"MatCha": "Pix2Struct",
"mBART-50": "mBART",
"Megatron-GPT2": "OpenAI GPT-2",
"mLUKE": "LUKE",
"MMS": "Wav2Vec2",
"NLLB": "M2M100",
"PhoBERT": "BERT",
"T5v1.1": "T5",
"TAPEX": "BART",
"UL2": "T5",
"Wav2Vec2Phoneme": "Wav2Vec2",
"XLM-V": "XLM-RoBERTa",
"XLS-R": "Wav2Vec2",
"XLSR-Wav2Vec2": "Wav2Vec2",
}
MODEL_NAMES_TO_IGNORE = ["CLIPVisionModel", "SiglipVisionModel", "ChineseCLIPVisionModel"]
def get_model_table_from_auto_modules() -> str:
"""
Generates an up-to-date model table from the content of the auto modules.
"""
    # Dictionary of model names to config class names.
    config_mapping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES
    model_name_to_config = {
        name: config_mapping_names[code]
        for code, name in transformers_module.MODEL_NAMES_MAPPING.items()
        if code in config_mapping_names
}
model_name_to_prefix = {name: config.replace("Config", "") for name, config in model_name_to_config.items()}
# Dictionaries flagging if each model prefix has a backend in PT/TF/Flax.
pt_models = collections.defaultdict(bool)
tf_models = collections.defaultdict(bool)
flax_models = collections.defaultdict(bool)
# Let's lookup through all transformers object (once).
for attr_name in dir(transformers_module):
lookup_dict = None
if _re_tf_models.match(attr_name) is not None:
lookup_dict = tf_models
attr_name = _re_tf_models.match(attr_name).groups()[0]
elif _re_flax_models.match(attr_name) is not None:
lookup_dict = flax_models
attr_name = _re_flax_models.match(attr_name).groups()[0]
elif _re_pt_models.match(attr_name) is not None:
lookup_dict = pt_models
attr_name = _re_pt_models.match(attr_name).groups()[0]
if lookup_dict is not None:
while len(attr_name) > 0:
if attr_name in model_name_to_prefix.values():
lookup_dict[attr_name] = True
break
# Try again after removing the last word in the name
attr_name = "".join(camel_case_split(attr_name)[:-1])
# Let's build that table!
model_names = list(model_name_to_config.keys()) + list(MODEL_NAMES_WITH_SAME_CONFIG.keys())
# model name to doc link mapping
model_names_mapping = transformers_module.models.auto.configuration_auto.MODEL_NAMES_MAPPING
model_name_to_link_mapping = {value: f"[{value}](model_doc/{key})" for key, value in model_names_mapping.items()}
# update mapping with special model names
model_name_to_link_mapping = {
k: SPECIAL_MODEL_NAME_LINK_MAPPING[k] if k in SPECIAL_MODEL_NAME_LINK_MAPPING else v
for k, v in model_name_to_link_mapping.items()
}
# MaskFormerSwin and TimmBackbone are backbones and so not meant to be loaded and used on their own. Instead, they define architectures which can be loaded using the AutoBackbone API.
names_to_exclude = ["MaskFormerSwin", "TimmBackbone", "Speech2Text2"]
model_names = [name for name in model_names if name not in names_to_exclude]
model_names.sort(key=str.lower)
columns = ["Model", "PyTorch support", "TensorFlow support", "Flax Support"]
# We'll need widths to properly display everything in the center (+2 is to leave one extra space on each side).
widths = [len(c) + 2 for c in columns]
widths[0] = max([len(doc_link) for doc_link in model_name_to_link_mapping.values()]) + 2
# Build the table per se
table = "|" + "|".join([_center_text(c, w) for c, w in zip(columns, widths)]) + "|\n"
# Use ":-----:" format to center-aligned table cell texts
table += "|" + "|".join([":" + "-" * (w - 2) + ":" for w in widths]) + "|\n"
    check = {True: "✅", False: "❌"}
for name in model_names:
if name in MODEL_NAMES_TO_IGNORE:
continue
if name in MODEL_NAMES_WITH_SAME_CONFIG.keys():
prefix = model_name_to_prefix[MODEL_NAMES_WITH_SAME_CONFIG[name]]
else:
prefix = model_name_to_prefix[name]
line = [
model_name_to_link_mapping[name],
check[pt_models[prefix]],
check[tf_models[prefix]],
check[flax_models[prefix]],
]
table += "|" + "|".join([_center_text(l, w) for l, w in zip(line, widths)]) + "|\n"
return table
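# Illustrative output (not executed): each generated row looks roughly like
# |            [ALBERT](model_doc/albert)            |       ✅       |         ✅         |      ✅      |
# with the first column padded to the longest doc link and the support marks centered in their columns.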
def check_model_table(overwrite=False):
"""
Check the model table in the index.md is consistent with the state of the lib and potentially fix it.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the table when it's not up to date.
"""
current_table, start_index, end_index, lines = _find_text_in_file(
filename=os.path.join(PATH_TO_DOCS, "index.md"),
start_prompt="<!--This table is updated automatically from the auto modules",
end_prompt="<!-- End table-->",
)
new_table = get_model_table_from_auto_modules()
if current_table != new_table:
if overwrite:
with open(os.path.join(PATH_TO_DOCS, "index.md"), "w", encoding="utf-8", newline="\n") as f:
f.writelines(lines[:start_index] + [new_table] + lines[end_index:])
else:
raise ValueError(
"The model table in the `index.md` has not been updated. Run `make fix-copies` to fix this."
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
check_model_table(args.fix_and_overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/split_doctest_jobs.py | # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is used to get the files against which we will run doc testing.
This uses `tests_fetcher.get_all_doctest_files` then groups the test files by their directory paths.
The files in `docs/source/en/model_doc` or `docs/source/en/tasks` are **NOT** grouped together with other files in the
same directory: the objective is to run doctest against them in independent GitHub Actions jobs.
Assume we are under `transformers` root directory:
To get a map (dictionary) between directory (or file) paths and the corresponding files
```bash
python utils/split_doctest_jobs.py
```
or to get a list of lists of directory (or file) paths
```bash
python utils/split_doctest_jobs.py --only_return_keys --num_splits 4
```
(this is used to allow GitHub Actions to generate more than 256 jobs using matrix)
"""
import argparse
from collections import defaultdict
from pathlib import Path
from tests_fetcher import get_all_doctest_files
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--only_return_keys",
action="store_true",
help="if to only return the keys (which is a list of list of files' directory or file paths).",
)
parser.add_argument(
"--num_splits",
type=int,
default=1,
help="the number of splits into which the (flat) list of direcotry/file paths will be split. This has effect only if `only_return_keys` is `True`.",
)
args = parser.parse_args()
all_doctest_files = get_all_doctest_files()
raw_test_collection_map = defaultdict(list)
for file in all_doctest_files:
file_dir = "/".join(Path(file).parents[0].parts)
raw_test_collection_map[file_dir].append(file)
refined_test_collection_map = {}
for file_dir in raw_test_collection_map.keys():
if file_dir in ["docs/source/en/model_doc", "docs/source/en/tasks"]:
for file in raw_test_collection_map[file_dir]:
refined_test_collection_map[file] = file
else:
refined_test_collection_map[file_dir] = " ".join(sorted(raw_test_collection_map[file_dir]))
sorted_file_dirs = sorted(refined_test_collection_map.keys())
test_collection_map = {}
for file_dir in sorted_file_dirs:
test_collection_map[file_dir] = refined_test_collection_map[file_dir]
num_jobs = len(test_collection_map)
num_jobs_per_splits = num_jobs // args.num_splits
file_directory_splits = []
end = 0
for idx in range(args.num_splits):
start = end
end = start + num_jobs_per_splits + (1 if idx < num_jobs % args.num_splits else 0)
file_directory_splits.append(sorted_file_dirs[start:end])
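    # Illustrative example (not executed): with 10 collected jobs and `--num_splits 4`, the splits have sizes
    # 3, 3, 2 and 2, since the first `num_jobs % num_splits` splits each receive one extra job.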
if args.only_return_keys:
print(file_directory_splits)
else:
print(dict(test_collection_map))
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/slow_documentation_tests.txt | docs/source/en/generation_strategies.md
docs/source/en/model_doc/code_llama.md
docs/source/en/model_doc/ctrl.md
docs/source/en/model_doc/kosmos-2.md
docs/source/en/model_doc/seamless_m4t.md
docs/source/en/model_doc/seamless_m4t_v2.md
docs/source/en/task_summary.md
docs/source/en/tasks/prompting.md
src/transformers/models/blip_2/modeling_blip_2.py
src/transformers/models/ctrl/modeling_ctrl.py
src/transformers/models/fuyu/modeling_fuyu.py
src/transformers/models/idefics2/modeling_idefics2.py
src/transformers/models/kosmos2/modeling_kosmos2.py
src/transformers/models/musicgen_melody/modeling_musicgen_melody.py
src/transformers/models/musicgen_melody/processing_musicgen_melody.py
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/get_test_info.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib
import os
import sys
# This is required to make the module import works (when the python process is running from the root of the repo)
sys.path.append(".")
r"""
The argument `test_file` in this file refers to a model test file. This should be a string of the form
`tests/models/*/test_modeling_*.py`.
"""
def get_module_path(test_file):
"""Return the module path of a model test file."""
components = test_file.split(os.path.sep)
if components[0:2] != ["tests", "models"]:
raise ValueError(
"`test_file` should start with `tests/models/` (with `/` being the OS specific path separator). Got "
f"{test_file} instead."
)
test_fn = components[-1]
if not test_fn.endswith("py"):
raise ValueError(f"`test_file` should be a python file. Got {test_fn} instead.")
if not test_fn.startswith("test_modeling_"):
raise ValueError(
f"`test_file` should point to a file name of the form `test_modeling_*.py`. Got {test_fn} instead."
)
components = components[:-1] + [test_fn.replace(".py", "")]
test_module_path = ".".join(components)
return test_module_path
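# Illustrative usage (not executed, assuming "/" as the OS path separator):
# get_module_path("tests/models/bert/test_modeling_bert.py") returns "tests.models.bert.test_modeling_bert".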
def get_test_module(test_file):
"""Get the module of a model test file."""
test_module_path = get_module_path(test_file)
test_module = importlib.import_module(test_module_path)
return test_module
def get_tester_classes(test_file):
"""Get all classes in a model test file whose names ends with `ModelTester`."""
tester_classes = []
test_module = get_test_module(test_file)
for attr in dir(test_module):
if attr.endswith("ModelTester"):
tester_classes.append(getattr(test_module, attr))
# sort with class names
return sorted(tester_classes, key=lambda x: x.__name__)
def get_test_classes(test_file):
"""Get all [test] classes in a model test file with attribute `all_model_classes` that are non-empty.
These are usually the (model) test classes containing the (non-slow) tests to run and are subclasses of one of the
classes `ModelTesterMixin`, `TFModelTesterMixin` or `FlaxModelTesterMixin`, as well as a subclass of
`unittest.TestCase`. Exceptions include `RagTestMixin` (and its subclasses).
"""
test_classes = []
test_module = get_test_module(test_file)
for attr in dir(test_module):
attr_value = getattr(test_module, attr)
# (TF/Flax)ModelTesterMixin is also an attribute in specific model test module. Let's exclude them by checking
# `all_model_classes` is not empty (which also excludes other special classes).
model_classes = getattr(attr_value, "all_model_classes", [])
if len(model_classes) > 0:
test_classes.append(attr_value)
# sort with class names
return sorted(test_classes, key=lambda x: x.__name__)
def get_model_classes(test_file):
"""Get all model classes that appear in `all_model_classes` attributes in a model test file."""
test_classes = get_test_classes(test_file)
model_classes = set()
for test_class in test_classes:
model_classes.update(test_class.all_model_classes)
# sort with class names
return sorted(model_classes, key=lambda x: x.__name__)
def get_model_tester_from_test_class(test_class):
"""Get the model tester class of a model test class."""
test = test_class()
if hasattr(test, "setUp"):
test.setUp()
model_tester = None
if hasattr(test, "model_tester"):
# `(TF/Flax)ModelTesterMixin` has this attribute default to `None`. Let's skip this case.
if test.model_tester is not None:
model_tester = test.model_tester.__class__
return model_tester
def get_test_classes_for_model(test_file, model_class):
"""Get all [test] classes in `test_file` that have `model_class` in their `all_model_classes`."""
test_classes = get_test_classes(test_file)
target_test_classes = []
for test_class in test_classes:
if model_class in test_class.all_model_classes:
target_test_classes.append(test_class)
# sort with class names
return sorted(target_test_classes, key=lambda x: x.__name__)
def get_tester_classes_for_model(test_file, model_class):
"""Get all model tester classes in `test_file` that are associated to `model_class`."""
test_classes = get_test_classes_for_model(test_file, model_class)
tester_classes = []
for test_class in test_classes:
tester_class = get_model_tester_from_test_class(test_class)
if tester_class is not None:
tester_classes.append(tester_class)
# sort with class names
return sorted(tester_classes, key=lambda x: x.__name__)
def get_test_to_tester_mapping(test_file):
"""Get a mapping from [test] classes to model tester classes in `test_file`.
This uses `get_test_classes` which may return classes that are NOT subclasses of `unittest.TestCase`.
"""
test_classes = get_test_classes(test_file)
test_tester_mapping = {test_class: get_model_tester_from_test_class(test_class) for test_class in test_classes}
return test_tester_mapping
def get_model_to_test_mapping(test_file):
"""Get a mapping from model classes to test classes in `test_file`."""
model_classes = get_model_classes(test_file)
model_test_mapping = {
model_class: get_test_classes_for_model(test_file, model_class) for model_class in model_classes
}
return model_test_mapping
def get_model_to_tester_mapping(test_file):
"""Get a mapping from model classes to model tester classes in `test_file`."""
model_classes = get_model_classes(test_file)
model_to_tester_mapping = {
model_class: get_tester_classes_for_model(test_file, model_class) for model_class in model_classes
}
return model_to_tester_mapping
def to_json(o):
"""Make the information succinct and easy to read.
Avoid the full class representation like `<class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>` when
displaying the results. Instead, we use class name (`BertForMaskedLM`) for the readability.
"""
if isinstance(o, str):
return o
elif isinstance(o, type):
return o.__name__
elif isinstance(o, (list, tuple)):
return [to_json(x) for x in o]
elif isinstance(o, dict):
return {to_json(k): to_json(v) for k, v in o.items()}
else:
return o
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/get_modified_files.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# this script reports modified .py files under the desired list of top-level sub-dirs passed as a list of arguments, e.g.:
# python ./utils/get_modified_files.py utils src tests examples
#
# it uses git to find the forking point and which files were modified - i.e. files not under git won't be considered
# since the output of this script is fed into Makefile commands it doesn't print a newline after the results
import re
import subprocess
import sys
fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8")
modified_files = (
subprocess.check_output(f"git diff --diff-filter=d --name-only {fork_point_sha}".split()).decode("utf-8").split()
)
joined_dirs = "|".join(sys.argv[1:])
regex = re.compile(rf"^({joined_dirs}).*?\.py$")
relevant_modified_files = [x for x in modified_files if regex.match(x)]
print(" ".join(relevant_modified_files), end="")
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_dummies.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is responsible for making sure the dummies in utils/dummies_xxx.py are up to date with the main init.
Why dummies? This is to make sure that a user can always import all objects from `transformers`, even if they don't
have the necessary extra libs installed. Those objects will then raise helpful error message whenever the user tries
to access one of their methods.
Usage (from the root of the repo):
Check that the dummy files are up to date (used in `make repo-consistency`):
```bash
python utils/check_dummies.py
```
Update the dummy files if needed (used in `make fix-copies`):
```bash
python utils/check_dummies.py --fix_and_overwrite
```
"""
import argparse
import os
import re
from typing import Dict, List, Optional
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_dummies.py
PATH_TO_TRANSFORMERS = "src/transformers"
# Matches is_xxx_available()
_re_backend = re.compile(r"is\_([a-z_]*)_available()")
# Matches from xxx import bla
_re_single_line_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n")
# Matches if not is_xxx_available()
_re_test_backend = re.compile(r"^\s+if\s+not\s+\(?is\_[a-z_]*\_available\(\)")
# Template for the dummy objects.
DUMMY_CONSTANT = """
{0} = None
"""
DUMMY_CLASS = """
class {0}(metaclass=DummyObject):
_backends = {1}
def __init__(self, *args, **kwargs):
requires_backends(self, {1})
"""
DUMMY_FUNCTION = """
def {0}(*args, **kwargs):
requires_backends({0}, {1})
"""
def find_backend(line: str) -> Optional[str]:
"""
Find one (or multiple) backend in a code line of the init.
Args:
line (`str`): A code line in an init file.
Returns:
Optional[`str`]: If one (or several) backend is found, returns it. In the case of multiple backends (the line
        contains `if is_xxx_available() and is_yyy_available()`) returns all backends joined on `_and_` (so
`xxx_and_yyy` for instance).
"""
if _re_test_backend.search(line) is None:
return None
backends = [b[0] for b in _re_backend.findall(line)]
backends.sort()
return "_and_".join(backends)
def read_init() -> Dict[str, List[str]]:
"""
Read the init and extract backend-specific objects.
Returns:
Dict[str, List[str]]: A dictionary mapping backend name to the list of object names requiring that backend.
"""
with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Get to the point where we do the actual imports for type checking
line_index = 0
while not lines[line_index].startswith("if TYPE_CHECKING"):
line_index += 1
backend_specific_objects = {}
# Go through the end of the file
while line_index < len(lines):
# If the line is an if is_backend_available, we grab all objects associated.
backend = find_backend(lines[line_index])
if backend is not None:
while not lines[line_index].startswith(" else:"):
line_index += 1
line_index += 1
objects = []
# Until we unindent, add backend objects to the list
while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8):
line = lines[line_index]
single_line_import_search = _re_single_line_import.search(line)
if single_line_import_search is not None:
# Single-line imports
objects.extend(single_line_import_search.groups()[0].split(", "))
elif line.startswith(" " * 12):
# Multiple-line imports (with 3 indent levels)
objects.append(line[12:-2])
line_index += 1
backend_specific_objects[backend] = objects
else:
line_index += 1
return backend_specific_objects
def create_dummy_object(name: str, backend_name: str) -> str:
"""
Create the code for a dummy object.
Args:
name (`str`): The name of the object.
backend_name (`str`): The name of the backend required for that object.
Returns:
`str`: The code of the dummy object.
"""
if name.isupper():
return DUMMY_CONSTANT.format(name)
elif name.islower():
return DUMMY_FUNCTION.format(name, backend_name)
else:
return DUMMY_CLASS.format(name, backend_name)
def create_dummy_files(backend_specific_objects: Optional[Dict[str, List[str]]] = None) -> Dict[str, str]:
"""
Create the content of the dummy files.
Args:
backend_specific_objects (`Dict[str, List[str]]`, *optional*):
The mapping of backend name to list of backend-specific objects. If not passed, will be obtained by calling
`read_init()`.
Returns:
`Dict[str, str]`: A dictionary mapping backend name to code of the corresponding backend file.
"""
if backend_specific_objects is None:
backend_specific_objects = read_init()
dummy_files = {}
for backend, objects in backend_specific_objects.items():
backend_name = "[" + ", ".join(f'"{b}"' for b in backend.split("_and_")) + "]"
dummy_file = "# This file is autogenerated by the command `make fix-copies`, do not edit.\n"
dummy_file += "from ..utils import DummyObject, requires_backends\n\n"
dummy_file += "\n".join([create_dummy_object(o, backend_name) for o in objects])
dummy_files[backend] = dummy_file
return dummy_files
def check_dummies(overwrite: bool = False):
"""
Check if the dummy files are up to date and maybe `overwrite` with the right content.
Args:
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the content of the dummy files. Will raise an error if they are not up to date
when `overwrite=False`.
"""
dummy_files = create_dummy_files()
# Special correspondence from backend name to the shortcut used in utils/dummy_xxx_objects.py
short_names = {"torch": "pt"}
# Locate actual dummy modules and read their content.
path = os.path.join(PATH_TO_TRANSFORMERS, "utils")
dummy_file_paths = {
backend: os.path.join(path, f"dummy_{short_names.get(backend, backend)}_objects.py")
for backend in dummy_files.keys()
}
actual_dummies = {}
for backend, file_path in dummy_file_paths.items():
if os.path.isfile(file_path):
with open(file_path, "r", encoding="utf-8", newline="\n") as f:
actual_dummies[backend] = f.read()
else:
actual_dummies[backend] = ""
# Compare actual with what they should be.
for backend in dummy_files.keys():
if dummy_files[backend] != actual_dummies[backend]:
if overwrite:
print(
f"Updating transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py as the main "
"__init__ has new objects."
)
with open(dummy_file_paths[backend], "w", encoding="utf-8", newline="\n") as f:
f.write(dummy_files[backend])
else:
raise ValueError(
"The main __init__ has objects that are not present in "
f"transformers.utils.dummy_{short_names.get(backend, backend)}_objects.py. Run `make fix-copies` "
"to fix this."
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
check_dummies(args.fix_and_overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_config_attributes.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import os
import re
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_config_docstrings.py
PATH_TO_TRANSFORMERS = "src/transformers"
# This is to make sure the transformers module imported is the one in the repo.
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING
SPECIAL_CASES_TO_ALLOW = {
# 'max_position_embeddings' is not used in modeling file, but needed for eval frameworks like Huggingface's lighteval (https://github.com/huggingface/lighteval/blob/af24080ea4f16eaf1683e353042a2dfc9099f038/src/lighteval/models/base_model.py#L264).
# periods and offsets are not used in the modeling file, but are used in the configuration file to define `layers_block_type` and `layers_num_experts`.
"JambaConfig": [
"max_position_embeddings",
"attn_layer_offset",
"attn_layer_period",
"expert_layer_offset",
"expert_layer_period",
],
# used to compute the property `self.chunk_length`
"EncodecConfig": ["overlap"],
# used to compute the property `self.layers_block_type`
"RecurrentGemmaConfig": ["block_types"],
# used as in the config to define `intermediate_size`
"MambaConfig": ["expand"],
# used as `self.bert_model = BertModel(config, ...)`
"DPRConfig": True,
"FuyuConfig": True,
# not used in modeling files, but it's important information
"FSMTConfig": ["langs"],
# used internally in the configuration class file
"GPTNeoConfig": ["attention_types"],
# used internally in the configuration class file
"EsmConfig": ["is_folding_model"],
# used during training (even though we don't have a training script for these models yet)
"Mask2FormerConfig": ["ignore_value"],
# `ignore_value` used during training (even though we don't have a training script for these models yet)
# `norm` used in the conversion script (despite not being used in the modeling file)
"OneFormerConfig": ["ignore_value", "norm"],
# used during preprocessing and collation, see `collating_graphormer.py`
"GraphormerConfig": ["spatial_pos_max"],
# used internally in the configuration class file
"T5Config": ["feed_forward_proj"],
# used internally in the configuration class file
# `tokenizer_class` get default value `T5Tokenizer` intentionally
"MT5Config": ["feed_forward_proj", "tokenizer_class"],
"UMT5Config": ["feed_forward_proj", "tokenizer_class"],
# used internally in the configuration class file
"LongT5Config": ["feed_forward_proj"],
# used internally in the configuration class file
"Pop2PianoConfig": ["feed_forward_proj"],
# used internally in the configuration class file
"SwitchTransformersConfig": ["feed_forward_proj"],
# having default values other than `1e-5` - we can't fix them without breaking
"BioGptConfig": ["layer_norm_eps"],
# having default values other than `1e-5` - we can't fix them without breaking
"GLPNConfig": ["layer_norm_eps"],
# having default values other than `1e-5` - we can't fix them without breaking
"SegformerConfig": ["layer_norm_eps"],
# having default values other than `1e-5` - we can't fix them without breaking
"CvtConfig": ["layer_norm_eps"],
# having default values other than `1e-5` - we can't fix them without breaking
"PerceiverConfig": ["layer_norm_eps"],
# used internally to calculate the feature size
"InformerConfig": ["num_static_real_features", "num_time_features"],
# used internally to calculate the feature size
"TimeSeriesTransformerConfig": ["num_static_real_features", "num_time_features"],
# used internally to calculate the feature size
"AutoformerConfig": ["num_static_real_features", "num_time_features"],
# used internally to calculate `mlp_dim`
"SamVisionConfig": ["mlp_ratio"],
# For (head) training, but so far not implemented
"ClapAudioConfig": ["num_classes"],
# Not used, but providing useful information to users
"SpeechT5HifiGanConfig": ["sampling_rate"],
# used internally in the configuration class file
"UdopConfig": ["feed_forward_proj"],
# Actually used in the config or generation config, and in that case necessary for generating the sub-components
"SeamlessM4TConfig": [
"max_new_tokens",
"t2u_max_new_tokens",
"t2u_decoder_attention_heads",
"t2u_decoder_ffn_dim",
"t2u_decoder_layers",
"t2u_encoder_attention_heads",
"t2u_encoder_ffn_dim",
"t2u_encoder_layers",
"t2u_max_position_embeddings",
],
# Actually used in the config or generation config, and in that case necessary for generating the sub-components
"SeamlessM4Tv2Config": [
"max_new_tokens",
"t2u_decoder_attention_heads",
"t2u_decoder_ffn_dim",
"t2u_decoder_layers",
"t2u_encoder_attention_heads",
"t2u_encoder_ffn_dim",
"t2u_encoder_layers",
"t2u_max_position_embeddings",
"t2u_variance_pred_dropout",
"t2u_variance_predictor_embed_dim",
"t2u_variance_predictor_hidden_dim",
"t2u_variance_predictor_kernel_size",
],
}
# TODO (ydshieh): Check the failing cases, try to fix them or move some cases to the above block once we are sure
SPECIAL_CASES_TO_ALLOW.update(
{
"CLIPSegConfig": True,
"DeformableDetrConfig": True,
"DetaConfig": True,
"DinatConfig": True,
"DonutSwinConfig": True,
"EfficientFormerConfig": True,
"FastSpeech2ConformerConfig": True,
"FSMTConfig": True,
"JukeboxConfig": True,
"LayoutLMv2Config": True,
"MaskFormerSwinConfig": True,
"MT5Config": True,
# For backward compatibility with trust remote code models
"MptConfig": True,
"MptAttentionConfig": True,
"NatConfig": True,
"OneFormerConfig": True,
"PerceiverConfig": True,
"RagConfig": True,
"SpeechT5Config": True,
"SwinConfig": True,
"Swin2SRConfig": True,
"Swinv2Config": True,
"SwitchTransformersConfig": True,
"TableTransformerConfig": True,
"TapasConfig": True,
"UniSpeechConfig": True,
"UniSpeechSatConfig": True,
"WavLMConfig": True,
"WhisperConfig": True,
# TODO: @Arthur (for `alignment_head` and `alignment_layer`)
"JukeboxPriorConfig": True,
# TODO: @Younes (for `is_decoder`)
"Pix2StructTextConfig": True,
"IdeficsConfig": True,
"IdeficsVisionConfig": True,
"IdeficsPerceiverConfig": True,
}
)
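# For illustration (hypothetical attribute name): an `__init__` argument such as `hidden_size` counts as "used" when a
# modeling file in the same directory contains e.g. `config.hidden_size`, `getattr(config, "hidden_size", ...)` or
# `getattr(self.config, "hidden_size", ...)`; see `check_attribute_being_used` below.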
def check_attribute_being_used(config_class, attributes, default_value, source_strings):
"""Check if any name in `attributes` is used in one of the strings in `source_strings`
Args:
config_class (`type`):
The configuration class for which the arguments in its `__init__` will be checked.
attributes (`List[str]`):
The name of an argument (or attribute) and its variant names if any.
default_value (`Any`):
A default value for the attribute in `attributes` assigned in the `__init__` of `config_class`.
source_strings (`List[str]`):
The python source code strings in the same modeling directory where `config_class` is defined. The file
containing the definition of `config_class` should be excluded.
"""
attribute_used = False
for attribute in attributes:
for modeling_source in source_strings:
# check if we can find `config.xxx`, `getattr(config, "xxx", ...)` or `getattr(self.config, "xxx", ...)`
if (
f"config.{attribute}" in modeling_source
or f'getattr(config, "{attribute}"' in modeling_source
or f'getattr(self.config, "{attribute}"' in modeling_source
):
attribute_used = True
# Deal with multi-line cases
elif (
re.search(
rf'getattr[ \t\v\n\r\f]*\([ \t\v\n\r\f]*(self\.)?config,[ \t\v\n\r\f]*"{attribute}"',
modeling_source,
)
is not None
):
attribute_used = True
# `SequenceSummary` is called with `SequenceSummary(config)`
elif attribute in [
"summary_type",
"summary_use_proj",
"summary_activation",
"summary_last_dropout",
"summary_proj_to_labels",
"summary_first_dropout",
]:
if "SequenceSummary" in modeling_source:
attribute_used = True
if attribute_used:
break
if attribute_used:
break
# common and important attributes, even if they do not always appear in the modeling files
attributes_to_allow = [
"bos_index",
"eos_index",
"pad_index",
"unk_index",
"mask_index",
"image_size",
"use_cache",
"out_features",
"out_indices",
"sampling_rate",
# backbone related arguments passed to load_backbone
"use_pretrained_backbone",
"backbone",
"backbone_config",
"use_timm_backbone",
"backbone_kwargs",
]
attributes_used_in_generation = ["encoder_no_repeat_ngram_size"]
# Special cases to be allowed
case_allowed = True
if not attribute_used:
case_allowed = False
for attribute in attributes:
# Allow if the default value in the configuration class is different from the one in `PretrainedConfig`
if attribute in ["is_encoder_decoder"] and default_value is True:
case_allowed = True
elif attribute in ["tie_word_embeddings"] and default_value is False:
case_allowed = True
# Allow cases without checking the default value in the configuration class
elif attribute in attributes_to_allow + attributes_used_in_generation:
case_allowed = True
elif attribute.endswith("_token_id"):
case_allowed = True
# configuration class specific cases
if not case_allowed:
allowed_cases = SPECIAL_CASES_TO_ALLOW.get(config_class.__name__, [])
case_allowed = allowed_cases is True or attribute in allowed_cases
return attribute_used or case_allowed
def check_config_attributes_being_used(config_class):
"""Check the arguments in `__init__` of `config_class` are used in the modeling files in the same directory
Args:
config_class (`type`):
The configuration class for which the arguments in its `__init__` will be checked.
"""
# Get the parameters in `__init__` of the configuration class, and the default values if any
signature = dict(inspect.signature(config_class.__init__).parameters)
parameter_names = [x for x in list(signature.keys()) if x not in ["self", "kwargs"]]
parameter_defaults = [signature[param].default for param in parameter_names]
# If `attribute_map` exists, an attribute can have different names to be used in the modeling files, and as long
# as one variant is used, the test should pass
reversed_attribute_map = {}
if len(config_class.attribute_map) > 0:
reversed_attribute_map = {v: k for k, v in config_class.attribute_map.items()}
# Get the path to modeling source files
config_source_file = inspect.getsourcefile(config_class)
model_dir = os.path.dirname(config_source_file)
# Let's check against all frameworks: as long as one framework uses an attribute, we are good.
modeling_paths = [os.path.join(model_dir, fn) for fn in os.listdir(model_dir) if fn.startswith("modeling_")]
# Get the source code strings
modeling_sources = []
for path in modeling_paths:
if os.path.isfile(path):
with open(path, encoding="utf8") as fp:
modeling_sources.append(fp.read())
unused_attributes = []
for config_param, default_value in zip(parameter_names, parameter_defaults):
# `attributes` here is all the variant names for `config_param`
attributes = [config_param]
# some configuration classes have non-empty `attribute_map`, and both names could be used in the
# corresponding modeling files. As long as one of them appears, it is fine.
if config_param in reversed_attribute_map:
attributes.append(reversed_attribute_map[config_param])
if not check_attribute_being_used(config_class, attributes, default_value, modeling_sources):
unused_attributes.append(attributes[0])
return sorted(unused_attributes)
def check_config_attributes():
"""Check the arguments in `__init__` of all configuration classes are used in python files"""
configs_with_unused_attributes = {}
for _config_class in list(CONFIG_MAPPING.values()):
# Skip deprecated models
if "models.deprecated" in _config_class.__module__:
continue
# Some config classes are not in `CONFIG_MAPPING` (e.g. `CLIPVisionConfig`, `Blip2VisionConfig`, etc.)
config_classes_in_module = [
cls
for name, cls in inspect.getmembers(
inspect.getmodule(_config_class),
lambda x: inspect.isclass(x)
and issubclass(x, PretrainedConfig)
and inspect.getmodule(x) == inspect.getmodule(_config_class),
)
]
for config_class in config_classes_in_module:
unused_attributes = check_config_attributes_being_used(config_class)
if len(unused_attributes) > 0:
configs_with_unused_attributes[config_class.__name__] = unused_attributes
if len(configs_with_unused_attributes) > 0:
error = "The following configuration classes contain unused attributes in the corresponding modeling files:\n"
for name, attributes in configs_with_unused_attributes.items():
error += f"{name}: {attributes}\n"
raise ValueError(error)
if __name__ == "__main__":
check_config_attributes()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/download_glue_data.py | """ Script for downloading all GLUE data.
Original source: https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e
Note: for legal reasons, we are unable to host MRPC.
You can either use the version hosted by the SentEval team, which is already tokenized,
or you can download the original data from (https://download.microsoft.com/download/D/4/6/D46FF87A-F6B9-4252-AA8B-3604ED519838/MSRParaphraseCorpus.msi) and extract the data from it manually.
For Windows users, you can run the .msi file. For Mac and Linux users, consider an external library such as 'cabextract' (see below for an example).
You should then rename and place specific files in a folder (see below for an example).
mkdir MRPC
cabextract MSRParaphraseCorpus.msi -d MRPC
cat MRPC/_2DEC3DBE877E4DB192D17C0256E90F1D | tr -d $'\r' > MRPC/msr_paraphrase_train.txt
cat MRPC/_D7B391F9EAFF4B1B8BCE8F21B20B1B61 | tr -d $'\r' > MRPC/msr_paraphrase_test.txt
rm MRPC/_*
rm MSRParaphraseCorpus.msi
1/30/19: It looks like SentEval is no longer hosting their extracted and tokenized MRPC data, so you'll need to download the data from the original source for now.
2/11/19: It looks like SentEval actually *is* hosting the extracted data. Hooray!
"""
import argparse
import os
import sys
import urllib.request
import zipfile
TASKS = ["CoLA", "SST", "MRPC", "QQP", "STS", "MNLI", "SNLI", "QNLI", "RTE", "WNLI", "diagnostic"]
TASK2PATH = {
"CoLA": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4",
"SST": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8",
"MRPC": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc",
"QQP": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQQP.zip?alt=media&token=700c6acf-160d-4d89-81d1-de4191d02cb5",
"STS": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSTS-B.zip?alt=media&token=bddb94a7-8706-4e0d-a694-1109e12273b5",
"MNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FMNLI.zip?alt=media&token=50329ea1-e339-40e2-809c-10c40afff3ce",
"SNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSNLI.zip?alt=media&token=4afcfbb2-ff0c-4b2d-a09a-dbf07926f4df",
"QNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQNLIv2.zip?alt=media&token=6fdcf570-0fc5-4631-8456-9505272d1601",
"RTE": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb",
"WNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FWNLI.zip?alt=media&token=068ad0a0-ded7-4bd7-99a5-5e00222e0faf",
"diagnostic": "https://storage.googleapis.com/mtl-sentence-representations.appspot.com/tsvsWithoutLabels%2FAX.tsv?GoogleAccessId=firebase-adminsdk-0khhl@mtl-sentence-representations.iam.gserviceaccount.com&Expires=2498860800&Signature=DuQ2CSPt2Yfre0C%2BiISrVYrIFaZH1Lc7hBVZDD4ZyR7fZYOMNOUGpi8QxBmTNOrNPjR3z1cggo7WXFfrgECP6FBJSsURv8Ybrue8Ypt%2FTPxbuJ0Xc2FhDi%2BarnecCBFO77RSbfuz%2Bs95hRrYhTnByqu3U%2FYZPaj3tZt5QdfpH2IUROY8LiBXoXS46LE%2FgOQc%2FKN%2BA9SoscRDYsnxHfG0IjXGwHN%2Bf88q6hOmAxeNPx6moDulUF6XMUAaXCSFU%2BnRO2RDL9CapWxj%2BDl7syNyHhB7987hZ80B%2FwFkQ3MEs8auvt5XW1%2Bd4aCU7ytgM69r8JDCwibfhZxpaa4gd50QXQ%3D%3D",
}
MRPC_TRAIN = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt"
MRPC_TEST = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt"
def download_and_extract(task, data_dir):
print(f"Downloading and extracting {task}...")
data_file = f"{task}.zip"
urllib.request.urlretrieve(TASK2PATH[task], data_file)
with zipfile.ZipFile(data_file) as zip_ref:
zip_ref.extractall(data_dir)
os.remove(data_file)
print("\tCompleted!")
def format_mrpc(data_dir, path_to_data):
print("Processing MRPC...")
mrpc_dir = os.path.join(data_dir, "MRPC")
if not os.path.isdir(mrpc_dir):
os.mkdir(mrpc_dir)
if path_to_data:
mrpc_train_file = os.path.join(path_to_data, "msr_paraphrase_train.txt")
mrpc_test_file = os.path.join(path_to_data, "msr_paraphrase_test.txt")
else:
print("Local MRPC data not specified, downloading data from %s" % MRPC_TRAIN)
mrpc_train_file = os.path.join(mrpc_dir, "msr_paraphrase_train.txt")
mrpc_test_file = os.path.join(mrpc_dir, "msr_paraphrase_test.txt")
urllib.request.urlretrieve(MRPC_TRAIN, mrpc_train_file)
urllib.request.urlretrieve(MRPC_TEST, mrpc_test_file)
if not os.path.isfile(mrpc_train_file):
raise ValueError(f"Train data not found at {mrpc_train_file}")
if not os.path.isfile(mrpc_test_file):
raise ValueError(f"Test data not found at {mrpc_test_file}")
urllib.request.urlretrieve(TASK2PATH["MRPC"], os.path.join(mrpc_dir, "dev_ids.tsv"))
dev_ids = []
with open(os.path.join(mrpc_dir, "dev_ids.tsv"), encoding="utf8") as ids_fh:
for row in ids_fh:
dev_ids.append(row.strip().split("\t"))
with open(mrpc_train_file, encoding="utf8") as data_fh, open(
os.path.join(mrpc_dir, "train.tsv"), "w", encoding="utf8"
) as train_fh, open(os.path.join(mrpc_dir, "dev.tsv"), "w", encoding="utf8") as dev_fh:
header = data_fh.readline()
train_fh.write(header)
dev_fh.write(header)
for row in data_fh:
label, id1, id2, s1, s2 = row.strip().split("\t")
if [id1, id2] in dev_ids:
dev_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))
else:
train_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))
with open(mrpc_test_file, encoding="utf8") as data_fh, open(
os.path.join(mrpc_dir, "test.tsv"), "w", encoding="utf8"
) as test_fh:
header = data_fh.readline()
test_fh.write("index\t#1 ID\t#2 ID\t#1 String\t#2 String\n")
for idx, row in enumerate(data_fh):
label, id1, id2, s1, s2 = row.strip().split("\t")
test_fh.write("%d\t%s\t%s\t%s\t%s\n" % (idx, id1, id2, s1, s2))
print("\tCompleted!")
def download_diagnostic(data_dir):
print("Downloading and extracting diagnostic...")
if not os.path.isdir(os.path.join(data_dir, "diagnostic")):
os.mkdir(os.path.join(data_dir, "diagnostic"))
data_file = os.path.join(data_dir, "diagnostic", "diagnostic.tsv")
urllib.request.urlretrieve(TASK2PATH["diagnostic"], data_file)
print("\tCompleted!")
return
def get_tasks(task_names):
task_names = task_names.split(",")
if "all" in task_names:
tasks = TASKS
else:
tasks = []
for task_name in task_names:
if task_name not in TASKS:
raise ValueError(f"Task {task_name} not found!")
tasks.append(task_name)
return tasks
def main(arguments):
parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", help="directory to save data to", type=str, default="glue_data")
parser.add_argument(
"--tasks", help="tasks to download data for as a comma separated string", type=str, default="all"
)
parser.add_argument(
"--path_to_mrpc",
help="path to directory containing extracted MRPC data, msr_paraphrase_train.txt and msr_paraphrase_text.txt",
type=str,
default="",
)
args = parser.parse_args(arguments)
if not os.path.isdir(args.data_dir):
os.mkdir(args.data_dir)
tasks = get_tasks(args.tasks)
for task in tasks:
if task == "MRPC":
format_mrpc(args.data_dir, args.path_to_mrpc)
elif task == "diagnostic":
download_diagnostic(args.data_dir)
else:
download_and_extract(task, args.data_dir)
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/sort_auto_mappings.py | # coding=utf-8
# Copyright 2022 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that sorts the names in the auto mappings defined in the auto modules in alphabetical order.
Use from the root of the repo with:
```bash
python utils/sort_auto_mappings.py
```
to auto-fix all the auto mappings (used in `make style`).
To only check if the mappings are properly sorted (as used in `make quality`), do:
```bash
python utils/sort_auto_mappings.py --check_only
```
"""
import argparse
import os
import re
from typing import Optional
# Paths are set with the intent you should run this script from the root of the repo.
PATH_TO_AUTO_MODULE = "src/transformers/models/auto"
# re pattern that matches mapping introductions:
# SUPER_MODEL_MAPPING_NAMES = OrderedDict or SUPER_MODEL_MAPPING = OrderedDict
_re_intro_mapping = re.compile(r"[A-Z_]+_MAPPING(\s+|_[A-Z_]+\s+)=\s+OrderedDict")
# re pattern that matches identifiers in mappings
_re_identifier = re.compile(r'\s*\(\s*"(\S[^"]+)"')
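# For illustration (hypothetical entry): in a mapping line such as `("albert", "AlbertConfig"),`, `_re_identifier`
# captures "albert", which is the key the blocks are sorted on in `sort_auto_mapping` below.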
def sort_auto_mapping(fname: str, overwrite: bool = False) -> Optional[bool]:
"""
Sort all auto mappings in a file.
Args:
fname (`str`): The name of the file where we want to sort auto-mappings.
overwrite (`bool`, *optional*, defaults to `False`): Whether or not to fix and overwrite the file.
Returns:
`Optional[bool]`: Returns `None` if `overwrite=True`. Otherwise returns `True` if the file has an auto-mapping
improperly sorted, `False` if the file is okay.
"""
with open(fname, "r", encoding="utf-8") as f:
content = f.read()
lines = content.split("\n")
new_lines = []
line_idx = 0
while line_idx < len(lines):
if _re_intro_mapping.search(lines[line_idx]) is not None:
# Start of a new mapping!
indent = len(re.search(r"^(\s*)\S", lines[line_idx]).groups()[0]) + 8
while not lines[line_idx].startswith(" " * indent + "("):
new_lines.append(lines[line_idx])
line_idx += 1
blocks = []
while lines[line_idx].strip() != "]":
# Blocks either fit in one line or not
if lines[line_idx].strip() == "(":
start_idx = line_idx
while not lines[line_idx].startswith(" " * indent + ")"):
line_idx += 1
blocks.append("\n".join(lines[start_idx : line_idx + 1]))
else:
blocks.append(lines[line_idx])
line_idx += 1
# Sort blocks by their identifiers
blocks = sorted(blocks, key=lambda x: _re_identifier.search(x).groups()[0])
new_lines += blocks
else:
new_lines.append(lines[line_idx])
line_idx += 1
if overwrite:
with open(fname, "w", encoding="utf-8") as f:
f.write("\n".join(new_lines))
else:
return "\n".join(new_lines) != content
def sort_all_auto_mappings(overwrite: bool = False):
"""
Sort all auto mappings in the library.
Args:
overwrite (`bool`, *optional*, defaults to `False`): Whether or not to fix and overwrite the file.
"""
fnames = [os.path.join(PATH_TO_AUTO_MODULE, f) for f in os.listdir(PATH_TO_AUTO_MODULE) if f.endswith(".py")]
diffs = [sort_auto_mapping(fname, overwrite=overwrite) for fname in fnames]
if not overwrite and any(diffs):
failures = [f for f, d in zip(fnames, diffs) if d]
raise ValueError(
f"The following files have auto mappings that need sorting: {', '.join(failures)}. Run `make style` to fix"
" this."
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.")
args = parser.parse_args()
sort_all_auto_mappings(not args.check_only)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_inits.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that checks the custom inits of Transformers are well-defined: Transformers uses init files that delay the
import of an object to when it's actually needed. This is to avoid the main init importing all models, which would
make the line `import transformers` very slow when the user has all optional dependencies installed. The inits with
delayed imports have two halves: one defining a dictionary `_import_structure` which maps modules to the names of the
objects in each module, and one in `TYPE_CHECKING` which looks like a normal init for type-checkers. The goal of this
script is to check the objects defined in both halves are the same.
This also checks the main init properly references all submodules, even if it doesn't import anything from them: every
submodule should be defined as a key of `_import_structure`, potentially with an empty list as value, or the submodule
won't be importable.
Use from the root of the repo with:
```bash
python utils/check_inits.py
```
for a check that will error in case of inconsistencies (used by `make repo-consistency`).
There is no auto-fix possible here sadly :-(
"""
import collections
import os
import re
from pathlib import Path
from typing import Dict, List, Optional, Tuple
# Path is set with the intent you should run this script from the root of the repo.
PATH_TO_TRANSFORMERS = "src/transformers"
# Matches is_xxx_available()
_re_backend = re.compile(r"is\_([a-z_]*)_available()")
# Catches a one-line _import_struct = {xxx}
_re_one_line_import_struct = re.compile(r"^_import_structure\s+=\s+\{([^\}]+)\}")
# Catches a line with a key-values pattern: "bla": ["foo", "bar"]
_re_import_struct_key_value = re.compile(r'\s+"\S*":\s+\[([^\]]*)\]')
# Catches a line if not is_foo_available
_re_test_backend = re.compile(r"^\s*if\s+not\s+is\_[a-z_]*\_available\(\)")
# Catches a line _import_struct["bla"].append("foo")
_re_import_struct_add_one = re.compile(r'^\s*_import_structure\["\S*"\]\.append\("(\S*)"\)')
# Catches a line _import_struct["bla"].extend(["foo", "bar"]) or _import_struct["bla"] = ["foo", "bar"]
_re_import_struct_add_many = re.compile(r"^\s*_import_structure\[\S*\](?:\.extend\(|\s*=\s+)\[([^\]]*)\]")
# Catches a line with an object between quotes and a comma: "MyModel",
_re_quote_object = re.compile(r'^\s+"([^"]+)",')
# Catches a line with objects between brackets only: ["foo", "bar"],
_re_between_brackets = re.compile(r"^\s+\[([^\]]+)\]")
# Catches a line with from foo import bar, bla, boo
_re_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n")
# Catches a line with try:
_re_try = re.compile(r"^\s*try:")
# Catches a line with else:
_re_else = re.compile(r"^\s*else:")
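# For illustration, these patterns are meant to parse delayed inits that look roughly like this (hypothetical module
# and object names):
#
#     _import_structure = {"configuration_foo": ["FooConfig"]}
#     try:
#         if not is_torch_available():
#             raise OptionalDependencyNotAvailable()
#     except OptionalDependencyNotAvailable:
#         pass
#     else:
#         _import_structure["modeling_foo"] = ["FooModel"]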
def find_backend(line: str) -> Optional[str]:
"""
Find one (or multiple) backend in a code line of the init.
Args:
line (`str`): A code line of the main init.
Returns:
Optional[`str`]: If one (or several) backend is found, returns it. In the case of multiple backends (the line
contains `if is_xxx_available() and is_yyy_available()`) returns all backends joined on `_and_` (so
`xxx_and_yyy` for instance).
"""
if _re_test_backend.search(line) is None:
return None
backends = [b[0] for b in _re_backend.findall(line)]
backends.sort()
return "_and_".join(backends)
def parse_init(init_file) -> Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]:
"""
Read an init_file and parse (per backend) the `_import_structure` objects defined and the `TYPE_CHECKING` objects
defined.
Args:
init_file (`str`): Path to the init file to inspect.
Returns:
`Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]`: A tuple of two dictionaries mapping backends to lists of
imported objects, one for the `_import_structure` part of the init and one for the `TYPE_CHECKING` part of the
init. Returns `None` if the init is not a custom init.
"""
with open(init_file, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Get to the `_import_structure` definition.
line_index = 0
while line_index < len(lines) and not lines[line_index].startswith("_import_structure = {"):
line_index += 1
# If this is a traditional init, just return.
if line_index >= len(lines):
return None
# First grab the objects without a specific backend in _import_structure
objects = []
while not lines[line_index].startswith("if TYPE_CHECKING") and find_backend(lines[line_index]) is None:
line = lines[line_index]
# If we have everything on a single line, let's deal with it.
if _re_one_line_import_struct.search(line):
content = _re_one_line_import_struct.search(line).groups()[0]
imports = re.findall(r"\[([^\]]+)\]", content)
for imp in imports:
objects.extend([obj[1:-1] for obj in imp.split(", ")])
line_index += 1
continue
single_line_import_search = _re_import_struct_key_value.search(line)
if single_line_import_search is not None:
imports = [obj[1:-1] for obj in single_line_import_search.groups()[0].split(", ") if len(obj) > 0]
objects.extend(imports)
elif line.startswith(" " * 8 + '"'):
objects.append(line[9:-3])
line_index += 1
# Those are stored with the key "none".
import_dict_objects = {"none": objects}
# Let's continue with backend-specific objects in _import_structure
while not lines[line_index].startswith("if TYPE_CHECKING"):
# If the line is an if not is_backend_available, we grab all objects associated.
backend = find_backend(lines[line_index])
# Check if the backend declaration is inside a try block:
if _re_try.search(lines[line_index - 1]) is None:
backend = None
if backend is not None:
line_index += 1
# Scroll until we hit the else block of try-except-else
while _re_else.search(lines[line_index]) is None:
line_index += 1
line_index += 1
objects = []
# Until we unindent, add backend objects to the list
while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 4):
line = lines[line_index]
if _re_import_struct_add_one.search(line) is not None:
objects.append(_re_import_struct_add_one.search(line).groups()[0])
elif _re_import_struct_add_many.search(line) is not None:
imports = _re_import_struct_add_many.search(line).groups()[0].split(", ")
imports = [obj[1:-1] for obj in imports if len(obj) > 0]
objects.extend(imports)
elif _re_between_brackets.search(line) is not None:
imports = _re_between_brackets.search(line).groups()[0].split(", ")
imports = [obj[1:-1] for obj in imports if len(obj) > 0]
objects.extend(imports)
elif _re_quote_object.search(line) is not None:
objects.append(_re_quote_object.search(line).groups()[0])
elif line.startswith(" " * 8 + '"'):
objects.append(line[9:-3])
elif line.startswith(" " * 12 + '"'):
objects.append(line[13:-3])
line_index += 1
import_dict_objects[backend] = objects
else:
line_index += 1
# At this stage we are in the TYPE_CHECKING part, first grab the objects without a specific backend
objects = []
while (
line_index < len(lines)
and find_backend(lines[line_index]) is None
and not lines[line_index].startswith("else")
):
line = lines[line_index]
single_line_import_search = _re_import.search(line)
if single_line_import_search is not None:
objects.extend(single_line_import_search.groups()[0].split(", "))
elif line.startswith(" " * 8):
objects.append(line[8:-2])
line_index += 1
type_hint_objects = {"none": objects}
# Let's continue with backend-specific objects
while line_index < len(lines):
# If the line is an if is_backend_available, we grab all objects associated.
backend = find_backend(lines[line_index])
# Check if the backend declaration is inside a try block:
if _re_try.search(lines[line_index - 1]) is None:
backend = None
if backend is not None:
line_index += 1
# Scroll until we hit the else block of try-except-else
while _re_else.search(lines[line_index]) is None:
line_index += 1
line_index += 1
objects = []
# Until we unindent, add backend objects to the list
while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8):
line = lines[line_index]
single_line_import_search = _re_import.search(line)
if single_line_import_search is not None:
objects.extend(single_line_import_search.groups()[0].split(", "))
elif line.startswith(" " * 12):
objects.append(line[12:-2])
line_index += 1
type_hint_objects[backend] = objects
else:
line_index += 1
return import_dict_objects, type_hint_objects
def analyze_results(import_dict_objects: Dict[str, List[str]], type_hint_objects: Dict[str, List[str]]) -> List[str]:
"""
Analyze the differences between _import_structure objects and TYPE_CHECKING objects found in an init.
Args:
import_dict_objects (`Dict[str, List[str]]`):
A dictionary mapping backend names (`"none"` for the objects independent of any specific backend) to
list of imported objects.
type_hint_objects (`Dict[str, List[str]]`):
A dictionary mapping backend names (`"none"` for the objects independent of any specific backend) to
list of imported objects.
Returns:
`List[str]`: The list of errors corresponding to mismatches.
"""
def find_duplicates(seq):
return [k for k, v in collections.Counter(seq).items() if v > 1]
# If one backend is missing from the other part of the init, error early.
if list(import_dict_objects.keys()) != list(type_hint_objects.keys()):
return ["Both sides of the init do not have the same backends!"]
errors = []
# Find all errors.
for key in import_dict_objects.keys():
# Duplicate imports in any half.
duplicate_imports = find_duplicates(import_dict_objects[key])
if duplicate_imports:
errors.append(f"Duplicate _import_structure definitions for: {duplicate_imports}")
duplicate_type_hints = find_duplicates(type_hint_objects[key])
if duplicate_type_hints:
errors.append(f"Duplicate TYPE_CHECKING objects for: {duplicate_type_hints}")
# Missing imports in either part of the init.
if sorted(set(import_dict_objects[key])) != sorted(set(type_hint_objects[key])):
name = "base imports" if key == "none" else f"{key} backend"
errors.append(f"Differences for {name}:")
for a in type_hint_objects[key]:
if a not in import_dict_objects[key]:
errors.append(f" {a} in TYPE_HINT but not in _import_structure.")
for a in import_dict_objects[key]:
if a not in type_hint_objects[key]:
errors.append(f" {a} in _import_structure but not in TYPE_HINT.")
return errors
def check_all_inits():
"""
Check all inits in the transformers repo and raise an error if at least one does not define the same objects in
both halves.
"""
failures = []
for root, _, files in os.walk(PATH_TO_TRANSFORMERS):
if "__init__.py" in files:
fname = os.path.join(root, "__init__.py")
objects = parse_init(fname)
if objects is not None:
errors = analyze_results(*objects)
if len(errors) > 0:
errors[0] = f"Problem in {fname}, both halves do not define the same objects.\n{errors[0]}"
failures.append("\n".join(errors))
if len(failures) > 0:
raise ValueError("\n\n".join(failures))
def get_transformers_submodules() -> List[str]:
"""
Returns the list of Transformers submodules.
"""
submodules = []
for path, directories, files in os.walk(PATH_TO_TRANSFORMERS):
for folder in directories:
# Ignore private modules
if folder.startswith("_"):
directories.remove(folder)
continue
# Ignore leftovers from branches (empty folders apart from pycache)
if len(list((Path(path) / folder).glob("*.py"))) == 0:
continue
short_path = str((Path(path) / folder).relative_to(PATH_TO_TRANSFORMERS))
submodule = short_path.replace(os.path.sep, ".")
submodules.append(submodule)
for fname in files:
if fname == "__init__.py":
continue
short_path = str((Path(path) / fname).relative_to(PATH_TO_TRANSFORMERS))
submodule = short_path.replace(".py", "").replace(os.path.sep, ".")
if len(submodule.split(".")) == 1:
submodules.append(submodule)
return submodules
IGNORE_SUBMODULES = [
"convert_pytorch_checkpoint_to_tf2",
"modeling_flax_pytorch_utils",
"models.esm.openfold_utils",
"modeling_attn_mask_utils",
"safetensors_conversion",
]
def check_submodules():
"""
Check all submodules of Transformers are properly registered in the main init. Error otherwise.
"""
# This is to make sure the transformers module imported is the one in the repo.
from transformers.utils import direct_transformers_import
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
import_structure_keys = set(transformers._import_structure.keys())
# This contains all the base keys of the _import_structure object defined in the init, but if the user is missing
# some optional dependencies, they may not have all of them. Thus we read the init to find all additions and
# (potentially re-)add them.
with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r") as f:
init_content = f.read()
import_structure_keys.update(set(re.findall(r"import_structure\[\"([^\"]*)\"\]", init_content)))
module_not_registered = [
module
for module in get_transformers_submodules()
if module not in IGNORE_SUBMODULES and module not in import_structure_keys
]
if len(module_not_registered) > 0:
list_of_modules = "\n".join(f"- {module}" for module in module_not_registered)
raise ValueError(
"The following submodules are not properly registed in the main init of Transformers:\n"
f"{list_of_modules}\n"
"Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value."
)
if __name__ == "__main__":
check_all_inits()
check_submodules()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_doctest_list.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script is responsible for cleaning the list of doctests by making sure the entries all exist and are in
alphabetical order.
Usage (from the root of the repo):
Check that the doctest list is properly sorted and all files exist (used in `make repo-consistency`):
```bash
python utils/check_doctest_list.py
```
Auto-sort the doctest list if it is not properly sorted (used in `make fix-copies`):
```bash
python utils/check_doctest_list.py --fix_and_overwrite
```
"""
import argparse
import os
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_doctest_list.py
REPO_PATH = "."
DOCTEST_FILE_PATHS = ["not_doctested.txt", "slow_documentation_tests.txt"]
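# Each line of these files is expected to start with a path relative to the repo root (anything after the first space
# is ignored), e.g. a line like "docs/source/en/index.md" (illustrative).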
def clean_doctest_list(doctest_file: str, overwrite: bool = False):
"""
Cleans the doctest in a given file.
Args:
doctest_file (`str`):
The path to the doctest file to check or clean.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to fix problems. If `False`, will error when the file is not clean.
"""
non_existent_paths = []
all_paths = []
with open(doctest_file, "r", encoding="utf-8") as f:
for line in f:
line = line.strip().split(" ")[0]
path = os.path.join(REPO_PATH, line)
if not (os.path.isfile(path) or os.path.isdir(path)):
non_existent_paths.append(line)
all_paths.append(line)
if len(non_existent_paths) > 0:
non_existent_paths = "\n".join([f"- {f}" for f in non_existent_paths])
raise ValueError(f"`{doctest_file}` contains non-existent paths:\n{non_existent_paths}")
sorted_paths = sorted(all_paths)
if all_paths != sorted_paths:
if not overwrite:
raise ValueError(
f"Files in `{doctest_file}` are not in alphabetical order, run `make fix-copies` to fix "
"this automatically."
)
with open(doctest_file, "w", encoding="utf-8") as f:
f.write("\n".join(sorted_paths) + "\n")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
args = parser.parse_args()
for doctest_file in DOCTEST_FILE_PATHS:
doctest_file = os.path.join(REPO_PATH, "utils", doctest_file)
clean_doctest_list(doctest_file, args.fix_and_overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_config_docstrings.py | # coding=utf-8
# Copyright 2022 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import re
from transformers.utils import direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_config_docstrings.py
PATH_TO_TRANSFORMERS = "src/transformers"
# This is to make sure the transformers module imported is the one in the repo.
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING
# Regex pattern used to find the checkpoint mentioned in the docstring of `config_class`.
# For example, `[google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)`
_re_checkpoint = re.compile(r"\[(.+?)\]\((https://huggingface\.co/.+?)\)")
CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK = {
"DecisionTransformerConfig",
"EncoderDecoderConfig",
"MusicgenConfig",
"RagConfig",
"SpeechEncoderDecoderConfig",
"TimmBackboneConfig",
"VisionEncoderDecoderConfig",
"VisionTextDualEncoderConfig",
"LlamaConfig",
}
def get_checkpoint_from_config_class(config_class):
checkpoint = None
# source code of `config_class`
config_source = inspect.getsource(config_class)
checkpoints = _re_checkpoint.findall(config_source)
# Each `checkpoint` is a tuple of a checkpoint name and a checkpoint link.
# For example, `('google-bert/bert-base-uncased', 'https://huggingface.co/google-bert/bert-base-uncased')`
for ckpt_name, ckpt_link in checkpoints:
# allow the link to end with `/`
if ckpt_link.endswith("/"):
ckpt_link = ckpt_link[:-1]
# verify the checkpoint name corresponds to the checkpoint link
ckpt_link_from_name = f"https://huggingface.co/{ckpt_name}"
if ckpt_link == ckpt_link_from_name:
checkpoint = ckpt_name
break
return checkpoint
def check_config_docstrings_have_checkpoints():
configs_without_checkpoint = []
for config_class in list(CONFIG_MAPPING.values()):
# Skip deprecated models
if "models.deprecated" in config_class.__module__:
continue
checkpoint = get_checkpoint_from_config_class(config_class)
name = config_class.__name__
if checkpoint is None and name not in CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK:
configs_without_checkpoint.append(name)
if len(configs_without_checkpoint) > 0:
message = "\n".join(sorted(configs_without_checkpoint))
raise ValueError(
f"The following configurations don't contain any valid checkpoint:\n{message}\n\n"
"The requirement is to include a link pointing to one of the models of this architecture in the "
"docstring of the config classes listed above. The link should have be a markdown format like "
"[myorg/mymodel](https://huggingface.co/myorg/mymodel)."
)
if __name__ == "__main__":
check_config_docstrings_have_checkpoints()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/notification_service_quantization.py | # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ast
import json
import os
import sys
import time
from typing import Dict
from get_ci_error_statistics import get_jobs
from notification_service import (
Message,
handle_stacktraces,
handle_test_results,
prepare_reports,
retrieve_artifact,
retrieve_available_artifacts,
)
from slack_sdk import WebClient
client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"])
class QuantizationMessage(Message):
def __init__(
self,
title: str,
results: Dict,
):
self.title = title
# Failures and success of the modeling tests
self.n_success = sum(r["success"] for r in results.values())
self.single_gpu_failures = sum(r["failed"]["single"] for r in results.values())
self.multi_gpu_failures = sum(r["failed"]["multi"] for r in results.values())
self.n_failures = self.single_gpu_failures + self.multi_gpu_failures
self.n_tests = self.n_failures + self.n_success
self.results = results
self.thread_ts = None
@property
def payload(self) -> str:
blocks = [self.header]
if self.n_failures > 0:
blocks.append(self.failures_overview)
blocks.append(self.failures_detailed)
if self.n_failures == 0:
blocks.append(self.no_failures)
return json.dumps(blocks)
@property
def time(self) -> str:
all_results = self.results.values()
time_spent = []
for r in all_results:
if len(r["time_spent"]):
time_spent.extend([x for x in r["time_spent"].split(", ") if len(x.strip())])
total_secs = 0
for time in time_spent:
time_parts = time.split(":")
# Time can be formatted as xx:xx:xx, as .xx, or as x.xx if the time spent was less than a minute.
if len(time_parts) == 1:
time_parts = [0, 0, time_parts[0]]
hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2])
total_secs += hours * 3600 + minutes * 60 + seconds
hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60
return f"{int(hours)}h{int(minutes)}m{int(seconds)}s"
@property
def failures_overview(self) -> Dict:
return {
"type": "section",
"text": {
"type": "plain_text",
"text": (
f"There were {self.n_failures} failures, out of {self.n_tests} tests.\n"
f"The suite ran in {self.time}."
),
"emoji": True,
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Check Action results", "emoji": True},
"url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}",
},
}
@property
def failures_detailed(self) -> Dict:
failures = {k: v["failed"] for k, v in self.results.items()}
individual_reports = []
for key, value in failures.items():
device_report = self.get_device_report(value)
if sum(value.values()):
report = f"{device_report}{key}"
individual_reports.append(report)
header = "Single | Multi | Category\n"
failures_report = prepare_reports(
title="The following quantization tests had failures", header=header, reports=individual_reports
)
return {"type": "section", "text": {"type": "mrkdwn", "text": failures_report}}
def post(self):
payload = self.payload
print("Sending the following payload")
print(json.dumps({"blocks": json.loads(payload)}))
text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed."
self.thread_ts = client.chat_postMessage(
channel=SLACK_REPORT_CHANNEL_ID,
blocks=payload,
text=text,
)
def post_reply(self):
if self.thread_ts is None:
raise ValueError("Can only post reply if a post has been made.")
for job, job_result in self.results.items():
if len(job_result["failures"]):
for device, failures in job_result["failures"].items():
blocks = self.get_reply_blocks(
job,
job_result,
failures,
device,
text=f'Number of failures: {job_result["failed"][device]}',
)
print("Sending the following reply")
print(json.dumps({"blocks": blocks}))
client.chat_postMessage(
channel="#transformers-ci-daily-quantization",
text=f"Results for {job}",
blocks=blocks,
thread_ts=self.thread_ts["ts"],
)
time.sleep(1)
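# Note on the `time` property above: entries may look like "1:02:03" (h:mm:ss) or "45.3" (seconds only), so for
# example "0:01:30" and "0:00:45" (illustrative values) are reported together as "0h2m15s".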
if __name__ == "__main__":
setup_status = os.environ.get("SETUP_STATUS")
SLACK_REPORT_CHANNEL_ID = os.environ["SLACK_REPORT_CHANNEL"]
setup_failed = setup_status is not None and setup_status != "success"
# This env. variable is set in workflow file (under the job `send_results`).
ci_event = os.environ["CI_EVENT"]
title = f"๐ค Results of the {ci_event} tests."
if setup_failed:
Message.error_out(
title, ci_title="", runner_not_available=False, runner_failed=False, setup_failed=setup_failed
)
exit(0)
arguments = sys.argv[1:][0]
try:
quantization_matrix = ast.literal_eval(arguments)
# Need to change from elements like `quantization/bnb` to `quantization_bnb` (the ones used as artifact names).
quantization_matrix = [x.replace("quantization/", "quantization_") for x in quantization_matrix]
except SyntaxError:
Message.error_out(title, ci_title="")
raise ValueError("Errored out.")
available_artifacts = retrieve_available_artifacts()
quantization_results = {
quant: {
"failed": {"single": 0, "multi": 0},
"success": 0,
"time_spent": "",
"failures": {},
"job_link": {},
}
for quant in quantization_matrix
if f"run_quantization_torch_gpu_{ quant }_test_reports" in available_artifacts
}
github_actions_jobs = get_jobs(
workflow_run_id=os.environ["GITHUB_RUN_ID"], token=os.environ["ACCESS_REPO_INFO_TOKEN"]
)
github_actions_job_links = {job["name"]: job["html_url"] for job in github_actions_jobs}
artifact_name_to_job_map = {}
for job in github_actions_jobs:
for step in job["steps"]:
if step["name"].startswith("Test suite reports artifacts: "):
artifact_name = step["name"][len("Test suite reports artifacts: ") :]
artifact_name_to_job_map[artifact_name] = job
break
for quant in quantization_results.keys():
for artifact_path in available_artifacts[f"run_quantization_torch_gpu_{ quant }_test_reports"].paths:
artifact = retrieve_artifact(artifact_path["path"], artifact_path["gpu"])
if "stats" in artifact:
# Link to the GitHub Action job
job = artifact_name_to_job_map[artifact_path["path"]]
quantization_results[quant]["job_link"][artifact_path["gpu"]] = job["html_url"]
failed, success, time_spent = handle_test_results(artifact["stats"])
quantization_results[quant]["failed"][artifact_path["gpu"]] += failed
quantization_results[quant]["success"] += success
quantization_results[quant]["time_spent"] += time_spent[1:-1] + ", "
stacktraces = handle_stacktraces(artifact["failures_line"])
for line in artifact["summary_short"].split("\n"):
if line.startswith("FAILED "):
line = line[len("FAILED ") :]
line = line.split()[0].replace("\n", "")
if artifact_path["gpu"] not in quantization_results[quant]["failures"]:
quantization_results[quant]["failures"][artifact_path["gpu"]] = []
quantization_results[quant]["failures"][artifact_path["gpu"]].append(
{"line": line, "trace": stacktraces.pop(0)}
)
message = QuantizationMessage(
title,
results=quantization_results,
)
message.post()
message.post_reply()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/not_doctested.txt | docs/source/en/_config.py
docs/source/en/accelerate.md
docs/source/en/add_new_model.md
docs/source/en/add_new_pipeline.md
docs/source/en/attention.md
docs/source/en/benchmarks.md
docs/source/en/bertology.md
docs/source/en/big_models.md
docs/source/en/community.md
docs/source/en/contributing.md
docs/source/en/create_a_model.md
docs/source/en/custom_models.md
docs/source/en/custom_tools.md
docs/source/en/debugging.md
docs/source/en/fast_tokenizers.md
docs/source/en/glossary.md
docs/source/en/hpo_train.md
docs/source/en/index.md
docs/source/en/installation.md
docs/source/en/internal/audio_utils.md
docs/source/en/internal/file_utils.md
docs/source/en/internal/image_processing_utils.md
docs/source/en/internal/modeling_utils.md
docs/source/en/internal/pipelines_utils.md
docs/source/en/internal/time_series_utils.md
docs/source/en/internal/tokenization_utils.md
docs/source/en/internal/trainer_utils.md
docs/source/en/llm_tutorial.md
docs/source/en/main_classes/agent.md
docs/source/en/main_classes/callback.md
docs/source/en/main_classes/configuration.md
docs/source/en/main_classes/data_collator.md
docs/source/en/main_classes/deepspeed.md
docs/source/en/main_classes/feature_extractor.md
docs/source/en/main_classes/image_processor.md
docs/source/en/main_classes/keras_callbacks.md
docs/source/en/main_classes/logging.md
docs/source/en/main_classes/model.md
docs/source/en/main_classes/onnx.md
docs/source/en/main_classes/optimizer_schedules.md
docs/source/en/main_classes/output.md
docs/source/en/main_classes/pipelines.md
docs/source/en/main_classes/processors.md
docs/source/en/main_classes/quantization.md
docs/source/en/main_classes/tokenizer.md
docs/source/en/main_classes/trainer.md
docs/source/en/model_doc/albert.md
docs/source/en/model_doc/align.md
docs/source/en/model_doc/altclip.md
docs/source/en/model_doc/audio-spectrogram-transformer.md
docs/source/en/model_doc/auto.md
docs/source/en/model_doc/autoformer.md
docs/source/en/model_doc/bark.md
docs/source/en/model_doc/bart.md
docs/source/en/model_doc/barthez.md
docs/source/en/model_doc/bartpho.md
docs/source/en/model_doc/beit.md
docs/source/en/model_doc/bert-generation.md
docs/source/en/model_doc/bert-japanese.md
docs/source/en/model_doc/bert.md
docs/source/en/model_doc/bertweet.md
docs/source/en/model_doc/big_bird.md
docs/source/en/model_doc/bigbird_pegasus.md
docs/source/en/model_doc/biogpt.md
docs/source/en/model_doc/bit.md
docs/source/en/model_doc/blenderbot-small.md
docs/source/en/model_doc/blenderbot.md
docs/source/en/model_doc/blip-2.md
docs/source/en/model_doc/blip.md
docs/source/en/model_doc/bloom.md
docs/source/en/model_doc/bort.md
docs/source/en/model_doc/bridgetower.md
docs/source/en/model_doc/camembert.md
docs/source/en/model_doc/canine.md
docs/source/en/model_doc/chinese_clip.md
docs/source/en/model_doc/clap.md
docs/source/en/model_doc/clip.md
docs/source/en/model_doc/clipseg.md
docs/source/en/model_doc/codegen.md
docs/source/en/model_doc/conditional_detr.md
docs/source/en/model_doc/convbert.md
docs/source/en/model_doc/convnext.md
docs/source/en/model_doc/convnextv2.md
docs/source/en/model_doc/cpm.md
docs/source/en/model_doc/cpmant.md
docs/source/en/model_doc/ctrl.md
docs/source/en/model_doc/cvt.md
docs/source/en/model_doc/data2vec.md
docs/source/en/model_doc/deberta-v2.md
docs/source/en/model_doc/deberta.md
docs/source/en/model_doc/decision_transformer.md
docs/source/en/model_doc/deformable_detr.md
docs/source/en/model_doc/deit.md
docs/source/en/model_doc/deplot.md
docs/source/en/model_doc/deta.md
docs/source/en/model_doc/detr.md
docs/source/en/model_doc/dialogpt.md
docs/source/en/model_doc/dinat.md
docs/source/en/model_doc/dinov2.md
docs/source/en/model_doc/distilbert.md
docs/source/en/model_doc/dit.md
docs/source/en/model_doc/dpr.md
docs/source/en/model_doc/dpt.md
docs/source/en/model_doc/efficientformer.md
docs/source/en/model_doc/efficientnet.md
docs/source/en/model_doc/electra.md
docs/source/en/model_doc/encodec.md
docs/source/en/model_doc/ernie.md
docs/source/en/model_doc/ernie_m.md
docs/source/en/model_doc/esm.md
docs/source/en/model_doc/flan-t5.md
docs/source/en/model_doc/flan-ul2.md
docs/source/en/model_doc/flaubert.md
docs/source/en/model_doc/flava.md
docs/source/en/model_doc/fnet.md
docs/source/en/model_doc/focalnet.md
docs/source/en/model_doc/fsmt.md
docs/source/en/model_doc/funnel.md
docs/source/en/model_doc/git.md
docs/source/en/model_doc/glpn.md
docs/source/en/model_doc/gpt-sw3.md
docs/source/en/model_doc/gpt2.md
docs/source/en/model_doc/gpt_bigcode.md
docs/source/en/model_doc/gpt_neo.md
docs/source/en/model_doc/gpt_neox.md
docs/source/en/model_doc/gpt_neox_japanese.md
docs/source/en/model_doc/gptj.md
docs/source/en/model_doc/gptsan-japanese.md
docs/source/en/model_doc/graphormer.md
docs/source/en/model_doc/groupvit.md
docs/source/en/model_doc/herbert.md
docs/source/en/model_doc/hubert.md
docs/source/en/model_doc/ibert.md
docs/source/en/model_doc/idefics.md
docs/source/en/model_doc/imagegpt.md
docs/source/en/model_doc/informer.md
docs/source/en/model_doc/instructblip.md
docs/source/en/model_doc/jukebox.md
docs/source/en/model_doc/layoutlm.md
docs/source/en/model_doc/layoutlmv2.md
docs/source/en/model_doc/layoutlmv3.md
docs/source/en/model_doc/layoutxlm.md
docs/source/en/model_doc/led.md
docs/source/en/model_doc/levit.md
docs/source/en/model_doc/lilt.md
docs/source/en/model_doc/llama.md
docs/source/en/model_doc/llama2.md
docs/source/en/model_doc/llava.md
docs/source/en/model_doc/llava_next.md
docs/source/en/model_doc/longformer.md
docs/source/en/model_doc/longt5.md
docs/source/en/model_doc/luke.md
docs/source/en/model_doc/lxmert.md
docs/source/en/model_doc/m2m_100.md
docs/source/en/model_doc/madlad-400.md
docs/source/en/model_doc/marian.md
docs/source/en/model_doc/mask2former.md
docs/source/en/model_doc/maskformer.md
docs/source/en/model_doc/matcha.md
docs/source/en/model_doc/mbart.md
docs/source/en/model_doc/mctct.md
docs/source/en/model_doc/mega.md
docs/source/en/model_doc/megatron-bert.md
docs/source/en/model_doc/megatron_gpt2.md
docs/source/en/model_doc/mgp-str.md
docs/source/en/model_doc/mistral.md
docs/source/en/model_doc/mixtral.md
docs/source/en/model_doc/mluke.md
docs/source/en/model_doc/mms.md
docs/source/en/model_doc/mobilebert.md
docs/source/en/model_doc/mobilenet_v1.md
docs/source/en/model_doc/mobilenet_v2.md
docs/source/en/model_doc/mobilevit.md
docs/source/en/model_doc/mobilevitv2.md
docs/source/en/model_doc/mpnet.md
docs/source/en/model_doc/mpt.md
docs/source/en/model_doc/mra.md
docs/source/en/model_doc/mt5.md
docs/source/en/model_doc/musicgen.md
docs/source/en/model_doc/musicgen_melody.md
docs/source/en/model_doc/mvp.md
docs/source/en/model_doc/nat.md
docs/source/en/model_doc/nezha.md
docs/source/en/model_doc/nllb-moe.md
docs/source/en/model_doc/nllb.md
docs/source/en/model_doc/nystromformer.md
docs/source/en/model_doc/oneformer.md
docs/source/en/model_doc/open-llama.md
docs/source/en/model_doc/openai-gpt.md
docs/source/en/model_doc/opt.md
docs/source/en/model_doc/owlvit.md
docs/source/en/model_doc/pegasus.md
docs/source/en/model_doc/pegasus_x.md
docs/source/en/model_doc/perceiver.md
docs/source/en/model_doc/phobert.md
docs/source/en/model_doc/pix2struct.md
docs/source/en/model_doc/plbart.md
docs/source/en/model_doc/poolformer.md
docs/source/en/model_doc/pop2piano.md
docs/source/en/model_doc/prophetnet.md
docs/source/en/model_doc/pvt.md
docs/source/en/model_doc/qdqbert.md
docs/source/en/model_doc/qwen2.md
docs/source/en/model_doc/qwen2_moe.md
docs/source/en/model_doc/rag.md
docs/source/en/model_doc/realm.md
docs/source/en/model_doc/reformer.md
docs/source/en/model_doc/regnet.md
docs/source/en/model_doc/rembert.md
docs/source/en/model_doc/resnet.md
docs/source/en/model_doc/retribert.md
docs/source/en/model_doc/roberta-prelayernorm.md
docs/source/en/model_doc/roberta.md
docs/source/en/model_doc/roc_bert.md
docs/source/en/model_doc/roformer.md
docs/source/en/model_doc/rwkv.md
docs/source/en/model_doc/sam.md
docs/source/en/model_doc/segformer.md
docs/source/en/model_doc/sew-d.md
docs/source/en/model_doc/sew.md
docs/source/en/model_doc/speech-encoder-decoder.md
docs/source/en/model_doc/speech_to_text_2.md
docs/source/en/model_doc/speecht5.md
docs/source/en/model_doc/splinter.md
docs/source/en/model_doc/squeezebert.md
docs/source/en/model_doc/swiftformer.md
docs/source/en/model_doc/swin.md
docs/source/en/model_doc/swin2sr.md
docs/source/en/model_doc/swinv2.md
docs/source/en/model_doc/table-transformer.md
docs/source/en/model_doc/tapas.md
docs/source/en/model_doc/time_series_transformer.md
docs/source/en/model_doc/timesformer.md
docs/source/en/model_doc/trajectory_transformer.md
docs/source/en/model_doc/transfo-xl.md
docs/source/en/model_doc/trocr.md
docs/source/en/model_doc/tvlt.md
docs/source/en/model_doc/ul2.md
docs/source/en/model_doc/umt5.md
docs/source/en/model_doc/unispeech-sat.md
docs/source/en/model_doc/unispeech.md
docs/source/en/model_doc/upernet.md
docs/source/en/model_doc/van.md
docs/source/en/model_doc/videomae.md
docs/source/en/model_doc/vilt.md
docs/source/en/model_doc/vipllava.md
docs/source/en/model_doc/vision-encoder-decoder.md
docs/source/en/model_doc/vision-text-dual-encoder.md
docs/source/en/model_doc/visual_bert.md
docs/source/en/model_doc/vit.md
docs/source/en/model_doc/vit_hybrid.md
docs/source/en/model_doc/vit_mae.md
docs/source/en/model_doc/vit_msn.md
docs/source/en/model_doc/vivit.md
docs/source/en/model_doc/wav2vec2-conformer.md
docs/source/en/model_doc/wav2vec2.md
docs/source/en/model_doc/wav2vec2_phoneme.md
docs/source/en/model_doc/wavlm.md
docs/source/en/model_doc/whisper.md
docs/source/en/model_doc/xclip.md
docs/source/en/model_doc/xglm.md
docs/source/en/model_doc/xlm-prophetnet.md
docs/source/en/model_doc/xlm-roberta-xl.md
docs/source/en/model_doc/xlm-roberta.md
docs/source/en/model_doc/xlm-v.md
docs/source/en/model_doc/xlm.md
docs/source/en/model_doc/xlnet.md
docs/source/en/model_doc/xls_r.md
docs/source/en/model_doc/xlsr_wav2vec2.md
docs/source/en/model_doc/xmod.md
docs/source/en/model_doc/yolos.md
docs/source/en/model_doc/yoso.md
docs/source/en/model_memory_anatomy.md
docs/source/en/model_sharing.md
docs/source/en/model_summary.md
docs/source/en/multilingual.md
docs/source/en/notebooks.md
docs/source/en/pad_truncation.md
docs/source/en/peft.md
docs/source/en/perf_hardware.md
docs/source/en/perf_infer_cpu.md
docs/source/en/perf_infer_gpu_one.md
docs/source/en/perf_torch_compile.md
docs/source/en/perf_train_cpu.md
docs/source/en/perf_train_cpu_many.md
docs/source/en/perf_train_gpu_many.md
docs/source/en/perf_train_gpu_one.md
docs/source/en/perf_train_special.md
docs/source/en/perf_train_tpu_tf.md
docs/source/en/performance.md
docs/source/en/perplexity.md
docs/source/en/philosophy.md
docs/source/en/pipeline_webserver.md
docs/source/en/pr_checks.md
docs/source/en/preprocessing.md
docs/source/en/run_scripts.md
docs/source/en/sagemaker.md
docs/source/en/serialization.md
docs/source/en/tasks/asr.md
docs/source/en/tasks/audio_classification.md
docs/source/en/tasks/document_question_answering.md
docs/source/en/tasks/idefics.md
docs/source/en/tasks/image_captioning.md
docs/source/en/tasks/image_classification.md
docs/source/en/tasks/language_modeling.md
docs/source/en/tasks/masked_language_modeling.md
docs/source/en/tasks/monocular_depth_estimation.md
docs/source/en/tasks/multiple_choice.md
docs/source/en/tasks/object_detection.md
docs/source/en/tasks/question_answering.md
docs/source/en/tasks/semantic_segmentation.md
docs/source/en/tasks/sequence_classification.md
docs/source/en/tasks/summarization.md
docs/source/en/tasks/text-to-speech.md
docs/source/en/tasks/token_classification.md
docs/source/en/tasks/translation.md
docs/source/en/tasks/video_classification.md
docs/source/en/tasks/visual_question_answering.md
docs/source/en/tasks/zero_shot_image_classification.md
docs/source/en/tasks/zero_shot_object_detection.md
docs/source/en/tasks_explained.md
docs/source/en/tf_xla.md
docs/source/en/tflite.md
docs/source/en/tokenizer_summary.md
docs/source/en/torchscript.md
docs/source/en/training.md
docs/source/en/transformers_agents.md
docs/source/en/troubleshooting.md
src/transformers/activations.py
src/transformers/activations_tf.py
src/transformers/audio_utils.py
src/transformers/benchmark/benchmark.py
src/transformers/benchmark/benchmark_args.py
src/transformers/benchmark/benchmark_args_tf.py
src/transformers/benchmark/benchmark_args_utils.py
src/transformers/benchmark/benchmark_tf.py
src/transformers/benchmark/benchmark_utils.py
src/transformers/commands/add_new_model_like.py
src/transformers/commands/convert.py
src/transformers/commands/download.py
src/transformers/commands/env.py
src/transformers/commands/lfs.py
src/transformers/commands/pt_to_tf.py
src/transformers/commands/run.py
src/transformers/commands/serving.py
src/transformers/commands/train.py
src/transformers/commands/transformers_cli.py
src/transformers/commands/user.py
src/transformers/configuration_utils.py
src/transformers/convert_graph_to_onnx.py
src/transformers/convert_pytorch_checkpoint_to_tf2.py
src/transformers/convert_slow_tokenizer.py
src/transformers/convert_slow_tokenizers_checkpoints_to_fast.py
src/transformers/convert_tf_hub_seq_to_seq_bert_to_pytorch.py
src/transformers/data/data_collator.py
src/transformers/data/datasets/glue.py
src/transformers/data/datasets/language_modeling.py
src/transformers/data/datasets/squad.py
src/transformers/data/metrics/squad_metrics.py
src/transformers/data/processors/glue.py
src/transformers/data/processors/squad.py
src/transformers/data/processors/utils.py
src/transformers/data/processors/xnli.py
src/transformers/debug_utils.py
src/transformers/deepspeed.py
src/transformers/dependency_versions_check.py
src/transformers/dependency_versions_table.py
src/transformers/dynamic_module_utils.py
src/transformers/feature_extraction_sequence_utils.py
src/transformers/feature_extraction_utils.py
src/transformers/file_utils.py
src/transformers/hf_argparser.py
src/transformers/hyperparameter_search.py
src/transformers/image_processing_utils.py
src/transformers/image_transforms.py
src/transformers/image_utils.py
src/transformers/integrations/bitsandbytes.py
src/transformers/integrations/deepspeed.py
src/transformers/integrations/integration_utils.py
src/transformers/integrations/peft.py
src/transformers/keras_callbacks.py
src/transformers/modelcard.py
src/transformers/modeling_flax_outputs.py
src/transformers/modeling_flax_pytorch_utils.py
src/transformers/modeling_flax_utils.py
src/transformers/modeling_outputs.py
src/transformers/modeling_tf_outputs.py
src/transformers/modeling_tf_pytorch_utils.py
src/transformers/modeling_tf_utils.py
src/transformers/modeling_utils.py
src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/albert/modeling_flax_albert.py
src/transformers/models/align/configuration_align.py
src/transformers/models/align/convert_align_tf_to_hf.py
src/transformers/models/align/modeling_align.py
src/transformers/models/altclip/configuration_altclip.py
src/transformers/models/altclip/modeling_altclip.py
src/transformers/models/audio_spectrogram_transformer/configuration_audio_spectrogram_transformer.py
src/transformers/models/audio_spectrogram_transformer/convert_audio_spectrogram_transformer_original_to_pytorch.py
src/transformers/models/auto/auto_factory.py
src/transformers/models/auto/configuration_auto.py
src/transformers/models/auto/modeling_auto.py
src/transformers/models/auto/modeling_flax_auto.py
src/transformers/models/auto/modeling_tf_auto.py
src/transformers/models/autoformer/configuration_autoformer.py
src/transformers/models/autoformer/modeling_autoformer.py
src/transformers/models/bark/convert_suno_to_hf.py
src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/bart/modeling_flax_bart.py
src/transformers/models/bart/modeling_tf_bart.py
src/transformers/models/beit/convert_beit_unilm_to_pytorch.py
src/transformers/models/beit/modeling_flax_beit.py
src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py
src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py
src/transformers/models/bert/convert_bert_token_dropping_original_tf2_checkpoint_to_pytorch.py
src/transformers/models/bert/modeling_flax_bert.py
src/transformers/models/bert_generation/modeling_bert_generation.py
src/transformers/models/big_bird/convert_bigbird_original_tf_checkpoint_to_pytorch.py
src/transformers/models/big_bird/modeling_flax_big_bird.py
src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py
src/transformers/models/biogpt/configuration_biogpt.py
src/transformers/models/biogpt/convert_biogpt_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/biogpt/modeling_biogpt.py
src/transformers/models/bit/configuration_bit.py
src/transformers/models/bit/convert_bit_to_pytorch.py
src/transformers/models/bit/modeling_bit.py
src/transformers/models/blenderbot/convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/blenderbot/modeling_flax_blenderbot.py
src/transformers/models/blenderbot/modeling_tf_blenderbot.py
src/transformers/models/blenderbot_small/modeling_flax_blenderbot_small.py
src/transformers/models/blenderbot_small/modeling_tf_blenderbot_small.py
src/transformers/models/blip/configuration_blip.py
src/transformers/models/blip/convert_blip_original_pytorch_to_hf.py
src/transformers/models/blip/modeling_blip_text.py
src/transformers/models/blip/modeling_tf_blip_text.py
src/transformers/models/blip_2/configuration_blip_2.py
src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py
src/transformers/models/blip_2/modeling_blip_2.py
src/transformers/models/bloom/convert_bloom_original_checkpoint_to_pytorch.py
src/transformers/models/bloom/modeling_bloom.py
src/transformers/models/bloom/modeling_flax_bloom.py
src/transformers/models/bridgetower/configuration_bridgetower.py
src/transformers/models/bridgetower/modeling_bridgetower.py
src/transformers/models/bros/convert_bros_to_pytorch.py
src/transformers/models/byt5/convert_byt5_original_tf_checkpoint_to_pytorch.py
src/transformers/models/camembert/modeling_camembert.py
src/transformers/models/camembert/modeling_tf_camembert.py
src/transformers/models/canine/convert_canine_original_tf_checkpoint_to_pytorch.py
src/transformers/models/chinese_clip/configuration_chinese_clip.py
src/transformers/models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py
src/transformers/models/chinese_clip/modeling_chinese_clip.py
src/transformers/models/clap/convert_clap_original_pytorch_to_hf.py
src/transformers/models/clip/convert_clip_original_pytorch_to_hf.py
src/transformers/models/clip/modeling_clip.py
src/transformers/models/clip/modeling_flax_clip.py
src/transformers/models/clip/modeling_tf_clip.py
src/transformers/models/clipseg/configuration_clipseg.py
src/transformers/models/clipseg/convert_clipseg_original_pytorch_to_hf.py
src/transformers/models/codegen/modeling_codegen.py
src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/convbert/convert_convbert_original_tf1_checkpoint_to_pytorch_and_tf2.py
src/transformers/models/convbert/modeling_convbert.py
src/transformers/models/convbert/modeling_tf_convbert.py
src/transformers/models/convnext/convert_convnext_to_pytorch.py
src/transformers/models/convnext/modeling_tf_convnext.py
src/transformers/models/convnextv2/configuration_convnextv2.py
src/transformers/models/convnextv2/convert_convnextv2_to_pytorch.py
src/transformers/models/convnextv2/modeling_convnextv2.py
src/transformers/models/cpmant/configuration_cpmant.py
src/transformers/models/cpmant/modeling_cpmant.py
src/transformers/models/cpmant/tokenization_cpmant.py
src/transformers/models/ctrl/modeling_tf_ctrl.py
src/transformers/models/cvt/convert_cvt_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/cvt/modeling_tf_cvt.py
src/transformers/models/data2vec/convert_data2vec_audio_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/data2vec/convert_data2vec_text_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/data2vec/convert_data2vec_vision_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/data2vec/modeling_data2vec_text.py
src/transformers/models/data2vec/modeling_tf_data2vec_vision.py
src/transformers/models/deberta/modeling_tf_deberta.py
src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py
src/transformers/models/decision_transformer/modeling_decision_transformer.py
src/transformers/models/deformable_detr/convert_deformable_detr_to_pytorch.py
src/transformers/models/deformable_detr/load_custom.py
src/transformers/models/deit/convert_deit_timm_to_pytorch.py
src/transformers/models/deprecated/bort/convert_bort_original_gluonnlp_checkpoint_to_pytorch.py
src/transformers/models/deprecated/mctct/configuration_mctct.py
src/transformers/models/deprecated/mctct/feature_extraction_mctct.py
src/transformers/models/deprecated/mctct/modeling_mctct.py
src/transformers/models/deprecated/mctct/processing_mctct.py
src/transformers/models/deprecated/mmbt/configuration_mmbt.py
src/transformers/models/deprecated/mmbt/modeling_mmbt.py
src/transformers/models/deprecated/open_llama/configuration_open_llama.py
src/transformers/models/deprecated/open_llama/modeling_open_llama.py
src/transformers/models/deprecated/retribert/configuration_retribert.py
src/transformers/models/deprecated/retribert/modeling_retribert.py
src/transformers/models/deprecated/retribert/tokenization_retribert.py
src/transformers/models/deprecated/retribert/tokenization_retribert_fast.py
src/transformers/models/deprecated/tapex/tokenization_tapex.py
src/transformers/models/deprecated/trajectory_transformer/configuration_trajectory_transformer.py
src/transformers/models/deprecated/trajectory_transformer/convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/deprecated/trajectory_transformer/modeling_trajectory_transformer.py
src/transformers/models/deprecated/transfo_xl/convert_transfo_xl_original_tf_checkpoint_to_pytorch.py
src/transformers/models/deprecated/transfo_xl/modeling_tf_transfo_xl.py
src/transformers/models/deprecated/transfo_xl/modeling_tf_transfo_xl_utilities.py
src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py
src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl_utilities.py
src/transformers/models/deprecated/van/configuration_van.py
src/transformers/models/deprecated/van/convert_van_to_pytorch.py
src/transformers/models/deprecated/van/modeling_van.py
src/transformers/models/deta/convert_deta_resnet_to_pytorch.py
src/transformers/models/deta/convert_deta_swin_to_pytorch.py
src/transformers/models/detr/convert_detr_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/detr/convert_detr_to_pytorch.py
src/transformers/models/dialogpt/convert_dialogpt_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/dinov2/configuration_dinov2.py
src/transformers/models/dinov2/convert_dinov2_to_hf.py
src/transformers/models/dinov2/modeling_dinov2.py
src/transformers/models/distilbert/modeling_distilbert.py
src/transformers/models/distilbert/modeling_flax_distilbert.py
src/transformers/models/distilbert/modeling_tf_distilbert.py
src/transformers/models/dit/convert_dit_unilm_to_pytorch.py
src/transformers/models/donut/configuration_donut_swin.py
src/transformers/models/donut/convert_donut_to_pytorch.py
src/transformers/models/donut/modeling_donut_swin.py
src/transformers/models/dpr/convert_dpr_original_checkpoint_to_pytorch.py
src/transformers/models/dpr/modeling_dpr.py
src/transformers/models/dpr/modeling_tf_dpr.py
src/transformers/models/dpt/configuration_dpt.py
src/transformers/models/dpt/convert_dpt_hybrid_to_pytorch.py
src/transformers/models/dpt/convert_dpt_to_pytorch.py
src/transformers/models/efficientformer/configuration_efficientformer.py
src/transformers/models/efficientformer/convert_efficientformer_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/efficientformer/modeling_efficientformer.py
src/transformers/models/efficientnet/configuration_efficientnet.py
src/transformers/models/efficientnet/convert_efficientnet_to_pytorch.py
src/transformers/models/efficientnet/modeling_efficientnet.py
src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py
src/transformers/models/electra/modeling_flax_electra.py
src/transformers/models/encodec/configuration_encodec.py
src/transformers/models/encodec/convert_encodec_checkpoint_to_pytorch.py
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
src/transformers/models/encoder_decoder/modeling_flax_encoder_decoder.py
src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py
src/transformers/models/ernie/modeling_ernie.py
src/transformers/models/esm/configuration_esm.py
src/transformers/models/esm/convert_esm.py
src/transformers/models/esm/modeling_esm.py
src/transformers/models/esm/modeling_esmfold.py
src/transformers/models/esm/modeling_tf_esm.py
src/transformers/models/esm/openfold_utils/chunk_utils.py
src/transformers/models/esm/openfold_utils/data_transforms.py
src/transformers/models/esm/openfold_utils/feats.py
src/transformers/models/esm/openfold_utils/loss.py
src/transformers/models/esm/openfold_utils/protein.py
src/transformers/models/esm/openfold_utils/residue_constants.py
src/transformers/models/esm/openfold_utils/rigid_utils.py
src/transformers/models/esm/openfold_utils/tensor_utils.py
src/transformers/models/falcon/configuration_falcon.py
src/transformers/models/falcon/modeling_falcon.py
src/transformers/models/flaubert/configuration_flaubert.py
src/transformers/models/flaubert/modeling_flaubert.py
src/transformers/models/flaubert/modeling_tf_flaubert.py
src/transformers/models/flava/convert_dalle_to_flava_codebook.py
src/transformers/models/flava/convert_flava_original_pytorch_to_hf.py
src/transformers/models/flava/modeling_flava.py
src/transformers/models/fnet/convert_fnet_original_flax_checkpoint_to_pytorch.py
src/transformers/models/fnet/modeling_fnet.py
src/transformers/models/focalnet/configuration_focalnet.py
src/transformers/models/focalnet/convert_focalnet_to_hf_format.py
src/transformers/models/focalnet/modeling_focalnet.py
src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/fsmt/modeling_fsmt.py
src/transformers/models/funnel/configuration_funnel.py
src/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py
src/transformers/models/funnel/modeling_funnel.py
src/transformers/models/funnel/modeling_tf_funnel.py
src/transformers/models/fuyu/convert_fuyu_model_weights_to_hf.py
src/transformers/models/gemma/configuration_gemma.py
src/transformers/models/gemma/convert_gemma_weights_to_hf.py
src/transformers/models/gemma/modeling_flax_gemma.py
src/transformers/models/gemma/modeling_gemma.py
src/transformers/models/git/configuration_git.py
src/transformers/models/git/convert_git_to_pytorch.py
src/transformers/models/glpn/configuration_glpn.py
src/transformers/models/glpn/convert_glpn_to_pytorch.py
src/transformers/models/gpt2/CONVERSION.md
src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py
src/transformers/models/gpt2/modeling_flax_gpt2.py
src/transformers/models/gpt2/modeling_tf_gpt2.py
src/transformers/models/gpt_bigcode/configuration_gpt_bigcode.py
src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py
src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py
src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py
src/transformers/models/gpt_neo/modeling_gpt_neo.py
src/transformers/models/gpt_neox/modeling_gpt_neox.py
src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py
src/transformers/models/gpt_sw3/convert_megatron_to_pytorch.py
src/transformers/models/gptj/configuration_gptj.py
src/transformers/models/gptj/modeling_flax_gptj.py
src/transformers/models/gptj/modeling_tf_gptj.py
src/transformers/models/gptsan_japanese/configuration_gptsan_japanese.py
src/transformers/models/gptsan_japanese/convert_gptsan_tf_checkpoint_to_pytorch.py
src/transformers/models/gptsan_japanese/modeling_gptsan_japanese.py
src/transformers/models/graphormer/collating_graphormer.py
src/transformers/models/graphormer/configuration_graphormer.py
src/transformers/models/graphormer/modeling_graphormer.py
src/transformers/models/groupvit/configuration_groupvit.py
src/transformers/models/groupvit/convert_groupvit_nvlab_to_hf.py
src/transformers/models/hubert/configuration_hubert.py
src/transformers/models/hubert/convert_distilhubert_original_s3prl_checkpoint_to_pytorch.py
src/transformers/models/hubert/convert_hubert_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/hubert/convert_hubert_original_s3prl_checkpoint_to_pytorch.py
src/transformers/models/hubert/modeling_tf_hubert.py
src/transformers/models/ibert/configuration_ibert.py
src/transformers/models/ibert/modeling_ibert.py
src/transformers/models/ibert/quant_modules.py
src/transformers/models/idefics/configuration_idefics.py
src/transformers/models/idefics/image_processing_idefics.py
src/transformers/models/idefics/modeling_idefics.py
src/transformers/models/idefics/perceiver.py
src/transformers/models/idefics/processing_idefics.py
src/transformers/models/idefics/vision.py
src/transformers/models/imagegpt/convert_imagegpt_original_tf2_to_pytorch.py
src/transformers/models/informer/configuration_informer.py
src/transformers/models/informer/modeling_informer.py
src/transformers/models/instructblip/configuration_instructblip.py
src/transformers/models/instructblip/convert_instructblip_original_to_pytorch.py
src/transformers/models/instructblip/modeling_instructblip.py
src/transformers/models/instructblip/processing_instructblip.py
src/transformers/models/jamba/configuration_jamba.py
src/transformers/models/jamba/modeling_jamba.py
src/transformers/models/jukebox/configuration_jukebox.py
src/transformers/models/jukebox/convert_jukebox.py
src/transformers/models/jukebox/modeling_jukebox.py
src/transformers/models/kosmos2/convert_kosmos2_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/led/configuration_led.py
src/transformers/models/led/modeling_led.py
src/transformers/models/led/modeling_tf_led.py
src/transformers/models/levit/convert_levit_timm_to_pytorch.py
src/transformers/models/levit/modeling_levit.py
src/transformers/models/lilt/configuration_lilt.py
src/transformers/models/llama/configuration_llama.py
src/transformers/models/llama/convert_llama_weights_to_hf.py
src/transformers/models/llama/modeling_llama.py
src/transformers/models/llava/configuration_llava.py
src/transformers/models/llava/modeling_llava.py
src/transformers/models/llava_next/configuration_llava_next.py
src/transformers/models/llava_next/modeling_llava_next.py
src/transformers/models/longformer/configuration_longformer.py
src/transformers/models/longformer/convert_longformer_original_pytorch_lightning_to_pytorch.py
src/transformers/models/longt5/configuration_longt5.py
src/transformers/models/longt5/convert_longt5x_checkpoint_to_flax.py
src/transformers/models/longt5/modeling_flax_longt5.py
src/transformers/models/luke/configuration_luke.py
src/transformers/models/luke/convert_luke_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/luke/modeling_luke.py
src/transformers/models/lxmert/configuration_lxmert.py
src/transformers/models/lxmert/convert_lxmert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/lxmert/modeling_lxmert.py
src/transformers/models/lxmert/modeling_tf_lxmert.py
src/transformers/models/m2m_100/convert_m2m100_original_checkpoint_to_pytorch.py
src/transformers/models/m2m_100/modeling_m2m_100.py
src/transformers/models/marian/configuration_marian.py
src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py
src/transformers/models/marian/convert_marian_to_pytorch.py
src/transformers/models/marian/modeling_flax_marian.py
src/transformers/models/marian/modeling_tf_marian.py
src/transformers/models/markuplm/configuration_markuplm.py
src/transformers/models/markuplm/feature_extraction_markuplm.py
src/transformers/models/mask2former/convert_mask2former_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/maskformer/configuration_maskformer_swin.py
src/transformers/models/maskformer/convert_maskformer_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/maskformer/convert_maskformer_resnet_to_pytorch.py
src/transformers/models/maskformer/convert_maskformer_swin_to_pytorch.py
src/transformers/models/maskformer/modeling_maskformer_swin.py
src/transformers/models/mbart/convert_mbart_original_checkpoint_to_pytorch.py
src/transformers/models/mbart/modeling_flax_mbart.py
src/transformers/models/mega/configuration_mega.py
src/transformers/models/mega/convert_mega_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/mega/modeling_mega.py
src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py
src/transformers/models/megatron_bert/modeling_megatron_bert.py
src/transformers/models/megatron_gpt2/checkpoint_reshaping_and_interoperability.py
src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py
src/transformers/models/mgp_str/configuration_mgp_str.py
src/transformers/models/mgp_str/modeling_mgp_str.py
src/transformers/models/mistral/configuration_mistral.py
src/transformers/models/mistral/modeling_mistral.py
src/transformers/models/mixtral/configuration_mixtral.py
src/transformers/models/mixtral/modeling_mixtral.py
src/transformers/models/mluke/convert_mluke_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/mobilebert/convert_mobilebert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/mobilenet_v1/configuration_mobilenet_v1.py
src/transformers/models/mobilenet_v1/convert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/mobilenet_v2/configuration_mobilenet_v2.py
src/transformers/models/mobilenet_v2/convert_original_tf_checkpoint_to_pytorch.py
src/transformers/models/mobilevit/configuration_mobilevit.py
src/transformers/models/mobilevit/convert_mlcvnets_to_pytorch.py
src/transformers/models/mobilevitv2/convert_mlcvnets_to_pytorch.py
src/transformers/models/mpnet/configuration_mpnet.py
src/transformers/models/mpnet/modeling_mpnet.py
src/transformers/models/mpnet/modeling_tf_mpnet.py
src/transformers/models/mpt/configuration_mpt.py
src/transformers/models/mpt/modeling_mpt.py
src/transformers/models/mra/configuration_mra.py
src/transformers/models/mra/convert_mra_pytorch_to_pytorch.py
src/transformers/models/mra/modeling_mra.py
src/transformers/models/mt5/configuration_mt5.py
src/transformers/models/mt5/modeling_flax_mt5.py
src/transformers/models/mt5/modeling_mt5.py
src/transformers/models/mt5/modeling_tf_mt5.py
src/transformers/models/musicgen/convert_musicgen_transformers.py
src/transformers/models/musicgen_melody/convert_musicgen_melody_transformers.py
src/transformers/models/mvp/modeling_mvp.py
src/transformers/models/nezha/modeling_nezha.py
src/transformers/models/nllb_moe/configuration_nllb_moe.py
src/transformers/models/nllb_moe/convert_nllb_moe_sharded_original_checkpoint_to_pytorch.py
src/transformers/models/nllb_moe/modeling_nllb_moe.py
src/transformers/models/nougat/convert_nougat_to_hf.py
src/transformers/models/nystromformer/configuration_nystromformer.py
src/transformers/models/nystromformer/convert_nystromformer_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/nystromformer/modeling_nystromformer.py
src/transformers/models/oneformer/convert_to_hf_oneformer.py
src/transformers/models/openai/convert_openai_original_tf_checkpoint_to_pytorch.py
src/transformers/models/openai/modeling_openai.py
src/transformers/models/openai/modeling_tf_openai.py
src/transformers/models/opt/convert_opt_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/opt/modeling_flax_opt.py
src/transformers/models/owlvit/configuration_owlvit.py
src/transformers/models/owlvit/convert_owlvit_original_flax_to_hf.py
src/transformers/models/pegasus/convert_pegasus_tf_to_pytorch.py
src/transformers/models/pegasus/modeling_flax_pegasus.py
src/transformers/models/pegasus/modeling_tf_pegasus.py
src/transformers/models/pegasus_x/modeling_pegasus_x.py
src/transformers/models/perceiver/configuration_perceiver.py
src/transformers/models/perceiver/convert_perceiver_haiku_to_pytorch.py
src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py
src/transformers/models/persimmon/modeling_persimmon.py
src/transformers/models/pix2struct/configuration_pix2struct.py
src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py
src/transformers/models/pix2struct/image_processing_pix2struct.py
src/transformers/models/pix2struct/processing_pix2struct.py
src/transformers/models/plbart/convert_plbart_original_checkpoint_to_torch.py
src/transformers/models/poolformer/convert_poolformer_original_to_pytorch.py
src/transformers/models/pop2piano/convert_pop2piano_weights_to_hf.py
src/transformers/models/pop2piano/feature_extraction_pop2piano.py
src/transformers/models/pop2piano/processing_pop2piano.py
src/transformers/models/pop2piano/tokenization_pop2piano.py
src/transformers/models/prophetnet/configuration_prophetnet.py
src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/prophetnet/modeling_prophetnet.py
src/transformers/models/pvt/configuration_pvt.py
src/transformers/models/pvt/convert_pvt_to_pytorch.py
src/transformers/models/pvt/image_processing_pvt.py
src/transformers/models/pvt/modeling_pvt.py
src/transformers/models/qdqbert/configuration_qdqbert.py
src/transformers/models/qdqbert/modeling_qdqbert.py
src/transformers/models/qwen2/configuration_qwen2.py
src/transformers/models/qwen2/modeling_qwen2.py
src/transformers/models/qwen2/tokenization_qwen2.py
src/transformers/models/qwen2/tokenization_qwen2_fast.py
src/transformers/models/qwen2_moe/configuration_qwen2_moe.py
src/transformers/models/qwen2_moe/modeling_qwen2_moe.py
src/transformers/models/rag/configuration_rag.py
src/transformers/models/rag/modeling_rag.py
src/transformers/models/rag/modeling_tf_rag.py
src/transformers/models/rag/retrieval_rag.py
src/transformers/models/realm/modeling_realm.py
src/transformers/models/realm/retrieval_realm.py
src/transformers/models/recurrent_gemma/modeling_recurrent_gemma.py
src/transformers/models/reformer/convert_reformer_trax_checkpoint_to_pytorch.py
src/transformers/models/regnet/configuration_regnet.py
src/transformers/models/regnet/convert_regnet_seer_10b_to_pytorch.py
src/transformers/models/regnet/convert_regnet_to_pytorch.py
src/transformers/models/regnet/modeling_flax_regnet.py
src/transformers/models/rembert/configuration_rembert.py
src/transformers/models/rembert/convert_rembert_tf_checkpoint_to_pytorch.py
src/transformers/models/rembert/modeling_rembert.py
src/transformers/models/rembert/modeling_tf_rembert.py
src/transformers/models/resnet/convert_resnet_to_pytorch.py
src/transformers/models/resnet/modeling_flax_resnet.py
src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/roberta/modeling_flax_roberta.py
src/transformers/models/roberta_prelayernorm/convert_roberta_prelayernorm_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/roberta_prelayernorm/modeling_flax_roberta_prelayernorm.py
src/transformers/models/roc_bert/configuration_roc_bert.py
src/transformers/models/roformer/convert_roformer_original_tf_checkpoint_to_pytorch.py
src/transformers/models/roformer/modeling_flax_roformer.py
src/transformers/models/roformer/modeling_roformer.py
src/transformers/models/roformer/modeling_tf_roformer.py
src/transformers/models/rwkv/configuration_rwkv.py
src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py
src/transformers/models/rwkv/modeling_rwkv.py
src/transformers/models/sam/configuration_sam.py
src/transformers/models/sam/convert_sam_to_hf.py
src/transformers/models/sam/image_processing_sam.py
src/transformers/models/sam/modeling_sam.py
src/transformers/models/sam/modeling_tf_sam.py
src/transformers/models/sam/processing_sam.py
src/transformers/models/seamless_m4t/convert_fairseq2_to_hf.py
src/transformers/models/seamless_m4t_v2/convert_fairseq2_to_hf.py
src/transformers/models/segformer/configuration_segformer.py
src/transformers/models/segformer/convert_segformer_original_to_pytorch.py
src/transformers/models/sew/convert_sew_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/sew_d/convert_sew_d_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/speech_encoder_decoder/configuration_speech_encoder_decoder.py
src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py
src/transformers/models/speech_encoder_decoder/convert_speech_to_text_wav2vec2_seq2seq_original_to_pytorch.py
src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py
src/transformers/models/speech_to_text/convert_s2t_fairseq_to_tfms.py
src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py
src/transformers/models/speecht5/configuration_speecht5.py
src/transformers/models/speecht5/convert_hifigan.py
src/transformers/models/speecht5/convert_speecht5_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/speecht5/number_normalizer.py
src/transformers/models/splinter/configuration_splinter.py
src/transformers/models/splinter/modeling_splinter.py
src/transformers/models/squeezebert/modeling_squeezebert.py
src/transformers/models/stablelm/modeling_stablelm.py
src/transformers/models/starcoder2/modeling_starcoder2.py
src/transformers/models/swiftformer/configuration_swiftformer.py
src/transformers/models/swiftformer/convert_swiftformer_original_to_hf.py
src/transformers/models/swiftformer/modeling_swiftformer.py
src/transformers/models/swin/convert_swin_simmim_to_pytorch.py
src/transformers/models/swin/convert_swin_timm_to_pytorch.py
src/transformers/models/swin/modeling_tf_swin.py
src/transformers/models/swin2sr/configuration_swin2sr.py
src/transformers/models/swin2sr/convert_swin2sr_original_to_pytorch.py
src/transformers/models/swinv2/convert_swinv2_timm_to_pytorch.py
src/transformers/models/swinv2/modeling_swinv2.py
src/transformers/models/switch_transformers/configuration_switch_transformers.py
src/transformers/models/switch_transformers/convert_big_switch.py
src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py
src/transformers/models/switch_transformers/modeling_switch_transformers.py
src/transformers/models/t5/configuration_t5.py
src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py
src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py
src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py
src/transformers/models/t5/modeling_flax_t5.py
src/transformers/models/t5/modeling_t5.py
src/transformers/models/t5/modeling_tf_t5.py
src/transformers/models/table_transformer/configuration_table_transformer.py
src/transformers/models/table_transformer/convert_table_transformer_to_hf.py
src/transformers/models/table_transformer/convert_table_transformer_to_hf_no_timm.py
src/transformers/models/tapas/configuration_tapas.py
src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py
src/transformers/models/tapas/modeling_tapas.py
src/transformers/models/tapas/modeling_tf_tapas.py
src/transformers/models/timesformer/convert_timesformer_to_pytorch.py
src/transformers/models/timm_backbone/configuration_timm_backbone.py
src/transformers/models/timm_backbone/modeling_timm_backbone.py
src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py
src/transformers/models/tvlt/configuration_tvlt.py
src/transformers/models/tvlt/modeling_tvlt.py
src/transformers/models/umt5/configuration_umt5.py
src/transformers/models/umt5/convert_umt5_checkpoint_to_pytorch.py
src/transformers/models/umt5/modeling_umt5.py
src/transformers/models/unispeech/convert_unispeech_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/unispeech_sat/configuration_unispeech_sat.py
src/transformers/models/unispeech_sat/convert_unispeech_original_s3prl_checkpoint_to_pytorch.py
src/transformers/models/unispeech_sat/convert_unispeech_sat_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/upernet/configuration_upernet.py
src/transformers/models/upernet/convert_convnext_upernet_to_pytorch.py
src/transformers/models/upernet/convert_swin_upernet_to_pytorch.py
src/transformers/models/videomae/configuration_videomae.py
src/transformers/models/videomae/convert_videomae_to_pytorch.py
src/transformers/models/vilt/configuration_vilt.py
src/transformers/models/vilt/convert_vilt_original_to_pytorch.py
src/transformers/models/vipllava/configuration_vipllava.py
src/transformers/models/vipllava/modeling_vipllava.py
src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py
src/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py
src/transformers/models/vision_text_dual_encoder/modeling_flax_vision_text_dual_encoder.py
src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py
src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/visual_bert/modeling_visual_bert.py
src/transformers/models/vit/convert_dino_to_pytorch.py
src/transformers/models/vit/convert_vit_timm_to_pytorch.py
src/transformers/models/vit/modeling_flax_vit.py
src/transformers/models/vit_hybrid/configuration_vit_hybrid.py
src/transformers/models/vit_hybrid/convert_vit_hybrid_timm_to_pytorch.py
src/transformers/models/vit_hybrid/modeling_vit_hybrid.py
src/transformers/models/vit_mae/convert_vit_mae_to_pytorch.py
src/transformers/models/vit_mae/modeling_tf_vit_mae.py
src/transformers/models/vit_msn/configuration_vit_msn.py
src/transformers/models/vit_msn/convert_msn_to_pytorch.py
src/transformers/models/vivit/configuration_vivit.py
src/transformers/models/vivit/convert_vivit_flax_to_pytorch.py
src/transformers/models/vivit/image_processing_vivit.py
src/transformers/models/vivit/modeling_vivit.py
src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/wav2vec2/convert_wav2vec2_original_s3prl_checkpoint_to_pytorch.py
src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py
src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py
src/transformers/models/wav2vec2_bert/convert_wav2vec2_seamless_checkpoint.py
src/transformers/models/wav2vec2_conformer/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/wavlm/convert_wavlm_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py
src/transformers/models/whisper/convert_openai_to_hf.py
src/transformers/models/whisper/english_normalizer.py
src/transformers/models/whisper/modeling_flax_whisper.py
src/transformers/models/x_clip/configuration_x_clip.py
src/transformers/models/x_clip/convert_x_clip_original_pytorch_to_hf.py
src/transformers/models/xglm/configuration_xglm.py
src/transformers/models/xglm/convert_xglm_original_ckpt_to_trfms.py
src/transformers/models/xglm/modeling_flax_xglm.py
src/transformers/models/xglm/modeling_tf_xglm.py
src/transformers/models/xglm/modeling_xglm.py
src/transformers/models/xlm/convert_xlm_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/xlm/modeling_tf_xlm.py
src/transformers/models/xlm/modeling_xlm.py
src/transformers/models/xlm_prophetnet/configuration_xlm_prophetnet.py
src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py
src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py
src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py
src/transformers/models/xlm_roberta/modeling_xlm_roberta.py
src/transformers/models/xlm_roberta_xl/convert_xlm_roberta_xl_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/xlm_roberta_xl/modeling_xlm_roberta_xl.py
src/transformers/models/xlnet/convert_xlnet_original_tf_checkpoint_to_pytorch.py
src/transformers/models/xlnet/modeling_tf_xlnet.py
src/transformers/models/xlnet/modeling_xlnet.py
src/transformers/models/xmod/convert_xmod_original_pytorch_checkpoint_to_pytorch.py
src/transformers/models/yolos/convert_yolos_to_pytorch.py
src/transformers/models/yoso/convert_yoso_pytorch_to_pytorch.py
src/transformers/models/yoso/modeling_yoso.py
src/transformers/onnx/__main__.py
src/transformers/onnx/config.py
src/transformers/onnx/convert.py
src/transformers/onnx/features.py
src/transformers/onnx/utils.py
src/transformers/optimization.py
src/transformers/optimization_tf.py
src/transformers/pipelines/audio_classification.py
src/transformers/pipelines/audio_utils.py
src/transformers/pipelines/automatic_speech_recognition.py
src/transformers/pipelines/base.py
src/transformers/pipelines/conversational.py
src/transformers/pipelines/depth_estimation.py
src/transformers/pipelines/document_question_answering.py
src/transformers/pipelines/feature_extraction.py
src/transformers/pipelines/fill_mask.py
src/transformers/pipelines/image_classification.py
src/transformers/pipelines/image_segmentation.py
src/transformers/pipelines/image_to_text.py
src/transformers/pipelines/mask_generation.py
src/transformers/pipelines/object_detection.py
src/transformers/pipelines/pt_utils.py
src/transformers/pipelines/question_answering.py
src/transformers/pipelines/table_question_answering.py
src/transformers/pipelines/text_classification.py
src/transformers/pipelines/token_classification.py
src/transformers/pipelines/video_classification.py
src/transformers/pipelines/visual_question_answering.py
src/transformers/pipelines/zero_shot_audio_classification.py
src/transformers/pipelines/zero_shot_classification.py
src/transformers/pipelines/zero_shot_image_classification.py
src/transformers/pipelines/zero_shot_object_detection.py
src/transformers/processing_utils.py
src/transformers/pytorch_utils.py
src/transformers/quantizers/auto.py
src/transformers/quantizers/base.py
src/transformers/quantizers/quantizer_awq.py
src/transformers/quantizers/quantizer_bnb_4bit.py
src/transformers/quantizers/quantizer_bnb_8bit.py
src/transformers/quantizers/quantizer_gptq.py
src/transformers/quantizers/quantizers_utils.py
src/transformers/sagemaker/trainer_sm.py
src/transformers/sagemaker/training_args_sm.py
src/transformers/testing_utils.py
src/transformers/tf_utils.py
src/transformers/time_series_utils.py
src/transformers/tokenization_utils.py
src/transformers/tokenization_utils_base.py
src/transformers/tokenization_utils_fast.py
src/transformers/tools/agent_types.py
src/transformers/tools/agents.py
src/transformers/tools/base.py
src/transformers/tools/document_question_answering.py
src/transformers/tools/evaluate_agent.py
src/transformers/tools/image_captioning.py
src/transformers/tools/image_question_answering.py
src/transformers/tools/image_segmentation.py
src/transformers/tools/prompts.py
src/transformers/tools/python_interpreter.py
src/transformers/tools/speech_to_text.py
src/transformers/tools/text_classification.py
src/transformers/tools/text_question_answering.py
src/transformers/tools/text_summarization.py
src/transformers/tools/text_to_speech.py
src/transformers/tools/translation.py
src/transformers/trainer.py
src/transformers/trainer_callback.py
src/transformers/trainer_pt_utils.py
src/transformers/trainer_seq2seq.py
src/transformers/trainer_utils.py
src/transformers/training_args.py
src/transformers/training_args_seq2seq.py
src/transformers/training_args_tf.py
src/transformers/utils/backbone_utils.py
src/transformers/utils/bitsandbytes.py
src/transformers/utils/constants.py
src/transformers/utils/doc.py
src/transformers/utils/dummy_detectron2_objects.py
src/transformers/utils/dummy_essentia_and_librosa_and_pretty_midi_and_scipy_and_torch_objects.py
src/transformers/utils/dummy_flax_objects.py
src/transformers/utils/dummy_keras_nlp_objects.py
src/transformers/utils/dummy_music_objects.py
src/transformers/utils/dummy_pt_objects.py
src/transformers/utils/dummy_sentencepiece_and_tokenizers_objects.py
src/transformers/utils/dummy_sentencepiece_objects.py
src/transformers/utils/dummy_speech_objects.py
src/transformers/utils/dummy_tensorflow_text_objects.py
src/transformers/utils/dummy_tf_objects.py
src/transformers/utils/dummy_tokenizers_objects.py
src/transformers/utils/dummy_vision_objects.py
src/transformers/utils/fx.py
src/transformers/utils/generic.py
src/transformers/utils/hp_naming.py
src/transformers/utils/hub.py
src/transformers/utils/import_utils.py
src/transformers/utils/logging.py
src/transformers/utils/model_parallel_utils.py
src/transformers/utils/notebook.py
src/transformers/utils/peft_utils.py
src/transformers/utils/quantization_config.py
src/transformers/utils/sentencepiece_model_pb2.py
src/transformers/utils/sentencepiece_model_pb2_new.py
src/transformers/utils/versions.py
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_build.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import importlib
from pathlib import Path
# Test all the extensions added in the setup
FILES_TO_FIND = [
"kernels/rwkv/wkv_cuda.cu",
"kernels/rwkv/wkv_op.cpp",
"kernels/deformable_detr/ms_deform_attn.h",
"kernels/deformable_detr/cuda/ms_deform_im2col_cuda.cuh",
"models/graphormer/algos_graphormer.pyx",
]
def test_custom_files_are_present(transformers_path):
# Test all the extensions added in the setup
for file in FILES_TO_FIND:
if not (transformers_path / file).exists():
return False
return True
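# Illustrative usage (not part of the original file): `python utils/check_build.py` checks the
# `build/lib/transformers` folder produced by a source build (e.g. `python setup.py build`),
# while `python utils/check_build.py --check_lib` runs the same check against the installed
# `transformers` package instead.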
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--check_lib", action="store_true", help="Whether to check the build or the actual package.")
args = parser.parse_args()
if args.check_lib:
transformers_module = importlib.import_module("transformers")
transformers_path = Path(transformers_module.__file__).parent
else:
transformers_path = Path.cwd() / "build/lib/transformers"
if not test_custom_files_are_present(transformers_path):
raise ValueError("The built release does not contain the custom files. Fix this before going further!")
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/release.py | # coding=utf-8
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that prepares the repository for releases (or patches) by updating all versions in the relevant places. It
also performs some post-release cleanup, updating the links to model doc pages in the main README so that they point
to the stable docs instead of main.
To prepare for a release, run the following from the root of the repo on the release branch:
```bash
python release.py
```
or use `make pre-release`.
To prepare for a patch release, run the following from the root of the repo on the release branch:
```bash
python release.py --patch
```
or use `make pre-patch`.
To do the post-release cleanup, run the following from the root of the repo on the main branch:
```bash
python release.py --post_release
```
or use `make post-release`.
"""
import argparse
import os
import re
import packaging.version
# All paths are defined with the intent that this script should be run from the root of the repo.
PATH_TO_EXAMPLES = "examples/"
# This maps a type of file to the pattern to look for when searching where the version is defined, as well as the
# template to follow when replacing it with the new version.
REPLACE_PATTERNS = {
"examples": (re.compile(r'^check_min_version\("[^"]+"\)\s*$', re.MULTILINE), 'check_min_version("VERSION")\n'),
"init": (re.compile(r'^__version__\s+=\s+"([^"]+)"\s*$', re.MULTILINE), '__version__ = "VERSION"\n'),
"setup": (re.compile(r'^(\s*)version\s*=\s*"[^"]+",', re.MULTILINE), r'\1version="VERSION",'),
}
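# For illustration (the version strings below are hypothetical), the "init" pattern turns a line such as
#   __version__ = "4.40.0.dev0"
# into
#   __version__ = "4.41.0"
# once "VERSION" in the template has been substituted with the new version.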
# This maps a type of file to its path in Transformers
REPLACE_FILES = {
"init": "src/transformers/__init__.py",
"setup": "setup.py",
}
README_FILE = "README.md"
def update_version_in_file(fname: str, version: str, file_type: str):
"""
Update the version of Transformers in one file.
Args:
fname (`str`): The path to the file where we want to update the version.
version (`str`): The new version to set in the file.
file_type (`str`): The type of the file (should be a key in `REPLACE_PATTERNS`).
"""
with open(fname, "r", encoding="utf-8", newline="\n") as f:
code = f.read()
re_pattern, replace = REPLACE_PATTERNS[file_type]
replace = replace.replace("VERSION", version)
code = re_pattern.sub(replace, code)
with open(fname, "w", encoding="utf-8", newline="\n") as f:
f.write(code)
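# Illustrative call (hypothetical file and version), to show how the helper above is used:
#   update_version_in_file("setup.py", "4.41.0", file_type="setup")
# rewrites the `version="...",` line of setup.py to `version="4.41.0",` in place.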
def update_version_in_examples(version: str):
"""
Update the version in all examples files.
Args:
version (`str`): The new version to set in the examples.
"""
for folder, directories, fnames in os.walk(PATH_TO_EXAMPLES):
# Removing some of the folders with non-actively maintained examples from the walk
if "research_projects" in directories:
directories.remove("research_projects")
if "legacy" in directories:
directories.remove("legacy")
for fname in fnames:
if fname.endswith(".py"):
update_version_in_file(os.path.join(folder, fname), version, file_type="examples")
def global_version_update(version: str, patch: bool = False):
"""
Update the version in all needed files.
Args:
version (`str`): The new version to set everywhere.
patch (`bool`, *optional*, defaults to `False`): Whether or not this is a patch release.
"""
for pattern, fname in REPLACE_FILES.items():
update_version_in_file(fname, version, pattern)
if not patch:
# We don't update the version in the examples for patch releases.
update_version_in_examples(version)
def clean_main_ref_in_model_list():
"""
Replace the links from main doc to stable doc in the model list of the README.
"""
# If the introduction or the conclusion of the list change, the prompts may need to be updated.
_start_prompt = "๐ค Transformers currently provides the following architectures"
_end_prompt = "1. Want to contribute a new model?"
with open(README_FILE, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
# Find the start of the list.
start_index = 0
while not lines[start_index].startswith(_start_prompt):
start_index += 1
start_index += 1
index = start_index
# Update the lines in the model list.
while not lines[index].startswith(_end_prompt):
if lines[index].startswith("1."):
lines[index] = lines[index].replace(
"https://huggingface.co/docs/transformers/main/model_doc",
"https://huggingface.co/docs/transformers/model_doc",
)
index += 1
with open(README_FILE, "w", encoding="utf-8", newline="\n") as f:
f.writelines(lines)
def get_version() -> packaging.version.Version:
"""
Reads the current version in the main __init__.
"""
with open(REPLACE_FILES["init"], "r") as f:
code = f.read()
default_version = REPLACE_PATTERNS["init"][0].search(code).groups()[0]
return packaging.version.parse(default_version)
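# For instance (hypothetical content), if `src/transformers/__init__.py` contains `__version__ = "4.41.0.dev0"`,
# this returns packaging.version.parse("4.41.0.dev0"), whose `.is_devrelease` attribute is True.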
def pre_release_work(patch: bool = False):
"""
Do all the necessary pre-release steps:
- figure out the next minor release version and ask confirmation
    - update the version everywhere
- clean-up the model list in the main README
Args:
patch (`bool`, *optional*, defaults to `False`): Whether or not this is a patch release.
"""
# First let's get the default version: base version if we are in dev, bump minor otherwise.
default_version = get_version()
if patch and default_version.is_devrelease:
raise ValueError("Can't create a patch version from the dev branch, checkout a released version!")
if default_version.is_devrelease:
default_version = default_version.base_version
elif patch:
default_version = f"{default_version.major}.{default_version.minor}.{default_version.micro + 1}"
else:
default_version = f"{default_version.major}.{default_version.minor + 1}.0"
# Now let's ask nicely if we have found the right version.
version = input(f"Which version are you releasing? [{default_version}]")
if len(version) == 0:
version = default_version
print(f"Updating version to {version}.")
global_version_update(version, patch=patch)
if not patch:
print("Cleaning main README, don't forget to run `make fix-copies`.")
clean_main_ref_in_model_list()
def post_release_work():
"""
    Do all the necessary post-release steps:
    - figure out the next dev version and ask confirmation
    - update the version everywhere
- clean-up the model list in the main README
"""
# First let's get the current version
current_version = get_version()
dev_version = f"{current_version.major}.{current_version.minor + 1}.0.dev0"
current_version = current_version.base_version
# Check with the user we got that right.
version = input(f"Which version are we developing now? [{dev_version}]")
if len(version) == 0:
version = dev_version
print(f"Updating version to {version}.")
global_version_update(version)
print("Cleaning main README, don't forget to run `make fix-copies`.")
clean_main_ref_in_model_list()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--post_release", action="store_true", help="Whether this is pre or post release.")
parser.add_argument("--patch", action="store_true", help="Whether or not this is a patch release.")
args = parser.parse_args()
if not args.post_release:
pre_release_work(patch=args.patch)
elif args.patch:
print("Nothing to do after a patch :-)")
else:
post_release_work()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/add_pipeline_model_mapping_to_test.py | # coding=utf-8
# Copyright 2023 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A script to add and/or update the attribute `pipeline_model_mapping` in model test files.
This script will be (mostly) used in the following 2 situations:
- run within a (scheduled) CI job to:
- check if model test files in the library have updated `pipeline_model_mapping`,
- and/or update test files and (possibly) open a GitHub pull request automatically
- being run by a `transformers` member to quickly check and update some particular test file(s)
This script is **NOT** intended to be run (manually) by community contributors.
"""
import argparse
import glob
import inspect
import os
import re
import unittest
from get_test_info import get_test_classes
from tests.test_pipeline_mixin import pipeline_test_mapping
PIPELINE_TEST_MAPPING = {}
for task, _ in pipeline_test_mapping.items():
PIPELINE_TEST_MAPPING[task] = {"pt": None, "tf": None}
# DO **NOT** add item to this set (unless the reason is approved)
TEST_FILE_TO_IGNORE = {
"tests/models/esm/test_modeling_esmfold.py", # The pipeline test mapping is added to `test_modeling_esm.py`
}
def get_framework(test_class):
"""Infer the framework from the test class `test_class`."""
if "ModelTesterMixin" in [x.__name__ for x in test_class.__bases__]:
return "pt"
elif "TFModelTesterMixin" in [x.__name__ for x in test_class.__bases__]:
return "tf"
elif "FlaxModelTesterMixin" in [x.__name__ for x in test_class.__bases__]:
return "flax"
else:
return None
def get_mapping_for_task(task, framework):
"""Get mappings defined in `XXXPipelineTests` for the task `task`."""
# Use the cached results
if PIPELINE_TEST_MAPPING[task].get(framework, None) is not None:
return PIPELINE_TEST_MAPPING[task][framework]
pipeline_test_class = pipeline_test_mapping[task]["test"]
mapping = None
if framework == "pt":
mapping = getattr(pipeline_test_class, "model_mapping", None)
elif framework == "tf":
mapping = getattr(pipeline_test_class, "tf_model_mapping", None)
if mapping is not None:
mapping = dict(mapping.items())
# cache the results
PIPELINE_TEST_MAPPING[task][framework] = mapping
return mapping
def get_model_for_pipeline_test(test_class, task):
"""Get the model architecture(s) related to the test class `test_class` for a pipeline `task`."""
framework = get_framework(test_class)
if framework is None:
return None
mapping = get_mapping_for_task(task, framework)
if mapping is None:
return None
config_classes = list({model_class.config_class for model_class in test_class.all_model_classes})
if len(config_classes) != 1:
raise ValueError("There should be exactly one configuration class from `test_class.all_model_classes`.")
# This could be a list/tuple of model classes, but it's rare.
model_class = mapping.get(config_classes[0], None)
if isinstance(model_class, (tuple, list)):
model_class = sorted(model_class, key=lambda x: x.__name__)
return model_class
def get_pipeline_model_mapping(test_class):
"""Get `pipeline_model_mapping` for `test_class`."""
mapping = [(task, get_model_for_pipeline_test(test_class, task)) for task in pipeline_test_mapping]
mapping = sorted([(task, model) for task, model in mapping if model is not None], key=lambda x: x[0])
return dict(mapping)
def get_pipeline_model_mapping_string(test_class):
"""Get `pipeline_model_mapping` for `test_class` as a string (to be added to the test file).
This will be a 1-line string. After this is added to a test file, `make style` will format it beautifully.
"""
framework = get_framework(test_class)
if framework == "pt":
framework = "torch"
default_value = "{}"
mapping = get_pipeline_model_mapping(test_class)
if len(mapping) == 0:
return ""
texts = []
for task, model_classes in mapping.items():
if isinstance(model_classes, (tuple, list)):
# A list/tuple of model classes
value = "(" + ", ".join([x.__name__ for x in model_classes]) + ")"
else:
# A single model class
value = model_classes.__name__
texts.append(f'"{task}": {value}')
text = "{" + ", ".join(texts) + "}"
text = f"pipeline_model_mapping = {text} if is_{framework}_available() else {default_value}"
return text
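# An illustrative output of the function above (the model names are hypothetical examples):
#   pipeline_model_mapping = {"feature-extraction": BertModel, "fill-mask": BertForMaskedLM} if is_torch_available() else {}
# `make style` will later reformat this single line once it is written to the test file.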
def is_valid_test_class(test_class):
"""Restrict to `XXXModelTesterMixin` and should be a subclass of `unittest.TestCase`."""
base_class_names = {"ModelTesterMixin", "TFModelTesterMixin", "FlaxModelTesterMixin"}
if not issubclass(test_class, unittest.TestCase):
return False
return len(base_class_names.intersection([x.__name__ for x in test_class.__bases__])) > 0
def find_test_class(test_file):
"""Find a test class in `test_file` to which we will add `pipeline_model_mapping`."""
test_classes = [x for x in get_test_classes(test_file) if is_valid_test_class(x)]
target_test_class = None
for test_class in test_classes:
# If a test class has defined `pipeline_model_mapping`, let's take it
if getattr(test_class, "pipeline_model_mapping", None) is not None:
target_test_class = test_class
break
# Take the test class with the shortest name (just a heuristic)
if target_test_class is None and len(test_classes) > 0:
target_test_class = sorted(test_classes, key=lambda x: (len(x.__name__), x.__name__))[0]
return target_test_class
def find_block_ending(lines, start_idx, indent_level):
end_idx = start_idx
for idx, line in enumerate(lines[start_idx:]):
indent = len(line) - len(line.lstrip())
if idx == 0 or indent > indent_level or (indent == indent_level and line.strip() == ")"):
end_idx = start_idx + idx
elif idx > 0 and indent <= indent_level:
# Outside the definition block of `pipeline_model_mapping`
break
return end_idx
def add_pipeline_model_mapping(test_class, overwrite=False):
"""Add `pipeline_model_mapping` to `test_class`."""
if getattr(test_class, "pipeline_model_mapping", None) is not None:
if not overwrite:
return "", -1
line_to_add = get_pipeline_model_mapping_string(test_class)
if len(line_to_add) == 0:
return "", -1
line_to_add = line_to_add + "\n"
    # Get the source code that defines the class `test_class`.
class_lines, class_start_line_no = inspect.getsourcelines(test_class)
# `inspect` gives the code for an object, including decorator(s) if any.
# We (only) need the exact line of the class definition.
for idx, line in enumerate(class_lines):
if line.lstrip().startswith("class "):
class_lines = class_lines[idx:]
class_start_line_no += idx
break
class_end_line_no = class_start_line_no + len(class_lines) - 1
# The index in `class_lines` that starts the definition of `all_model_classes`, `all_generative_model_classes` or
# `pipeline_model_mapping`. This assumes they are defined in such order, and we take the start index of the last
# block that appears in a `test_class`.
start_idx = None
# The indent level of the line at `class_lines[start_idx]` (if defined)
indent_level = 0
# To record if `pipeline_model_mapping` is found in `test_class`.
def_line = None
for idx, line in enumerate(class_lines):
if line.strip().startswith("all_model_classes = "):
indent_level = len(line) - len(line.lstrip())
start_idx = idx
elif line.strip().startswith("all_generative_model_classes = "):
indent_level = len(line) - len(line.lstrip())
start_idx = idx
elif line.strip().startswith("pipeline_model_mapping = "):
indent_level = len(line) - len(line.lstrip())
start_idx = idx
def_line = line
break
if start_idx is None:
return "", -1
# Find the ending index (inclusive) of the above found block.
end_idx = find_block_ending(class_lines, start_idx, indent_level)
# Extract `is_xxx_available()` from existing blocks: some models require specific libraries like `timm` and use
# `is_timm_available()` instead of `is_torch_available()`.
# Keep leading and trailing whitespaces
r = re.compile(r"\s(is_\S+?_available\(\))\s")
for line in class_lines[start_idx : end_idx + 1]:
backend_condition = r.search(line)
if backend_condition is not None:
            # Replace the leading and trailing whitespace with the space character " ".
target = " " + backend_condition[0][1:-1] + " "
line_to_add = r.sub(target, line_to_add)
break
if def_line is None:
# `pipeline_model_mapping` is not defined. The target index is set to the ending index (inclusive) of
# `all_model_classes` or `all_generative_model_classes`.
target_idx = end_idx
else:
# `pipeline_model_mapping` is defined. The target index is set to be one **BEFORE** its start index.
target_idx = start_idx - 1
# mark the lines of the currently existing `pipeline_model_mapping` to be removed.
for idx in range(start_idx, end_idx + 1):
# These lines are going to be removed before writing to the test file.
class_lines[idx] = None # noqa
# Make sure the test class is a subclass of `PipelineTesterMixin`.
parent_classes = [x.__name__ for x in test_class.__bases__]
if "PipelineTesterMixin" not in parent_classes:
# Put `PipelineTesterMixin` just before `unittest.TestCase`
_parent_classes = [x for x in parent_classes if x != "TestCase"] + ["PipelineTesterMixin"]
if "TestCase" in parent_classes:
            # Here we **assume** the original declaration always uses `unittest.TestCase`.
_parent_classes.append("unittest.TestCase")
parent_classes = ", ".join(_parent_classes)
for idx, line in enumerate(class_lines):
# Find the ending of the declaration of `test_class`
if line.strip().endswith("):"):
# mark the lines of the declaration of `test_class` to be removed
for _idx in range(idx + 1):
class_lines[_idx] = None # noqa
break
# Add the new, one-line, class declaration for `test_class`
class_lines[0] = f"class {test_class.__name__}({parent_classes}):\n"
# Add indentation
line_to_add = " " * indent_level + line_to_add
# Insert `pipeline_model_mapping` to `class_lines`.
# (The line at `target_idx` should be kept by definition!)
class_lines = class_lines[: target_idx + 1] + [line_to_add] + class_lines[target_idx + 1 :]
# Remove the lines that are marked to be removed
class_lines = [x for x in class_lines if x is not None]
# Move from test class to module (in order to write to the test file)
module_lines = inspect.getsourcelines(inspect.getmodule(test_class))[0]
# Be careful with the 1-off between line numbers and array indices
module_lines = module_lines[: class_start_line_no - 1] + class_lines + module_lines[class_end_line_no:]
code = "".join(module_lines)
    module_file = inspect.getsourcefile(test_class)
    with open(module_file, "w", encoding="UTF-8", newline="\n") as fp:
fp.write(code)
return line_to_add
def add_pipeline_model_mapping_to_test_file(test_file, overwrite=False):
"""Add `pipeline_model_mapping` to `test_file`."""
test_class = find_test_class(test_file)
if test_class:
add_pipeline_model_mapping(test_class, overwrite=overwrite)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--test_file", type=str, help="A path to the test file, starting with the repository's `tests` directory."
)
parser.add_argument(
"--all",
action="store_true",
help="If to check and modify all test files.",
)
parser.add_argument(
"--overwrite",
action="store_true",
help="If to overwrite a test class if it has already defined `pipeline_model_mapping`.",
)
args = parser.parse_args()
if not args.all and not args.test_file:
raise ValueError("Please specify either `test_file` or pass `--all` to check/modify all test files.")
elif args.all and args.test_file:
raise ValueError("Only one of `--test_file` and `--all` could be specified.")
test_files = []
if args.test_file:
test_files = [args.test_file]
else:
pattern = os.path.join("tests", "models", "**", "test_modeling_*.py")
for test_file in glob.glob(pattern):
# `Flax` is not concerned at this moment
            if not os.path.basename(test_file).startswith("test_modeling_flax_"):
test_files.append(test_file)
for test_file in test_files:
if test_file in TEST_FILE_TO_IGNORE:
print(f"[SKIPPED] {test_file} is skipped as it is in `TEST_FILE_TO_IGNORE` in the file {__file__}.")
continue
add_pipeline_model_mapping_to_test_file(test_file, overwrite=args.overwrite)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/update_metadata.py | # coding=utf-8
# Copyright 2021 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that updates the metadata of the Transformers library in the repository `huggingface/transformers-metadata`.
Usage for an update (as used by the GitHub action `update_metadata`):
```bash
python utils/update_metadata.py --token <token> --commit_sha <commit_sha>
```
Usage to check all pipelines are properly defined in the constant `PIPELINE_TAGS_AND_AUTO_MODELS` of this script, so
that new pipelines are properly added as metadata (as used in `make repo-consistency`):
```bash
python utils/update_metadata.py --check-only
```
"""
import argparse
import collections
import os
import re
import tempfile
from typing import Dict, List, Tuple
import pandas as pd
from datasets import Dataset
from huggingface_hub import hf_hub_download, upload_folder
from transformers.utils import direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/update_metadata.py
TRANSFORMERS_PATH = "src/transformers"
# This is to make sure the transformers module imported is the one in the repo.
transformers_module = direct_transformers_import(TRANSFORMERS_PATH)
# Regexes that match TF/Flax/PT model names.
_re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
_re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
# Will match any TF or Flax model too, so it needs to be in an else branch after the two previous regexes.
_re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)")
# Fill this with tuples (pipeline_tag, model_mapping, auto_model)
PIPELINE_TAGS_AND_AUTO_MODELS = [
("pretraining", "MODEL_FOR_PRETRAINING_MAPPING_NAMES", "AutoModelForPreTraining"),
("feature-extraction", "MODEL_MAPPING_NAMES", "AutoModel"),
("image-feature-extraction", "MODEL_FOR_IMAGE_MAPPING_NAMES", "AutoModel"),
("audio-classification", "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForAudioClassification"),
("text-generation", "MODEL_FOR_CAUSAL_LM_MAPPING_NAMES", "AutoModelForCausalLM"),
("automatic-speech-recognition", "MODEL_FOR_CTC_MAPPING_NAMES", "AutoModelForCTC"),
("image-classification", "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForImageClassification"),
("image-segmentation", "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES", "AutoModelForImageSegmentation"),
("image-to-image", "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES", "AutoModelForImageToImage"),
("fill-mask", "MODEL_FOR_MASKED_LM_MAPPING_NAMES", "AutoModelForMaskedLM"),
("object-detection", "MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES", "AutoModelForObjectDetection"),
(
"zero-shot-object-detection",
"MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES",
"AutoModelForZeroShotObjectDetection",
),
("question-answering", "MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForQuestionAnswering"),
("text2text-generation", "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES", "AutoModelForSeq2SeqLM"),
("text-classification", "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForSequenceClassification"),
("automatic-speech-recognition", "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES", "AutoModelForSpeechSeq2Seq"),
(
"table-question-answering",
"MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES",
"AutoModelForTableQuestionAnswering",
),
("token-classification", "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES", "AutoModelForTokenClassification"),
("multiple-choice", "MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES", "AutoModelForMultipleChoice"),
(
"next-sentence-prediction",
"MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES",
"AutoModelForNextSentencePrediction",
),
(
"audio-frame-classification",
"MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING_NAMES",
"AutoModelForAudioFrameClassification",
),
("audio-xvector", "MODEL_FOR_AUDIO_XVECTOR_MAPPING_NAMES", "AutoModelForAudioXVector"),
(
"document-question-answering",
"MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES",
"AutoModelForDocumentQuestionAnswering",
),
(
"visual-question-answering",
"MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES",
"AutoModelForVisualQuestionAnswering",
),
("image-to-text", "MODEL_FOR_FOR_VISION_2_SEQ_MAPPING_NAMES", "AutoModelForVision2Seq"),
(
"zero-shot-image-classification",
"MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES",
"AutoModelForZeroShotImageClassification",
),
("depth-estimation", "MODEL_FOR_DEPTH_ESTIMATION_MAPPING_NAMES", "AutoModelForDepthEstimation"),
("video-classification", "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForVideoClassification"),
("mask-generation", "MODEL_FOR_MASK_GENERATION_MAPPING_NAMES", "AutoModelForMaskGeneration"),
("text-to-audio", "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES", "AutoModelForTextToSpectrogram"),
("text-to-audio", "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES", "AutoModelForTextToWaveform"),
]
def camel_case_split(identifier: str) -> List[str]:
"""
Split a camel-cased name into words.
Args:
identifier (`str`): The camel-cased name to parse.
Returns:
        `List[str]`: The list of words in the identifier (as separated by capital letters).
Example:
```py
>>> camel_case_split("CamelCasedClass")
["Camel", "Cased", "Class"]
```
"""
# Regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python
matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier)
return [m.group(0) for m in matches]
def get_frameworks_table() -> pd.DataFrame:
"""
Generates a dataframe containing the supported auto classes for each model type, using the content of the auto
modules.
"""
    # Dictionary mapping model types to their config class names.
    config_mapping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES
    model_prefix_to_model_type = {
        config.replace("Config", ""): model_type for model_type, config in config_mapping_names.items()
}
# Dictionaries flagging if each model prefix has a backend in PT/TF/Flax.
pt_models = collections.defaultdict(bool)
tf_models = collections.defaultdict(bool)
flax_models = collections.defaultdict(bool)
    # Let's look through all transformers objects (once) and find if models are supported by a given backend.
for attr_name in dir(transformers_module):
lookup_dict = None
if _re_tf_models.match(attr_name) is not None:
lookup_dict = tf_models
attr_name = _re_tf_models.match(attr_name).groups()[0]
elif _re_flax_models.match(attr_name) is not None:
lookup_dict = flax_models
attr_name = _re_flax_models.match(attr_name).groups()[0]
elif _re_pt_models.match(attr_name) is not None:
lookup_dict = pt_models
attr_name = _re_pt_models.match(attr_name).groups()[0]
if lookup_dict is not None:
while len(attr_name) > 0:
if attr_name in model_prefix_to_model_type:
lookup_dict[model_prefix_to_model_type[attr_name]] = True
break
# Try again after removing the last word in the name
attr_name = "".join(camel_case_split(attr_name)[:-1])
all_models = set(list(pt_models.keys()) + list(tf_models.keys()) + list(flax_models.keys()))
all_models = list(all_models)
all_models.sort()
data = {"model_type": all_models}
data["pytorch"] = [pt_models[t] for t in all_models]
data["tensorflow"] = [tf_models[t] for t in all_models]
data["flax"] = [flax_models[t] for t in all_models]
    # Now let's find the right processing class for each model. In order, we check if there is a Processor, then a
    # Tokenizer, then an ImageProcessor, then a FeatureExtractor.
processors = {}
for t in all_models:
if t in transformers_module.models.auto.processing_auto.PROCESSOR_MAPPING_NAMES:
processors[t] = "AutoProcessor"
elif t in transformers_module.models.auto.tokenization_auto.TOKENIZER_MAPPING_NAMES:
processors[t] = "AutoTokenizer"
elif t in transformers_module.models.auto.image_processing_auto.IMAGE_PROCESSOR_MAPPING_NAMES:
processors[t] = "AutoImageProcessor"
elif t in transformers_module.models.auto.feature_extraction_auto.FEATURE_EXTRACTOR_MAPPING_NAMES:
processors[t] = "AutoFeatureExtractor"
else:
# Default to AutoTokenizer if a model has nothing, for backward compatibility.
processors[t] = "AutoTokenizer"
data["processor"] = [processors[t] for t in all_models]
return pd.DataFrame(data)
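# An illustrative row of the resulting dataframe (the exact values depend on the library content):
#   model_type="bert", pytorch=True, tensorflow=True, flax=True, processor="AutoTokenizer"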
def update_pipeline_and_auto_class_table(table: Dict[str, Tuple[str, str]]) -> Dict[str, Tuple[str, str]]:
"""
    Update the table mapping models to pipelines and auto classes, without removing old keys even if they don't exist anymore.
Args:
table (`Dict[str, Tuple[str, str]]`):
The existing table mapping model names to a tuple containing the pipeline tag and the auto-class name with
which they should be used.
Returns:
`Dict[str, Tuple[str, str]]`: The updated table in the same format.
"""
auto_modules = [
transformers_module.models.auto.modeling_auto,
transformers_module.models.auto.modeling_tf_auto,
transformers_module.models.auto.modeling_flax_auto,
]
for pipeline_tag, model_mapping, auto_class in PIPELINE_TAGS_AND_AUTO_MODELS:
model_mappings = [model_mapping, f"TF_{model_mapping}", f"FLAX_{model_mapping}"]
auto_classes = [auto_class, f"TF_{auto_class}", f"Flax_{auto_class}"]
# Loop through all three frameworks
for module, cls, mapping in zip(auto_modules, auto_classes, model_mappings):
# The type of pipeline may not exist in this framework
if not hasattr(module, mapping):
continue
# First extract all model_names
model_names = []
for name in getattr(module, mapping).values():
if isinstance(name, str):
model_names.append(name)
else:
model_names.extend(list(name))
# Add pipeline tag and auto model class for those models
table.update({model_name: (pipeline_tag, cls) for model_name in model_names})
return table
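# An illustrative entry of the returned table (the exact tuple depends on PIPELINE_TAGS_AND_AUTO_MODELS):
#   {"BertForMaskedLM": ("fill-mask", "AutoModelForMaskedLM")}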
def update_metadata(token: str, commit_sha: str):
"""
Update the metadata for the Transformers repo in `huggingface/transformers-metadata`.
Args:
token (`str`): A valid token giving write access to `huggingface/transformers-metadata`.
commit_sha (`str`): The commit SHA on Transformers corresponding to this update.
"""
frameworks_table = get_frameworks_table()
frameworks_dataset = Dataset.from_pandas(frameworks_table)
resolved_tags_file = hf_hub_download(
"huggingface/transformers-metadata", "pipeline_tags.json", repo_type="dataset", token=token
)
tags_dataset = Dataset.from_json(resolved_tags_file)
table = {
tags_dataset[i]["model_class"]: (tags_dataset[i]["pipeline_tag"], tags_dataset[i]["auto_class"])
for i in range(len(tags_dataset))
}
table = update_pipeline_and_auto_class_table(table)
    # Sort the model classes to avoid nondeterministic ordering creating spurious update commits.
model_classes = sorted(table.keys())
tags_table = pd.DataFrame(
{
"model_class": model_classes,
"pipeline_tag": [table[m][0] for m in model_classes],
"auto_class": [table[m][1] for m in model_classes],
}
)
tags_dataset = Dataset.from_pandas(tags_table)
hub_frameworks_json = hf_hub_download(
repo_id="huggingface/transformers-metadata",
filename="frameworks.json",
repo_type="dataset",
token=token,
)
with open(hub_frameworks_json) as f:
hub_frameworks_json = f.read()
hub_pipeline_tags_json = hf_hub_download(
repo_id="huggingface/transformers-metadata",
filename="pipeline_tags.json",
repo_type="dataset",
token=token,
)
with open(hub_pipeline_tags_json) as f:
hub_pipeline_tags_json = f.read()
with tempfile.TemporaryDirectory() as tmp_dir:
frameworks_dataset.to_json(os.path.join(tmp_dir, "frameworks.json"))
tags_dataset.to_json(os.path.join(tmp_dir, "pipeline_tags.json"))
with open(os.path.join(tmp_dir, "frameworks.json")) as f:
frameworks_json = f.read()
with open(os.path.join(tmp_dir, "pipeline_tags.json")) as f:
pipeline_tags_json = f.read()
frameworks_equal = hub_frameworks_json == frameworks_json
hub_pipeline_tags_equal = hub_pipeline_tags_json == pipeline_tags_json
if frameworks_equal and hub_pipeline_tags_equal:
print("No updates on the Hub, not pushing the metadata files.")
return
if commit_sha is not None:
commit_message = (
f"Update with commit {commit_sha}\n\nSee: "
f"https://github.com/huggingface/transformers/commit/{commit_sha}"
)
else:
commit_message = "Update"
upload_folder(
repo_id="huggingface/transformers-metadata",
folder_path=tmp_dir,
repo_type="dataset",
token=token,
commit_message=commit_message,
)
def check_pipeline_tags():
"""
Check all pipeline tags are properly defined in the `PIPELINE_TAGS_AND_AUTO_MODELS` constant of this script.
"""
in_table = {tag: cls for tag, _, cls in PIPELINE_TAGS_AND_AUTO_MODELS}
pipeline_tasks = transformers_module.pipelines.SUPPORTED_TASKS
missing = []
for key in pipeline_tasks:
if key not in in_table:
model = pipeline_tasks[key]["pt"]
if isinstance(model, (list, tuple)):
model = model[0]
model = model.__name__
if model not in in_table.values():
missing.append(key)
if len(missing) > 0:
msg = ", ".join(missing)
raise ValueError(
"The following pipeline tags are not present in the `PIPELINE_TAGS_AND_AUTO_MODELS` constant inside "
f"`utils/update_metadata.py`: {msg}. Please add them!"
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--token", type=str, help="The token to use to push to the transformers-metadata dataset.")
parser.add_argument("--commit_sha", type=str, help="The sha of the commit going with this update.")
parser.add_argument("--check-only", action="store_true", help="Activate to just check all pipelines are present.")
args = parser.parse_args()
if args.check_only:
check_pipeline_tags()
else:
update_metadata(args.token, args.commit_sha)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/get_github_job_time.py | import argparse
import math
import traceback
import dateutil.parser as date_parser
import requests
def extract_time_from_single_job(job):
"""Extract time info from a single job in a GitHub Actions workflow run"""
job_info = {}
start = job["started_at"]
end = job["completed_at"]
start_datetime = date_parser.parse(start)
end_datetime = date_parser.parse(end)
duration_in_min = round((end_datetime - start_datetime).total_seconds() / 60.0)
job_info["started_at"] = start
job_info["completed_at"] = end
job_info["duration"] = duration_in_min
return job_info
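# Illustrative example (hypothetical timestamps): a job with started_at="2023-01-01T10:00:00Z" and
# completed_at="2023-01-01T10:12:00Z" yields
#   {"started_at": "2023-01-01T10:00:00Z", "completed_at": "2023-01-01T10:12:00Z", "duration": 12}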
def get_job_time(workflow_run_id, token=None):
"""Extract time info for all jobs in a GitHub Actions workflow run"""
headers = None
if token is not None:
headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
result = requests.get(url, headers=headers).json()
job_time = {}
try:
job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})
pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)
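        # Illustrative arithmetic (hypothetical count): with total_count=250, the first call already returned 100
        # jobs, so math.ceil((250 - 100) / 100) == 2 extra pages are fetched below as page=2 and page=3.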
for i in range(pages_to_iterate_over):
result = requests.get(url + f"&page={i + 2}", headers=headers).json()
job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})
return job_time
except Exception:
print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")
return {}
if __name__ == "__main__":
r"""
Example:
python get_github_job_time.py --workflow_run_id 2945609517
"""
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.")
args = parser.parse_args()
job_time = get_job_time(args.workflow_run_id)
job_time = dict(sorted(job_time.items(), key=lambda item: item[1]["duration"], reverse=True))
for k, v in job_time.items():
print(f'{k}: {v["duration"]}')
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_tf_ops.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import os
from tensorflow.core.protobuf.saved_model_pb2 import SavedModel
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_tf_ops.py
REPO_PATH = "."
# Internal TensorFlow ops that can be safely ignored (mostly specific to a saved model)
INTERNAL_OPS = [
"Assert",
"AssignVariableOp",
"EmptyTensorList",
"MergeV2Checkpoints",
"ReadVariableOp",
"ResourceGather",
"RestoreV2",
"SaveV2",
"ShardedFilename",
"StatefulPartitionedCall",
"StaticRegexFullMatch",
"VarHandleOp",
]
def onnx_compliancy(saved_model_path, strict, opset):
saved_model = SavedModel()
onnx_ops = []
with open(os.path.join(REPO_PATH, "utils", "tf_ops", "onnx.json")) as f:
onnx_opsets = json.load(f)["opsets"]
for i in range(1, opset + 1):
onnx_ops.extend(onnx_opsets[str(i)])
with open(saved_model_path, "rb") as f:
saved_model.ParseFromString(f.read())
model_op_names = set()
# Iterate over every metagraph in case there is more than one (a saved model can contain multiple graphs)
for meta_graph in saved_model.meta_graphs:
# Add operations in the graph definition
model_op_names.update(node.op for node in meta_graph.graph_def.node)
# Go through the functions in the graph definition
for func in meta_graph.graph_def.library.function:
# Add operations in each function
model_op_names.update(node.op for node in func.node_def)
    # Convert to a sorted list of op names.
model_op_names = sorted(model_op_names)
incompatible_ops = []
for op in model_op_names:
if op not in onnx_ops and op not in INTERNAL_OPS:
incompatible_ops.append(op)
if strict and len(incompatible_ops) > 0:
raise Exception(f"Found the following incompatible ops for the opset {opset}:\n" + incompatible_ops)
elif len(incompatible_ops) > 0:
print(f"Found the following incompatible ops for the opset {opset}:")
print(*incompatible_ops, sep="\n")
else:
print(f"The saved model {saved_model_path} can properly be converted with ONNX.")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--saved_model_path", help="Path of the saved model to check (the .pb file).")
parser.add_argument(
"--opset", default=12, type=int, help="The ONNX opset against which the model has to be tested."
)
parser.add_argument(
"--framework", choices=["onnx"], default="onnx", help="Frameworks against which to test the saved model."
)
parser.add_argument(
"--strict", action="store_true", help="Whether make the checking strict (raise errors) or not (raise warnings)"
)
args = parser.parse_args()
if args.framework == "onnx":
onnx_compliancy(args.saved_model_path, args.strict, args.opset)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/check_repo.py | # coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that performs several consistency checks on the repo. This includes:
- checking all models are properly defined in the __init__ of models/
- checking all models are in the main __init__
- checking all models are properly tested
- checking all object in the main __init__ are documented
- checking all models are in at least one auto class
- checking all the auto mapping are properly defined (no typos, importable)
- checking the list of deprecated models is up to date
Use from the root of the repo with (as used in `make repo-consistency`):
```bash
python utils/check_repo.py
```
It has no auto-fix mode.
"""
import inspect
import os
import re
import sys
import types
import warnings
from collections import OrderedDict
from difflib import get_close_matches
from pathlib import Path
from typing import List, Tuple
from transformers import is_flax_available, is_tf_available, is_torch_available
from transformers.models.auto import get_values
from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES
from transformers.models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING_NAMES
from transformers.models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING_NAMES
from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES
from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES
from transformers.utils import ENV_VARS_TRUE_VALUES, direct_transformers_import
# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_repo.py
PATH_TO_TRANSFORMERS = "src/transformers"
PATH_TO_TESTS = "tests"
PATH_TO_DOC = "docs/source/en"
# Update this list with models that are supposed to be private.
PRIVATE_MODELS = [
"AltRobertaModel",
"DPRSpanPredictor",
"UdopStack",
"LongT5Stack",
"RealmBertModel",
"T5Stack",
"MT5Stack",
"UMT5Stack",
"Pop2PianoStack",
"SwitchTransformersStack",
"TFDPRSpanPredictor",
"MaskFormerSwinModel",
"MaskFormerSwinPreTrainedModel",
"BridgeTowerTextModel",
"BridgeTowerVisionModel",
"Kosmos2TextModel",
"Kosmos2TextForCausalLM",
"Kosmos2VisionModel",
"SeamlessM4Tv2TextToUnitModel",
"SeamlessM4Tv2CodeHifiGan",
"SeamlessM4Tv2TextToUnitForConditionalGeneration",
]
# Update this list for models that are not tested with a comment explaining the reason it should not be.
# Being in this list is an exception and should **not** be the rule.
IGNORE_NON_TESTED = PRIVATE_MODELS.copy() + [
# models to ignore for not tested
"RecurrentGemmaModel", # Building part of bigger (tested) model.
"FuyuForCausalLM", # Not tested fort now
"InstructBlipQFormerModel", # Building part of bigger (tested) model.
"UMT5EncoderModel", # Building part of bigger (tested) model.
"Blip2QFormerModel", # Building part of bigger (tested) model.
"ErnieMForInformationExtraction",
"FastSpeech2ConformerHifiGan", # Already tested by SpeechT5HifiGan (# Copied from)
"FastSpeech2ConformerWithHifiGan", # Built with two smaller (tested) models.
"GraphormerDecoderHead", # Building part of bigger (tested) model.
"JukeboxVQVAE", # Building part of bigger (tested) model.
"JukeboxPrior", # Building part of bigger (tested) model.
"DecisionTransformerGPT2Model", # Building part of bigger (tested) model.
"SegformerDecodeHead", # Building part of bigger (tested) model.
"MgpstrModel", # Building part of bigger (tested) model.
"BertLMHeadModel", # Needs to be setup as decoder.
"MegatronBertLMHeadModel", # Building part of bigger (tested) model.
"RealmBertModel", # Building part of bigger (tested) model.
"RealmReader", # Not regular model.
"RealmScorer", # Not regular model.
"RealmForOpenQA", # Not regular model.
"ReformerForMaskedLM", # Needs to be setup as decoder.
"TFElectraMainLayer", # Building part of bigger (tested) model (should it be a TFPreTrainedModel ?)
"TFRobertaForMultipleChoice", # TODO: fix
"TFRobertaPreLayerNormForMultipleChoice", # TODO: fix
"SeparableConv1D", # Building part of bigger (tested) model.
"FlaxBartForCausalLM", # Building part of bigger (tested) model.
"FlaxBertForCausalLM", # Building part of bigger (tested) model. Tested implicitly through FlaxRobertaForCausalLM.
"OPTDecoderWrapper",
"TFSegformerDecodeHead", # Not a regular model.
"AltRobertaModel", # Building part of bigger (tested) model.
"BlipTextLMHeadModel", # No need to test it as it is tested by BlipTextVision models
"TFBlipTextLMHeadModel", # No need to test it as it is tested by BlipTextVision models
"BridgeTowerTextModel", # No need to test it as it is tested by BridgeTowerModel model.
"BridgeTowerVisionModel", # No need to test it as it is tested by BridgeTowerModel model.
"BarkCausalModel", # Building part of bigger (tested) model.
"BarkModel", # Does not have a forward signature - generation tested with integration tests.
"SeamlessM4TTextToUnitModel", # Building part of bigger (tested) model.
"SeamlessM4TCodeHifiGan", # Building part of bigger (tested) model.
"SeamlessM4TTextToUnitForConditionalGeneration", # Building part of bigger (tested) model.
]
# Update this list with test files that don't have a tester with a `all_model_classes` variable and which don't
# trigger the common tests.
TEST_FILES_WITH_NO_COMMON_TESTS = [
"models/decision_transformer/test_modeling_decision_transformer.py",
"models/camembert/test_modeling_camembert.py",
"models/mt5/test_modeling_flax_mt5.py",
"models/mbart/test_modeling_mbart.py",
"models/mt5/test_modeling_mt5.py",
"models/pegasus/test_modeling_pegasus.py",
"models/camembert/test_modeling_tf_camembert.py",
"models/mt5/test_modeling_tf_mt5.py",
"models/xlm_roberta/test_modeling_tf_xlm_roberta.py",
"models/xlm_roberta/test_modeling_flax_xlm_roberta.py",
"models/xlm_prophetnet/test_modeling_xlm_prophetnet.py",
"models/xlm_roberta/test_modeling_xlm_roberta.py",
"models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py",
"models/vision_text_dual_encoder/test_modeling_tf_vision_text_dual_encoder.py",
"models/vision_text_dual_encoder/test_modeling_flax_vision_text_dual_encoder.py",
"models/decision_transformer/test_modeling_decision_transformer.py",
"models/bark/test_modeling_bark.py",
]
# Update this list for models that are not in any of the auto MODEL_XXX_MAPPING. Being in this list is an exception and
# should **not** be the rule.
IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
# models to ignore for model xxx mapping
"AlignTextModel",
"AlignVisionModel",
"ClapTextModel",
"ClapTextModelWithProjection",
"ClapAudioModel",
"ClapAudioModelWithProjection",
"Blip2ForConditionalGeneration",
"Blip2QFormerModel",
"Blip2VisionModel",
"ErnieMForInformationExtraction",
"FastSpeech2ConformerHifiGan",
"FastSpeech2ConformerWithHifiGan",
"GitVisionModel",
"GraphormerModel",
"GraphormerForGraphClassification",
"BlipForConditionalGeneration",
"BlipForImageTextRetrieval",
"BlipForQuestionAnswering",
"BlipVisionModel",
"BlipTextLMHeadModel",
"BlipTextModel",
"BrosSpadeEEForTokenClassification",
"BrosSpadeELForTokenClassification",
"TFBlipForConditionalGeneration",
"TFBlipForImageTextRetrieval",
"TFBlipForQuestionAnswering",
"TFBlipVisionModel",
"TFBlipTextLMHeadModel",
"TFBlipTextModel",
"Swin2SRForImageSuperResolution",
"BridgeTowerForImageAndTextRetrieval",
"BridgeTowerForMaskedLM",
"BridgeTowerForContrastiveLearning",
"CLIPSegForImageSegmentation",
"CLIPSegVisionModel",
"CLIPSegTextModel",
"EsmForProteinFolding",
"GPTSanJapaneseModel",
"TimeSeriesTransformerForPrediction",
"InformerForPrediction",
"AutoformerForPrediction",
"PatchTSTForPretraining",
"PatchTSTForPrediction",
"JukeboxVQVAE",
"JukeboxPrior",
"SamModel",
"DPTForDepthEstimation",
"DecisionTransformerGPT2Model",
"GLPNForDepthEstimation",
"ViltForImagesAndTextClassification",
"ViltForImageAndTextRetrieval",
"ViltForTokenClassification",
"ViltForMaskedLM",
"PerceiverForMultimodalAutoencoding",
"PerceiverForOpticalFlow",
"SegformerDecodeHead",
"TFSegformerDecodeHead",
"FlaxBeitForMaskedImageModeling",
"BeitForMaskedImageModeling",
"ChineseCLIPTextModel",
"ChineseCLIPVisionModel",
"CLIPTextModel",
"CLIPTextModelWithProjection",
"CLIPVisionModelWithProjection",
"ClvpForCausalLM",
"ClvpModel",
"GroupViTTextModel",
"GroupViTVisionModel",
"TFCLIPTextModel",
"TFCLIPVisionModel",
"TFGroupViTTextModel",
"TFGroupViTVisionModel",
"FlaxCLIPTextModel",
"FlaxCLIPTextModelWithProjection",
"FlaxCLIPVisionModel",
"FlaxWav2Vec2ForCTC",
"DetrForSegmentation",
"Pix2StructVisionModel",
"Pix2StructTextModel",
"Pix2StructForConditionalGeneration",
"ConditionalDetrForSegmentation",
"DPRReader",
"FlaubertForQuestionAnswering",
"FlavaImageCodebook",
"FlavaTextModel",
"FlavaImageModel",
"FlavaMultimodalModel",
"GPT2DoubleHeadsModel",
"GPTSw3DoubleHeadsModel",
"InstructBlipVisionModel",
"InstructBlipQFormerModel",
"LayoutLMForQuestionAnswering",
"LukeForMaskedLM",
"LukeForEntityClassification",
"LukeForEntityPairClassification",
"LukeForEntitySpanClassification",
"MgpstrModel",
"OpenAIGPTDoubleHeadsModel",
"OwlViTTextModel",
"OwlViTVisionModel",
"Owlv2TextModel",
"Owlv2VisionModel",
"OwlViTForObjectDetection",
"PatchTSMixerForPrediction",
"PatchTSMixerForPretraining",
"RagModel",
"RagSequenceForGeneration",
"RagTokenForGeneration",
"RealmEmbedder",
"RealmForOpenQA",
"RealmScorer",
"RealmReader",
"TFDPRReader",
"TFGPT2DoubleHeadsModel",
"TFLayoutLMForQuestionAnswering",
"TFOpenAIGPTDoubleHeadsModel",
"TFRagModel",
"TFRagSequenceForGeneration",
"TFRagTokenForGeneration",
"Wav2Vec2ForCTC",
"HubertForCTC",
"SEWForCTC",
"SEWDForCTC",
"XLMForQuestionAnswering",
"XLNetForQuestionAnswering",
"SeparableConv1D",
"VisualBertForRegionToPhraseAlignment",
"VisualBertForVisualReasoning",
"VisualBertForQuestionAnswering",
"VisualBertForMultipleChoice",
"TFWav2Vec2ForCTC",
"TFHubertForCTC",
"XCLIPVisionModel",
"XCLIPTextModel",
"AltCLIPTextModel",
"AltCLIPVisionModel",
"AltRobertaModel",
"TvltForAudioVisualClassification",
"BarkCausalModel",
"BarkCoarseModel",
"BarkFineModel",
"BarkSemanticModel",
"MusicgenMelodyModel",
"MusicgenModel",
"MusicgenForConditionalGeneration",
"SpeechT5ForSpeechToSpeech",
"SpeechT5ForTextToSpeech",
"SpeechT5HifiGan",
"VitMatteForImageMatting",
"SeamlessM4TTextToUnitModel",
"SeamlessM4TTextToUnitForConditionalGeneration",
"SeamlessM4TCodeHifiGan",
"SeamlessM4TForSpeechToSpeech", # no auto class for speech-to-speech
"TvpForVideoGrounding",
"UdopForConditionalGeneration",
"SeamlessM4Tv2NARTextToUnitModel",
"SeamlessM4Tv2NARTextToUnitForConditionalGeneration",
"SeamlessM4Tv2CodeHifiGan",
"SeamlessM4Tv2ForSpeechToSpeech", # no auto class for speech-to-speech
"SegGptForImageSegmentation",
"SiglipVisionModel",
"SiglipTextModel",
]
# DO NOT edit this list!
# (The corresponding pytorch objects should never have been in the main `__init__`, but it's too late to remove)
OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK = [
"FlaxBertLayer",
"FlaxBigBirdLayer",
"FlaxRoFormerLayer",
"TFBertLayer",
"TFLxmertEncoder",
"TFLxmertXLayer",
"TFMPNetLayer",
"TFMobileBertLayer",
"TFSegformerLayer",
"TFViTMAELayer",
]
# Update this list for models that have multiple model types for the same model doc.
MODEL_TYPE_TO_DOC_MAPPING = OrderedDict(
[
("data2vec-text", "data2vec"),
("data2vec-audio", "data2vec"),
("data2vec-vision", "data2vec"),
("donut-swin", "donut"),
]
)
# This is to make sure the transformers module imported is the one in the repo.
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)
def check_missing_backends():
"""
Checks if all backends are installed (otherwise the check of this script is incomplete). Will error in the CI if
that's not the case but only throw a warning for users running this.
"""
missing_backends = []
if not is_torch_available():
missing_backends.append("PyTorch")
if not is_tf_available():
missing_backends.append("TensorFlow")
if not is_flax_available():
missing_backends.append("Flax")
if len(missing_backends) > 0:
missing = ", ".join(missing_backends)
if os.getenv("TRANSFORMERS_IS_CI", "").upper() in ENV_VARS_TRUE_VALUES:
raise Exception(
"Full repo consistency checks require all backends to be installed (with `pip install -e '.[dev]'` in the "
f"Transformers repo, the following are missing: {missing}."
)
else:
warnings.warn(
"Full repo consistency checks require all backends to be installed (with `pip install -e '.[dev]'` in the "
f"Transformers repo, the following are missing: {missing}. While it's probably fine as long as you "
"didn't make any change in one of those backends modeling files, you should probably execute the "
"command above to be on the safe side."
)
def check_model_list():
"""
    Checks that the models listed as subfolders of `models` match the models available in `transformers.models`.
"""
# Get the models from the directory structure of `src/transformers/models/`
models_dir = os.path.join(PATH_TO_TRANSFORMERS, "models")
_models = []
for model in os.listdir(models_dir):
if model == "deprecated":
continue
model_dir = os.path.join(models_dir, model)
if os.path.isdir(model_dir) and "__init__.py" in os.listdir(model_dir):
_models.append(model)
# Get the models in the submodule `transformers.models`
models = [model for model in dir(transformers.models) if not model.startswith("__")]
missing_models = sorted(set(_models).difference(models))
if missing_models:
raise Exception(
f"The following models should be included in {models_dir}/__init__.py: {','.join(missing_models)}."
)
# If some modeling modules should be ignored for all checks, they should be added in the nested list
# _ignore_modules of this function.
def get_model_modules() -> List[str]:
"""Get all the model modules inside the transformers library (except deprecated models)."""
_ignore_modules = [
"modeling_auto",
"modeling_encoder_decoder",
"modeling_marian",
"modeling_mmbt",
"modeling_outputs",
"modeling_retribert",
"modeling_utils",
"modeling_flax_auto",
"modeling_flax_encoder_decoder",
"modeling_flax_utils",
"modeling_speech_encoder_decoder",
"modeling_flax_speech_encoder_decoder",
"modeling_flax_vision_encoder_decoder",
"modeling_timm_backbone",
"modeling_tf_auto",
"modeling_tf_encoder_decoder",
"modeling_tf_outputs",
"modeling_tf_pytorch_utils",
"modeling_tf_utils",
"modeling_tf_vision_encoder_decoder",
"modeling_vision_encoder_decoder",
]
modules = []
for model in dir(transformers.models):
# There are some magic dunder attributes in the dir, we ignore them
if model == "deprecated" or model.startswith("__"):
continue
model_module = getattr(transformers.models, model)
for submodule in dir(model_module):
if submodule.startswith("modeling") and submodule not in _ignore_modules:
modeling_module = getattr(model_module, submodule)
if inspect.ismodule(modeling_module):
modules.append(modeling_module)
return modules
def get_models(module: types.ModuleType, include_pretrained: bool = False) -> List[Tuple[str, type]]:
"""
Get the objects in a module that are models.
Args:
module (`types.ModuleType`):
The module from which we are extracting models.
include_pretrained (`bool`, *optional*, defaults to `False`):
Whether or not to include the `PreTrainedModel` subclass (like `BertPreTrainedModel`) or not.
Returns:
List[Tuple[str, type]]: List of models as tuples (class name, actual class).
"""
models = []
model_classes = (transformers.PreTrainedModel, transformers.TFPreTrainedModel, transformers.FlaxPreTrainedModel)
for attr_name in dir(module):
if not include_pretrained and ("Pretrained" in attr_name or "PreTrained" in attr_name):
continue
attr = getattr(module, attr_name)
if isinstance(attr, type) and issubclass(attr, model_classes) and attr.__module__ == module.__name__:
models.append((attr_name, attr))
return models
def is_building_block(model: str) -> bool:
"""
Returns `True` if a model is a building block part of a bigger model.
"""
if model.endswith("Wrapper"):
return True
if model.endswith("Encoder"):
return True
if model.endswith("Decoder"):
return True
if model.endswith("Prenet"):
return True
def is_a_private_model(model: str) -> bool:
"""Returns `True` if the model should not be in the main init."""
if model in PRIVATE_MODELS:
return True
return is_building_block(model)
def check_models_are_in_init():
"""Checks all models defined in the library are in the main init."""
models_not_in_init = []
dir_transformers = dir(transformers)
for module in get_model_modules():
models_not_in_init += [
model[0] for model in get_models(module, include_pretrained=True) if model[0] not in dir_transformers
]
# Remove private models
models_not_in_init = [model for model in models_not_in_init if not is_a_private_model(model)]
if len(models_not_in_init) > 0:
raise Exception(f"The following models should be in the main init: {','.join(models_not_in_init)}.")
# If some test_modeling files should be ignored when checking models are all tested, they should be added in the
# nested list _ignore_files of this function.
def get_model_test_files() -> List[str]:
"""
Get the model test files.
Returns:
`List[str]`: The list of test files. The returned files will NOT contain the `tests` (i.e. `PATH_TO_TESTS`
defined in this script). They will be considered as paths relative to `tests`. A caller has to use
`os.path.join(PATH_TO_TESTS, ...)` to access the files.
"""
_ignore_files = [
"test_modeling_common",
"test_modeling_encoder_decoder",
"test_modeling_flax_encoder_decoder",
"test_modeling_flax_speech_encoder_decoder",
"test_modeling_marian",
"test_modeling_tf_common",
"test_modeling_tf_encoder_decoder",
]
test_files = []
model_test_root = os.path.join(PATH_TO_TESTS, "models")
model_test_dirs = []
for x in os.listdir(model_test_root):
x = os.path.join(model_test_root, x)
if os.path.isdir(x):
model_test_dirs.append(x)
for target_dir in [PATH_TO_TESTS] + model_test_dirs:
for file_or_dir in os.listdir(target_dir):
path = os.path.join(target_dir, file_or_dir)
if os.path.isfile(path):
filename = os.path.split(path)[-1]
if "test_modeling" in filename and os.path.splitext(filename)[0] not in _ignore_files:
file = os.path.join(*path.split(os.sep)[1:])
test_files.append(file)
return test_files
# This is a bit hacky but I didn't find a way to import the test_file as a module and read inside the tester class
# for the all_model_classes variable.
def find_tested_models(test_file: str) -> List[str]:
"""
Parse the content of test_file to detect what's in `all_model_classes`. This detects the models that inherit from
the common test class.
Args:
test_file (`str`): The path to the test file to check
Returns:
`List[str]`: The list of models tested in that file.
"""
with open(os.path.join(PATH_TO_TESTS, test_file), "r", encoding="utf-8", newline="\n") as f:
content = f.read()
all_models = re.findall(r"all_model_classes\s+=\s+\(\s*\(([^\)]*)\)", content)
# Check with one less parenthesis as well
all_models += re.findall(r"all_model_classes\s+=\s+\(([^\)]*)\)", content)
if len(all_models) > 0:
model_tested = []
for entry in all_models:
for line in entry.split(","):
name = line.strip()
if len(name) > 0:
model_tested.append(name)
return model_tested
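# Illustrative sketch, not part of the original checks: a tiny demo of how one of the regular expressions used in
# `find_tested_models` picks up `all_model_classes` from the source of a test file. The sample content and the
# helper name `_demo_find_tested_models` are hypothetical and only here for documentation.
def _demo_find_tested_models() -> List[str]:
    sample = (
        "class BertModelTest(ModelTesterMixin, unittest.TestCase):\n"
        "    all_model_classes = (BertModel, BertForMaskedLM) if is_torch_available() else ()\n"
    )
    # The single-parenthesis pattern (the fallback used above) captures "BertModel, BertForMaskedLM".
    found = re.findall(r"all_model_classes\s+=\s+\(([^\)]*)\)", sample)
    return [name.strip() for entry in found for name in entry.split(",") if name.strip()]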
def should_be_tested(model_name: str) -> bool:
"""
Whether or not a model should be tested.
"""
if model_name in IGNORE_NON_TESTED:
return False
return not is_building_block(model_name)
def check_models_are_tested(module: types.ModuleType, test_file: str) -> List[str]:
"""Check models defined in a module are all tested in a given file.
Args:
module (`types.ModuleType`): The module in which we get the models.
test_file (`str`): The path to the file where the module is tested.
Returns:
`List[str]`: The list of error messages corresponding to models not tested.
"""
# XxxPreTrainedModel are not tested
defined_models = get_models(module)
tested_models = find_tested_models(test_file)
if tested_models is None:
if test_file.replace(os.path.sep, "/") in TEST_FILES_WITH_NO_COMMON_TESTS:
return
return [
f"{test_file} should define `all_model_classes` to apply common tests to the models it tests. "
+ "If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file "
+ "`utils/check_repo.py`."
]
failures = []
for model_name, _ in defined_models:
if model_name not in tested_models and should_be_tested(model_name):
failures.append(
f"{model_name} is defined in {module.__name__} but is not tested in "
+ f"{os.path.join(PATH_TO_TESTS, test_file)}. Add it to the all_model_classes in that file."
+ "If common tests should not applied to that model, add its name to `IGNORE_NON_TESTED`"
+ "in the file `utils/check_repo.py`."
)
return failures
def check_all_models_are_tested():
"""Check all models are properly tested."""
modules = get_model_modules()
test_files = get_model_test_files()
failures = []
for module in modules:
# Matches a module to its test file.
test_file = [file for file in test_files if f"test_{module.__name__.split('.')[-1]}.py" in file]
if len(test_file) == 0:
failures.append(f"{module.__name__} does not have its corresponding test file {test_file}.")
elif len(test_file) > 1:
failures.append(f"{module.__name__} has several test files: {test_file}.")
else:
test_file = test_file[0]
new_failures = check_models_are_tested(module, test_file)
if new_failures is not None:
failures += new_failures
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
def get_all_auto_configured_models() -> List[str]:
"""Return the list of all models in at least one auto class."""
result = set() # To avoid duplicates we concatenate all model classes in a set.
if is_torch_available():
for attr_name in dir(transformers.models.auto.modeling_auto):
if attr_name.startswith("MODEL_") and attr_name.endswith("MAPPING_NAMES"):
result = result | set(get_values(getattr(transformers.models.auto.modeling_auto, attr_name)))
if is_tf_available():
for attr_name in dir(transformers.models.auto.modeling_tf_auto):
if attr_name.startswith("TF_MODEL_") and attr_name.endswith("MAPPING_NAMES"):
result = result | set(get_values(getattr(transformers.models.auto.modeling_tf_auto, attr_name)))
if is_flax_available():
for attr_name in dir(transformers.models.auto.modeling_flax_auto):
if attr_name.startswith("FLAX_MODEL_") and attr_name.endswith("MAPPING_NAMES"):
result = result | set(get_values(getattr(transformers.models.auto.modeling_flax_auto, attr_name)))
return list(result)
def ignore_unautoclassed(model_name: str) -> bool:
"""Rules to determine if a model should be in an auto class."""
# Special white list
if model_name in IGNORE_NON_AUTO_CONFIGURED:
return True
# Encoder and Decoder should be ignored
if "Encoder" in model_name or "Decoder" in model_name:
return True
return False
def check_models_are_auto_configured(module: types.ModuleType, all_auto_models: List[str]) -> List[str]:
"""
Check models defined in module are each in an auto class.
Args:
module (`types.ModuleType`):
The module in which we get the models.
all_auto_models (`List[str]`):
The list of all models in an auto class (as obtained with `get_all_auto_configured_models()`).
Returns:
        `List[str]`: The list of error messages corresponding to models not in any auto class.
"""
defined_models = get_models(module)
failures = []
for model_name, _ in defined_models:
if model_name not in all_auto_models and not ignore_unautoclassed(model_name):
failures.append(
f"{model_name} is defined in {module.__name__} but is not present in any of the auto mapping. "
"If that is intended behavior, add its name to `IGNORE_NON_AUTO_CONFIGURED` in the file "
"`utils/check_repo.py`."
)
return failures
def check_all_models_are_auto_configured():
"""Check all models are each in an auto class."""
# This is where we need to check we have all backends or the check is incomplete.
check_missing_backends()
modules = get_model_modules()
all_auto_models = get_all_auto_configured_models()
failures = []
for module in modules:
new_failures = check_models_are_auto_configured(module, all_auto_models)
if new_failures is not None:
failures += new_failures
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
def check_all_auto_object_names_being_defined():
"""Check all names defined in auto (name) mappings exist in the library."""
# This is where we need to check we have all backends or the check is incomplete.
check_missing_backends()
failures = []
mappings_to_check = {
"TOKENIZER_MAPPING_NAMES": TOKENIZER_MAPPING_NAMES,
"IMAGE_PROCESSOR_MAPPING_NAMES": IMAGE_PROCESSOR_MAPPING_NAMES,
"FEATURE_EXTRACTOR_MAPPING_NAMES": FEATURE_EXTRACTOR_MAPPING_NAMES,
"PROCESSOR_MAPPING_NAMES": PROCESSOR_MAPPING_NAMES,
}
# Each auto modeling files contains multiple mappings. Let's get them in a dynamic way.
for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]:
module = getattr(transformers.models.auto, module_name, None)
if module is None:
continue
# all mappings in a single auto modeling file
mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")]
mappings_to_check.update({name: getattr(module, name) for name in mapping_names})
for name, mapping in mappings_to_check.items():
for _, class_names in mapping.items():
if not isinstance(class_names, tuple):
class_names = (class_names,)
for class_name in class_names:
if class_name is None:
continue
# dummy object is accepted
if not hasattr(transformers, class_name):
# If the class name is in a model name mapping, let's not check if there is a definition in any modeling
# module, if it's a private model defined in this file.
if name.endswith("MODEL_MAPPING_NAMES") and is_a_private_model(class_name):
continue
if name.endswith("MODEL_FOR_IMAGE_MAPPING_NAMES") and is_a_private_model(class_name):
continue
failures.append(
f"`{class_name}` appears in the mapping `{name}` but it is not defined in the library."
)
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
def check_all_auto_mapping_names_in_config_mapping_names():
"""Check all keys defined in auto mappings (mappings of names) appear in `CONFIG_MAPPING_NAMES`."""
# This is where we need to check we have all backends or the check is incomplete.
check_missing_backends()
failures = []
    # `TOKENIZER_PROCESSOR_MAPPING_NAMES` and `AutoTokenizer` are special, and don't need to follow the rule.
mappings_to_check = {
"IMAGE_PROCESSOR_MAPPING_NAMES": IMAGE_PROCESSOR_MAPPING_NAMES,
"FEATURE_EXTRACTOR_MAPPING_NAMES": FEATURE_EXTRACTOR_MAPPING_NAMES,
"PROCESSOR_MAPPING_NAMES": PROCESSOR_MAPPING_NAMES,
}
# Each auto modeling files contains multiple mappings. Let's get them in a dynamic way.
for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]:
module = getattr(transformers.models.auto, module_name, None)
if module is None:
continue
# all mappings in a single auto modeling file
mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")]
mappings_to_check.update({name: getattr(module, name) for name in mapping_names})
for name, mapping in mappings_to_check.items():
for model_type in mapping:
if model_type not in CONFIG_MAPPING_NAMES:
failures.append(
f"`{model_type}` appears in the mapping `{name}` but it is not defined in the keys of "
"`CONFIG_MAPPING_NAMES`."
)
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
def check_all_auto_mappings_importable():
"""Check all auto mappings can be imported."""
# This is where we need to check we have all backends or the check is incomplete.
check_missing_backends()
failures = []
mappings_to_check = {}
# Each auto modeling files contains multiple mappings. Let's get them in a dynamic way.
for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]:
module = getattr(transformers.models.auto, module_name, None)
if module is None:
continue
# all mappings in a single auto modeling file
mapping_names = [x for x in dir(module) if x.endswith("_MAPPING_NAMES")]
mappings_to_check.update({name: getattr(module, name) for name in mapping_names})
for name in mappings_to_check:
name = name.replace("_MAPPING_NAMES", "_MAPPING")
if not hasattr(transformers, name):
failures.append(f"`{name}`")
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
def check_objects_being_equally_in_main_init():
"""
    Check if a (TensorFlow or Flax) object is in the main __init__ if and only if its counterpart in PyTorch is.
"""
attrs = dir(transformers)
failures = []
for attr in attrs:
obj = getattr(transformers, attr)
if not hasattr(obj, "__module__") or "models.deprecated" in obj.__module__:
continue
module_path = obj.__module__
module_name = module_path.split(".")[-1]
module_dir = ".".join(module_path.split(".")[:-1])
if (
module_name.startswith("modeling_")
and not module_name.startswith("modeling_tf_")
and not module_name.startswith("modeling_flax_")
):
parent_module = sys.modules[module_dir]
frameworks = []
if is_tf_available():
frameworks.append("TF")
if is_flax_available():
frameworks.append("Flax")
for framework in frameworks:
other_module_path = module_path.replace("modeling_", f"modeling_{framework.lower()}_")
if os.path.isfile("src/" + other_module_path.replace(".", "/") + ".py"):
other_module_name = module_name.replace("modeling_", f"modeling_{framework.lower()}_")
other_module = getattr(parent_module, other_module_name)
if hasattr(other_module, f"{framework}{attr}"):
if not hasattr(transformers, f"{framework}{attr}"):
if f"{framework}{attr}" not in OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK:
failures.append(f"{framework}{attr}")
if hasattr(other_module, f"{framework}_{attr}"):
if not hasattr(transformers, f"{framework}_{attr}"):
if f"{framework}_{attr}" not in OBJECT_TO_SKIP_IN_MAIN_INIT_CHECK:
failures.append(f"{framework}_{attr}")
if len(failures) > 0:
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
_re_decorator = re.compile(r"^\s*@(\S+)\s+$")
def check_decorator_order(filename: str) -> List[int]:
"""
Check that in a given test file, the slow decorator is always last.
Args:
filename (`str`): The path to a test file to check.
Returns:
`List[int]`: The list of failures as a list of indices where there are problems.
"""
with open(filename, "r", encoding="utf-8", newline="\n") as f:
lines = f.readlines()
decorator_before = None
errors = []
for i, line in enumerate(lines):
search = _re_decorator.search(line)
if search is not None:
decorator_name = search.groups()[0]
if decorator_before is not None and decorator_name.startswith("parameterized"):
errors.append(i)
decorator_before = decorator_name
elif decorator_before is not None:
decorator_before = None
return errors
def check_all_decorator_order():
"""Check that in all test files, the slow decorator is always last."""
errors = []
for fname in os.listdir(PATH_TO_TESTS):
if fname.endswith(".py"):
filename = os.path.join(PATH_TO_TESTS, fname)
new_errors = check_decorator_order(filename)
errors += [f"- {filename}, line {i}" for i in new_errors]
if len(errors) > 0:
msg = "\n".join(errors)
raise ValueError(
"The parameterized decorator (and its variants) should always be first, but this is not the case in the"
f" following files:\n{msg}"
)
def find_all_documented_objects() -> List[str]:
"""
Parse the content of all doc files to detect which classes and functions it documents.
Returns:
`List[str]`: The list of all object names being documented.
"""
documented_obj = []
for doc_file in Path(PATH_TO_DOC).glob("**/*.rst"):
with open(doc_file, "r", encoding="utf-8", newline="\n") as f:
content = f.read()
raw_doc_objs = re.findall(r"(?:autoclass|autofunction):: transformers.(\S+)\s+", content)
documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs]
for doc_file in Path(PATH_TO_DOC).glob("**/*.md"):
with open(doc_file, "r", encoding="utf-8", newline="\n") as f:
content = f.read()
raw_doc_objs = re.findall(r"\[\[autodoc\]\]\s+(\S+)\s+", content)
documented_obj += [obj.split(".")[-1] for obj in raw_doc_objs]
return documented_obj
# One good reason for not being documented is to be deprecated. Put in this list deprecated objects.
DEPRECATED_OBJECTS = [
"AutoModelWithLMHead",
"BartPretrainedModel",
"DataCollator",
"DataCollatorForSOP",
"GlueDataset",
"GlueDataTrainingArguments",
"LineByLineTextDataset",
"LineByLineWithRefDataset",
"LineByLineWithSOPTextDataset",
"NerPipeline",
"PretrainedBartModel",
"PretrainedFSMTModel",
"SingleSentenceClassificationProcessor",
"SquadDataTrainingArguments",
"SquadDataset",
"SquadExample",
"SquadFeatures",
"SquadV1Processor",
"SquadV2Processor",
"TFAutoModelWithLMHead",
"TFBartPretrainedModel",
"TextDataset",
"TextDatasetForNextSentencePrediction",
"Wav2Vec2ForMaskedLM",
"Wav2Vec2Tokenizer",
"glue_compute_metrics",
"glue_convert_examples_to_features",
"glue_output_modes",
"glue_processors",
"glue_tasks_num_labels",
"squad_convert_examples_to_features",
"xnli_compute_metrics",
"xnli_output_modes",
"xnli_processors",
"xnli_tasks_num_labels",
"TFTrainingArguments",
]
# Exceptionally, some objects should not be documented after all rules passed.
# ONLY PUT SOMETHING IN THIS LIST AS A LAST RESORT!
UNDOCUMENTED_OBJECTS = [
"AddedToken", # This is a tokenizers class.
"BasicTokenizer", # Internal, should never have been in the main init.
"CharacterTokenizer", # Internal, should never have been in the main init.
"DPRPretrainedReader", # Like an Encoder.
"DummyObject", # Just picked by mistake sometimes.
"MecabTokenizer", # Internal, should never have been in the main init.
"ModelCard", # Internal type.
"SqueezeBertModule", # Internal building block (should have been called SqueezeBertLayer)
"TFDPRPretrainedReader", # Like an Encoder.
"TransfoXLCorpus", # Internal type.
"WordpieceTokenizer", # Internal, should never have been in the main init.
"absl", # External module
"add_end_docstrings", # Internal, should never have been in the main init.
"add_start_docstrings", # Internal, should never have been in the main init.
"convert_tf_weight_name_to_pt_weight_name", # Internal used to convert model weights
"logger", # Internal logger
"logging", # External module
"requires_backends", # Internal function
"AltRobertaModel", # Internal module
]
# This list should be empty. Objects in it should get their own doc page.
SHOULD_HAVE_THEIR_OWN_PAGE = [
# Benchmarks
"PyTorchBenchmark",
"PyTorchBenchmarkArguments",
"TensorFlowBenchmark",
"TensorFlowBenchmarkArguments",
"AutoBackbone",
"BeitBackbone",
"BitBackbone",
"ConvNextBackbone",
"ConvNextV2Backbone",
"DinatBackbone",
"Dinov2Backbone",
"FocalNetBackbone",
"MaskFormerSwinBackbone",
"MaskFormerSwinConfig",
"MaskFormerSwinModel",
"NatBackbone",
"PvtV2Backbone",
"ResNetBackbone",
"SwinBackbone",
"Swinv2Backbone",
"TimmBackbone",
"TimmBackboneConfig",
"VitDetBackbone",
]
def ignore_undocumented(name: str) -> bool:
"""Rules to determine if `name` should be undocumented (returns `True` if it should not be documented)."""
# NOT DOCUMENTED ON PURPOSE.
# Constants uppercase are not documented.
if name.isupper():
return True
# PreTrainedModels / Encoders / Decoders / Layers / Embeddings / Attention are not documented.
if (
name.endswith("PreTrainedModel")
or name.endswith("Decoder")
or name.endswith("Encoder")
or name.endswith("Layer")
or name.endswith("Embeddings")
or name.endswith("Attention")
):
return True
# Submodules are not documented.
if os.path.isdir(os.path.join(PATH_TO_TRANSFORMERS, name)) or os.path.isfile(
os.path.join(PATH_TO_TRANSFORMERS, f"{name}.py")
):
return True
# All load functions are not documented.
if name.startswith("load_tf") or name.startswith("load_pytorch"):
return True
# is_xxx_available functions are not documented.
if name.startswith("is_") and name.endswith("_available"):
return True
# Deprecated objects are not documented.
if name in DEPRECATED_OBJECTS or name in UNDOCUMENTED_OBJECTS:
return True
# MMBT model does not really work.
if name.startswith("MMBT"):
return True
if name in SHOULD_HAVE_THEIR_OWN_PAGE:
return True
return False
def check_all_objects_are_documented():
"""Check all models are properly documented."""
documented_objs = find_all_documented_objects()
modules = transformers._modules
objects = [c for c in dir(transformers) if c not in modules and not c.startswith("_")]
undocumented_objs = [c for c in objects if c not in documented_objs and not ignore_undocumented(c)]
if len(undocumented_objs) > 0:
raise Exception(
"The following objects are in the public init so should be documented:\n - "
+ "\n - ".join(undocumented_objs)
)
check_docstrings_are_in_md()
check_model_type_doc_match()
def check_model_type_doc_match():
"""Check all doc pages have a corresponding model type."""
model_doc_folder = Path(PATH_TO_DOC) / "model_doc"
model_docs = [m.stem for m in model_doc_folder.glob("*.md")]
model_types = list(transformers.models.auto.configuration_auto.MODEL_NAMES_MAPPING.keys())
model_types = [MODEL_TYPE_TO_DOC_MAPPING[m] if m in MODEL_TYPE_TO_DOC_MAPPING else m for m in model_types]
errors = []
for m in model_docs:
if m not in model_types and m != "auto":
close_matches = get_close_matches(m, model_types)
error_message = f"{m} is not a proper model identifier."
if len(close_matches) > 0:
close_matches = "/".join(close_matches)
error_message += f" Did you mean {close_matches}?"
errors.append(error_message)
if len(errors) > 0:
raise ValueError(
"Some model doc pages do not match any existing model type:\n"
+ "\n".join(errors)
+ "\nYou can add any missing model type to the `MODEL_NAMES_MAPPING` constant in "
"models/auto/configuration_auto.py."
)
# Re pattern to catch :obj:`xx`, :class:`xx`, :func:`xx` or :meth:`xx`.
_re_rst_special_words = re.compile(r":(?:obj|func|class|meth):`([^`]+)`")
# Re pattern to catch things between double backquotes.
_re_double_backquotes = re.compile(r"(^|[^`])``([^`]+)``([^`]|$)")
# Re pattern to catch example introduction.
_re_rst_example = re.compile(r"^\s*Example.*::\s*$", flags=re.MULTILINE)
def is_rst_docstring(docstring: str) -> bool:
"""
Returns `True` if `docstring` is written in rst.
"""
if _re_rst_special_words.search(docstring) is not None:
return True
if _re_double_backquotes.search(docstring) is not None:
return True
if _re_rst_example.search(docstring) is not None:
return True
return False
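# Illustrative sketch, not used by any check below: two hypothetical docstrings showing which markers
# `is_rst_docstring` reacts to.
_RST_DOCSTRING_EXAMPLE = "Returns a :class:`BertModel` instance."  # rst role -> is_rst_docstring returns True
_MD_DOCSTRING_EXAMPLE = "Returns a [`BertModel`] instance."  # Markdown syntax -> is_rst_docstring returns False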
def check_docstrings_are_in_md():
"""Check all docstrings are written in md and nor rst."""
files_with_rst = []
for file in Path(PATH_TO_TRANSFORMERS).glob("**/*.py"):
with open(file, encoding="utf-8") as f:
code = f.read()
docstrings = code.split('"""')
for idx, docstring in enumerate(docstrings):
if idx % 2 == 0 or not is_rst_docstring(docstring):
continue
files_with_rst.append(file)
break
if len(files_with_rst) > 0:
raise ValueError(
"The following files have docstrings written in rst:\n"
+ "\n".join([f"- {f}" for f in files_with_rst])
+ "\nTo fix this run `doc-builder convert path_to_py_file` after installing `doc-builder`\n"
"(`pip install git+https://github.com/huggingface/doc-builder`)"
)
def check_deprecated_constant_is_up_to_date():
"""
Check if the constant `DEPRECATED_MODELS` in `models/auto/configuration_auto.py` is up to date.
"""
deprecated_folder = os.path.join(PATH_TO_TRANSFORMERS, "models", "deprecated")
deprecated_models = [m for m in os.listdir(deprecated_folder) if not m.startswith("_")]
constant_to_check = transformers.models.auto.configuration_auto.DEPRECATED_MODELS
message = []
missing_models = sorted(set(deprecated_models) - set(constant_to_check))
if len(missing_models) != 0:
missing_models = ", ".join(missing_models)
message.append(
"The following models are in the deprecated folder, make sure to add them to `DEPRECATED_MODELS` in "
f"`models/auto/configuration_auto.py`: {missing_models}."
)
extra_models = sorted(set(constant_to_check) - set(deprecated_models))
if len(extra_models) != 0:
extra_models = ", ".join(extra_models)
message.append(
"The following models are in the `DEPRECATED_MODELS` constant but not in the deprecated folder. Either "
f"remove them from the constant or move to the deprecated folder: {extra_models}."
)
if len(message) > 0:
raise Exception("\n".join(message))
def check_repo_quality():
"""Check all models are properly tested and documented."""
print("Checking all models are included.")
check_model_list()
print("Checking all models are public.")
check_models_are_in_init()
print("Checking all models are properly tested.")
check_all_decorator_order()
check_all_models_are_tested()
print("Checking all objects are properly documented.")
check_all_objects_are_documented()
print("Checking all models are in at least one auto class.")
check_all_models_are_auto_configured()
print("Checking all names in auto name mappings are defined.")
check_all_auto_object_names_being_defined()
print("Checking all keys in auto name mappings are defined in `CONFIG_MAPPING_NAMES`.")
check_all_auto_mapping_names_in_config_mapping_names()
print("Checking all auto mappings could be imported.")
check_all_auto_mappings_importable()
print("Checking all objects are equally (across frameworks) in the main __init__.")
check_objects_being_equally_in_main_init()
print("Checking the DEPRECATED_MODELS constant is up to date.")
check_deprecated_constant_is_up_to_date()
if __name__ == "__main__":
check_repo_quality()
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/tests_fetcher.py | # coding=utf-8
# Copyright 2021 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Welcome to tests_fetcher V2.
This util is designed to fetch tests to run on a PR so that only the tests impacted by the modifications are run, and
when too many models are being impacted, only run the tests of a subset of core models. It works like this.
Stage 1: Identify the modified files. For jobs that run on the main branch, it's just the diff with the last commit.
On a PR, this takes all the files from the branching point to the current commit (so all modifications in a PR, not
just the last commit) but excludes modifications that are on docstrings or comments only.
Stage 2: Extract the tests to run. This is done by looking at the imports in each module and test file: if module A
imports module B, then changing module B impacts module A, so the tests using module A should be run. We thus get the
dependencies of each module and then recursively build the 'reverse' map of dependencies to get all modules and tests
impacted by a given file. We then only keep the tests (and only the core models tests if there are too many modules).
Caveats:
- This module only filters tests by files (not individual tests) so it's better to have tests for different things
in different files.
- This module assumes inits are just importing things, not really building objects, so it's better to structure
    them this way and move object building into separate submodules.
Usage:
Base use to fetch the tests in a pull request
```bash
python utils/tests_fetcher.py
```
Base use to fetch the tests on the main branch (with diff from the last commit):
```bash
python utils/tests_fetcher.py --diff_with_last_commit
```
"""
import argparse
import collections
import importlib.util
import json
import os
import re
import tempfile
from contextlib import contextmanager
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union
from git import Repo
PATH_TO_REPO = Path(__file__).parent.parent.resolve()
PATH_TO_EXAMPLES = PATH_TO_REPO / "examples"
PATH_TO_TRANFORMERS = PATH_TO_REPO / "src/transformers"
PATH_TO_TESTS = PATH_TO_REPO / "tests"
# The value is just a heuristic to determine if we `guess` all models are impacted.
# This variable has effect only if `filter_models=False`.
NUM_MODELS_TO_TRIGGER_FULL_CI = 30
# List here the models to always test.
IMPORTANT_MODELS = [
"auto",
# Most downloaded models
"bert",
"clip",
"t5",
"xlm-roberta",
"gpt2",
"bart",
"mpnet",
"gpt-j",
"wav2vec2",
"deberta-v2",
"layoutlm",
"llama",
"opt",
"longformer",
"vit",
"whisper",
# Pipeline-specific model (to be sure each pipeline has one model in this list)
"tapas",
"vilt",
"clap",
"detr",
"owlvit",
"dpt",
"videomae",
]
@contextmanager
def checkout_commit(repo: Repo, commit_id: str):
"""
Context manager that checks out a given commit when entered, but gets back to the reference it was at on exit.
Args:
repo (`git.Repo`): A git repository (for instance the Transformers repo).
commit_id (`str`): The commit reference to checkout inside the context manager.
"""
current_head = repo.head.commit if repo.head.is_detached else repo.head.ref
try:
repo.git.checkout(commit_id)
yield
finally:
repo.git.checkout(current_head)
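# Illustrative usage sketch, not called anywhere in this script: how `checkout_commit` is meant to be used to
# read a file as it was at another commit. The helper name `_example_read_file_at_commit` is hypothetical.
def _example_read_file_at_commit(repo: Repo, commit_id: str, filename: str) -> str:
    with checkout_commit(repo, commit_id):
        with open(Path(repo.working_dir) / filename, "r", encoding="utf-8") as f:
            return f.read()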
def clean_code(content: str) -> str:
"""
    Remove docstrings, empty lines or comments from some code (used to detect if a diff is real or only concerns
    comments or docstrings).
Args:
content (`str`): The code to clean
Returns:
`str`: The cleaned code.
"""
# We need to deactivate autoformatting here to write escaped triple quotes (we cannot use real triple quotes or
# this would mess up the result if this function applied to this particular file).
# fmt: off
# Remove docstrings by splitting on triple " then triple ':
splits = content.split('\"\"\"')
content = "".join(splits[::2])
splits = content.split("\'\'\'")
# fmt: on
content = "".join(splits[::2])
# Remove empty lines and comments
lines_to_keep = []
for line in content.split("\n"):
# remove anything that is after a # sign.
line = re.sub("#.*$", "", line)
# remove white lines
if len(line) != 0 and not line.isspace():
lines_to_keep.append(line)
return "\n".join(lines_to_keep)
def keep_doc_examples_only(content: str) -> str:
"""
    Remove everything from the code content except the doc examples (used to determine if a diff should trigger doc
tests or not).
Args:
content (`str`): The code to clean
Returns:
`str`: The cleaned code.
"""
# Keep doc examples only by splitting on triple "`"
splits = content.split("```")
# Add leading and trailing "```" so the navigation is easier when compared to the original input `content`
content = "```" + "```".join(splits[1::2]) + "```"
# Remove empty lines and comments
lines_to_keep = []
for line in content.split("\n"):
# remove anything that is after a # sign.
line = re.sub("#.*$", "", line)
# remove white lines
if len(line) != 0 and not line.isspace():
lines_to_keep.append(line)
return "\n".join(lines_to_keep)
def get_all_tests() -> List[str]:
"""
Walks the `tests` folder to return a list of files/subfolders. This is used to split the tests to run when using
    parallelism. The split is:
- folders under `tests`: (`tokenization`, `pipelines`, etc) except the subfolder `models` is excluded.
- folders under `tests/models`: `bert`, `gpt2`, etc.
- test files under `tests`: `test_modeling_common.py`, `test_tokenization_common.py`, etc.
"""
# test folders/files directly under `tests` folder
tests = os.listdir(PATH_TO_TESTS)
tests = [f"tests/{f}" for f in tests if "__pycache__" not in f]
tests = sorted([f for f in tests if (PATH_TO_REPO / f).is_dir() or f.startswith("tests/test_")])
# model specific test folders
model_test_folders = os.listdir(PATH_TO_TESTS / "models")
model_test_folders = [f"tests/models/{f}" for f in model_test_folders if "__pycache__" not in f]
model_test_folders = sorted([f for f in model_test_folders if (PATH_TO_REPO / f).is_dir()])
tests.remove("tests/models")
# Sagemaker tests are not meant to be run on the CI.
if "tests/sagemaker" in tests:
tests.remove("tests/sagemaker")
tests = model_test_folders + tests
return tests
def diff_is_docstring_only(repo: Repo, branching_point: str, filename: str) -> bool:
"""
Check if the diff is only in docstrings (or comments and whitespace) in a filename.
Args:
repo (`git.Repo`): A git repository (for instance the Transformers repo).
branching_point (`str`): The commit reference of where to compare for the diff.
        filename (`str`): The filename where we want to know if the diff is only in docstrings/comments.
Returns:
`bool`: Whether the diff is docstring/comments only or not.
"""
folder = Path(repo.working_dir)
with checkout_commit(repo, branching_point):
with open(folder / filename, "r", encoding="utf-8") as f:
old_content = f.read()
with open(folder / filename, "r", encoding="utf-8") as f:
new_content = f.read()
old_content_clean = clean_code(old_content)
new_content_clean = clean_code(new_content)
return old_content_clean == new_content_clean
def diff_contains_doc_examples(repo: Repo, branching_point: str, filename: str) -> bool:
"""
Check if the diff is only in code examples of the doc in a filename.
Args:
repo (`git.Repo`): A git repository (for instance the Transformers repo).
branching_point (`str`): The commit reference of where to compare for the diff.
        filename (`str`): The filename where we want to know if the diff is only in code examples.
Returns:
`bool`: Whether the diff is only in code examples of the doc or not.
"""
folder = Path(repo.working_dir)
with checkout_commit(repo, branching_point):
with open(folder / filename, "r", encoding="utf-8") as f:
old_content = f.read()
with open(folder / filename, "r", encoding="utf-8") as f:
new_content = f.read()
old_content_clean = keep_doc_examples_only(old_content)
new_content_clean = keep_doc_examples_only(new_content)
return old_content_clean != new_content_clean
def get_impacted_files_from_tiny_model_summary(diff_with_last_commit: bool = False) -> List[str]:
"""
Return a list of python modeling files that are impacted by the changes of `tiny_model_summary.json` in between:
- the current head and the main branch if `diff_with_last_commit=False` (default)
- the current head and its parent commit otherwise.
Returns:
`List[str]`: The list of Python modeling files that are impacted by the changes of `tiny_model_summary.json`.
"""
repo = Repo(PATH_TO_REPO)
folder = Path(repo.working_dir)
if not diff_with_last_commit:
print(f"main is at {repo.refs.main.commit}")
print(f"Current head is at {repo.head.commit}")
commits = repo.merge_base(repo.refs.main, repo.head)
for commit in commits:
print(f"Branching commit: {commit}")
else:
print(f"main is at {repo.head.commit}")
commits = repo.head.commit.parents
for commit in commits:
print(f"Parent commit: {commit}")
if not os.path.isfile(folder / "tests/utils/tiny_model_summary.json"):
return []
files = set()
for commit in commits:
with checkout_commit(repo, commit):
with open(folder / "tests/utils/tiny_model_summary.json", "r", encoding="utf-8") as f:
old_content = f.read()
with open(folder / "tests/utils/tiny_model_summary.json", "r", encoding="utf-8") as f:
new_content = f.read()
# get the content as json object
old_content = json.loads(old_content)
new_content = json.loads(new_content)
old_keys = set(old_content.keys())
new_keys = set(new_content.keys())
# get the difference
keys_with_diff = old_keys.symmetric_difference(new_keys)
common_keys = old_keys.intersection(new_keys)
# if both have the same key, check its content
for key in common_keys:
if old_content[key] != new_content[key]:
keys_with_diff.add(key)
# get the model classes
impacted_model_classes = []
for key in keys_with_diff:
if key in new_keys:
impacted_model_classes.extend(new_content[key]["model_classes"])
# get the module where the model classes are defined. We want to use the main `__init__` file, but it requires
    # all the frameworks to be installed, which is not ideal for a simple script like the test fetcher.
# So we create a temporary and modified main `__init__` and access its `_import_structure`.
with open(folder / "src/transformers/__init__.py") as fp:
lines = fp.readlines()
new_lines = []
# Get all the code related to `_import_structure`
for line in lines:
if line == "_import_structure = {\n":
new_lines.append(line)
elif line == "# Direct imports for type-checking\n":
break
elif len(new_lines) > 0:
# bypass the framework check so we can get all the information even if frameworks are not available
line = re.sub(r"is_.+_available\(\)", "True", line)
line = line.replace("OptionalDependencyNotAvailable", "Exception")
line = line.replace("Exception()", "Exception")
new_lines.append(line)
# create and load the temporary module
with tempfile.TemporaryDirectory() as tmpdirname:
with open(os.path.join(tmpdirname, "temp_init.py"), "w") as fp:
fp.write("".join(new_lines))
spec = importlib.util.spec_from_file_location("temp_init", os.path.join(tmpdirname, "temp_init.py"))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
# Finally, get `_import_structure` that we need
import_structure = module._import_structure
# map model classes to their defined module
reversed_structure = {}
for key, values in import_structure.items():
for value in values:
reversed_structure[value] = key
# Get the corresponding modeling file path
for model_class in impacted_model_classes:
module = reversed_structure[model_class]
framework = ""
if model_class.startswith("TF"):
framework = "tf"
elif model_class.startswith("Flax"):
framework = "flax"
fn = (
f"modeling_{module.split('.')[-1]}.py"
if framework == ""
else f"modeling_{framework}_{module.split('.')[-1]}.py"
)
files.add(
f"src.transformers.{module}.{fn}".replace(".", os.path.sep).replace(f"{os.path.sep}py", ".py")
)
return sorted(files)
def get_diff(repo: Repo, base_commit: str, commits: List[str]) -> List[str]:
"""
Get the diff between a base commit and one or several commits.
Args:
repo (`git.Repo`):
A git repository (for instance the Transformers repo).
base_commit (`str`):
The commit reference of where to compare for the diff. This is the current commit, not the branching point!
commits (`List[str]`):
The list of commits with which to compare the repo at `base_commit` (so the branching point).
Returns:
`List[str]`: The list of Python files with a diff (files added, renamed or deleted are always returned, files
modified are returned if the diff in the file is not only in docstrings or comments, see
`diff_is_docstring_only`).
"""
print("\n### DIFF ###\n")
code_diff = []
for commit in commits:
for diff_obj in commit.diff(base_commit):
# We always add new python files
if diff_obj.change_type == "A" and diff_obj.b_path.endswith(".py"):
code_diff.append(diff_obj.b_path)
# We check that deleted python files won't break corresponding tests.
elif diff_obj.change_type == "D" and diff_obj.a_path.endswith(".py"):
code_diff.append(diff_obj.a_path)
# Now for modified files
elif diff_obj.change_type in ["M", "R"] and diff_obj.b_path.endswith(".py"):
# In case of renames, we'll look at the tests using both the old and new name.
if diff_obj.a_path != diff_obj.b_path:
code_diff.extend([diff_obj.a_path, diff_obj.b_path])
else:
# Otherwise, we check modifications are in code and not docstrings.
if diff_is_docstring_only(repo, commit, diff_obj.b_path):
print(f"Ignoring diff in {diff_obj.b_path} as it only concerns docstrings or comments.")
else:
code_diff.append(diff_obj.a_path)
return code_diff
def get_modified_python_files(diff_with_last_commit: bool = False) -> List[str]:
"""
Return a list of python files that have been modified between:
- the current head and the main branch if `diff_with_last_commit=False` (default)
- the current head and its parent commit otherwise.
Returns:
`List[str]`: The list of Python files with a diff (files added, renamed or deleted are always returned, files
modified are returned if the diff in the file is not only in docstrings or comments, see
`diff_is_docstring_only`).
"""
repo = Repo(PATH_TO_REPO)
if not diff_with_last_commit:
print(f"main is at {repo.refs.main.commit}")
print(f"Current head is at {repo.head.commit}")
branching_commits = repo.merge_base(repo.refs.main, repo.head)
for commit in branching_commits:
print(f"Branching commit: {commit}")
return get_diff(repo, repo.head.commit, branching_commits)
else:
print(f"main is at {repo.head.commit}")
parent_commits = repo.head.commit.parents
for commit in parent_commits:
print(f"Parent commit: {commit}")
return get_diff(repo, repo.head.commit, parent_commits)
def get_diff_for_doctesting(repo: Repo, base_commit: str, commits: List[str]) -> List[str]:
"""
Get the diff in doc examples between a base commit and one or several commits.
Args:
repo (`git.Repo`):
A git repository (for instance the Transformers repo).
base_commit (`str`):
The commit reference of where to compare for the diff. This is the current commit, not the branching point!
commits (`List[str]`):
The list of commits with which to compare the repo at `base_commit` (so the branching point).
Returns:
`List[str]`: The list of Python and Markdown files with a diff (files added or renamed are always returned, files
modified are returned if the diff in the file is only in doctest examples).
"""
print("\n### DIFF ###\n")
code_diff = []
for commit in commits:
for diff_obj in commit.diff(base_commit):
# We only consider Python files and doc files.
if not diff_obj.b_path.endswith(".py") and not diff_obj.b_path.endswith(".md"):
continue
# We always add new python/md files
if diff_obj.change_type in ["A"]:
code_diff.append(diff_obj.b_path)
# Now for modified files
elif diff_obj.change_type in ["M", "R"]:
# In case of renames, we'll look at the tests using both the old and new name.
if diff_obj.a_path != diff_obj.b_path:
code_diff.extend([diff_obj.a_path, diff_obj.b_path])
else:
# Otherwise, we check modifications contain some doc example(s).
if diff_contains_doc_examples(repo, commit, diff_obj.b_path):
code_diff.append(diff_obj.a_path)
else:
print(f"Ignoring diff in {diff_obj.b_path} as it doesn't contain any doc example.")
return code_diff
def get_all_doctest_files() -> List[str]:
"""
Return the complete list of python and Markdown files on which we run doctest.
At this moment, we restrict this to only take files from `src/` or `docs/source/en/` that are not in `utils/not_doctested.txt`.
Returns:
`List[str]`: The complete list of Python and Markdown files on which we run doctest.
"""
py_files = [str(x.relative_to(PATH_TO_REPO)) for x in PATH_TO_REPO.glob("**/*.py")]
md_files = [str(x.relative_to(PATH_TO_REPO)) for x in PATH_TO_REPO.glob("**/*.md")]
test_files_to_run = py_files + md_files
# change to use "/" as path separator
test_files_to_run = ["/".join(Path(x).parts) for x in test_files_to_run]
# don't run doctest for files in `src/transformers/models/deprecated`
test_files_to_run = [x for x in test_files_to_run if "models/deprecated" not in x]
# only include files in `src` or `docs/source/en/`
test_files_to_run = [x for x in test_files_to_run if x.startswith(("src/", "docs/source/en/"))]
# not include init files
test_files_to_run = [x for x in test_files_to_run if not x.endswith(("__init__.py",))]
# These are files not doctested yet.
with open("utils/not_doctested.txt") as fp:
not_doctested = {x.split(" ")[0] for x in fp.read().strip().split("\n")}
# So far we don't have 100% coverage for doctest. This line will be removed once we achieve 100%.
test_files_to_run = [x for x in test_files_to_run if x not in not_doctested]
return sorted(test_files_to_run)
def get_new_doctest_files(repo, base_commit, branching_commit) -> List[str]:
"""
Get the list of files that were removed from "utils/not_doctested.txt", between `base_commit` and
`branching_commit`.
Returns:
`List[str]`: List of files that were removed from "utils/not_doctested.txt".
"""
for diff_obj in branching_commit.diff(base_commit):
# Ignores all but the "utils/not_doctested.txt" file.
if diff_obj.a_path != "utils/not_doctested.txt":
continue
# Loads the two versions
folder = Path(repo.working_dir)
with checkout_commit(repo, branching_commit):
with open(folder / "utils/not_doctested.txt", "r", encoding="utf-8") as f:
old_content = f.read()
with open(folder / "utils/not_doctested.txt", "r", encoding="utf-8") as f:
new_content = f.read()
# Compute the removed lines and return them
removed_content = {x.split(" ")[0] for x in old_content.split("\n")} - {
x.split(" ")[0] for x in new_content.split("\n")
}
return sorted(removed_content)
return []
def get_doctest_files(diff_with_last_commit: bool = False) -> List[str]:
"""
Return a list of python and Markdown files where doc example have been modified between:
- the current head and the main branch if `diff_with_last_commit=False` (default)
- the current head and its parent commit otherwise.
Returns:
`List[str]`: The list of Python and Markdown files with a diff (files added or renamed are always returned, files
modified are returned if the diff in the file is only in doctest examples).
"""
repo = Repo(PATH_TO_REPO)
test_files_to_run = [] # noqa
if not diff_with_last_commit:
print(f"main is at {repo.refs.main.commit}")
print(f"Current head is at {repo.head.commit}")
branching_commits = repo.merge_base(repo.refs.main, repo.head)
for commit in branching_commits:
print(f"Branching commit: {commit}")
test_files_to_run = get_diff_for_doctesting(repo, repo.head.commit, branching_commits)
else:
print(f"main is at {repo.head.commit}")
parent_commits = repo.head.commit.parents
for commit in parent_commits:
print(f"Parent commit: {commit}")
test_files_to_run = get_diff_for_doctesting(repo, repo.head.commit, parent_commits)
all_test_files_to_run = get_all_doctest_files()
# Add to the test files to run any removed entry from "utils/not_doctested.txt".
new_test_files = get_new_doctest_files(repo, repo.head.commit, repo.refs.main.commit)
test_files_to_run = list(set(test_files_to_run + new_test_files))
# Do not run slow doctest tests on CircleCI
with open("utils/slow_documentation_tests.txt") as fp:
slow_documentation_tests = set(fp.read().strip().split("\n"))
test_files_to_run = [
x for x in test_files_to_run if x in all_test_files_to_run and x not in slow_documentation_tests
]
# Make sure we did not end up with a test file that was removed
test_files_to_run = [f for f in test_files_to_run if (PATH_TO_REPO / f).exists()]
return sorted(test_files_to_run)
# (?:^|\n) -> Non-capturing group for the beginning of the doc or a new line.
# \s*from\s+(\.+\S+)\s+import\s+([^\n]+) -> Line only contains from .xxx import yyy and we catch .xxx and yyy
# (?=\n) -> Look-ahead to a new line. We can't just put \n here or using find_all on this re will only catch every
# other import.
_re_single_line_relative_imports = re.compile(r"(?:^|\n)\s*from\s+(\.+\S+)\s+import\s+([^\n]+)(?=\n)")
# (?:^|\n) -> Non-capturing group for the beginning of the doc or a new line.
# \s*from\s+(\.+\S+)\s+import\s+\(([^\)]+)\) -> Line continues with from .xxx import (yyy) and we catch .xxx and yyy
# yyy will take multiple lines otherwise there wouldn't be parenthesis.
_re_multi_line_relative_imports = re.compile(r"(?:^|\n)\s*from\s+(\.+\S+)\s+import\s+\(([^\)]+)\)")
# (?:^|\n) -> Non-capturing group for the beginning of the doc or a new line.
# \s*from\s+transformers(\S*)\s+import\s+([^\n]+) -> Line only contains from transformers.xxx import yyy and we catch
# .xxx and yyy
# (?=\n) -> Look-ahead to a new line. We can't just put \n here or using find_all on this re will only catch every
# other import.
_re_single_line_direct_imports = re.compile(r"(?:^|\n)\s*from\s+transformers(\S*)\s+import\s+([^\n]+)(?=\n)")
# (?:^|\n) -> Non-capturing group for the beginning of the doc or a new line.
# \s*from\s+transformers(\S*)\s+import\s+\(([^\)]+)\) -> Line continues with from transformers.xxx import (yyy) and we
# catch .xxx and yyy. yyy will take multiple lines otherwise there wouldn't be parenthesis.
_re_multi_line_direct_imports = re.compile(r"(?:^|\n)\s*from\s+transformers(\S*)\s+import\s+\(([^\)]+)\)")
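# Illustrative sketch, not used by the fetcher: what the single-line import regexes above capture on two
# hypothetical import lines. The helper name `_demo_import_regexes` is made up for documentation only.
def _demo_import_regexes() -> List[Tuple[str, str]]:
    sample = "\nfrom ..utils import logging\nfrom transformers.models.bert import BertModel\n"
    matches = _re_single_line_relative_imports.findall(sample)
    # -> [('..utils', 'logging')]
    matches += _re_single_line_direct_imports.findall(sample)
    # -> adds ('.models.bert', 'BertModel')
    return matches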
def extract_imports(module_fname: str, cache: Dict[str, List[str]] = None) -> List[str]:
"""
Get the imports a given module makes.
Args:
module_fname (`str`):
The name of the file of the module where we want to look at the imports (given relative to the root of
the repo).
cache (Dictionary `str` to `List[str]`, *optional*):
To speed up this function if it was previously called on `module_fname`, the cache of all previously
computed results.
Returns:
`List[str]`: The list of module filenames imported in the input `module_fname` (a submodule we import from that
is a subfolder will give its init file).
"""
if cache is not None and module_fname in cache:
return cache[module_fname]
with open(PATH_TO_REPO / module_fname, "r", encoding="utf-8") as f:
content = f.read()
# Filter out all docstrings to not get imports in code examples. As before we need to deactivate formatting to
# keep this as escaped quotes and avoid this function failing on this file.
splits = content.split('\"\"\"') # fmt: skip
content = "".join(splits[::2])
module_parts = str(module_fname).split(os.path.sep)
imported_modules = []
# Let's start with relative imports
relative_imports = _re_single_line_relative_imports.findall(content)
relative_imports = [
(mod, imp) for mod, imp in relative_imports if "# tests_ignore" not in imp and imp.strip() != "("
]
multiline_relative_imports = _re_multi_line_relative_imports.findall(content)
relative_imports += [(mod, imp) for mod, imp in multiline_relative_imports if "# tests_ignore" not in imp]
# We need to remove parts of the module name depending on the depth of the relative imports.
for module, imports in relative_imports:
level = 0
while module.startswith("."):
module = module[1:]
level += 1
if len(module) > 0:
dep_parts = module_parts[: len(module_parts) - level] + module.split(".")
else:
dep_parts = module_parts[: len(module_parts) - level]
imported_module = os.path.sep.join(dep_parts)
imported_modules.append((imported_module, [imp.strip() for imp in imports.split(",")]))
# Let's continue with direct imports
direct_imports = _re_single_line_direct_imports.findall(content)
direct_imports = [(mod, imp) for mod, imp in direct_imports if "# tests_ignore" not in imp and imp.strip() != "("]
multiline_direct_imports = _re_multi_line_direct_imports.findall(content)
direct_imports += [(mod, imp) for mod, imp in multiline_direct_imports if "# tests_ignore" not in imp]
# We need to find the relative path of those imports.
for module, imports in direct_imports:
import_parts = module.split(".")[1:] # ignore the name of the repo since we add it below.
dep_parts = ["src", "transformers"] + import_parts
imported_module = os.path.sep.join(dep_parts)
imported_modules.append((imported_module, [imp.strip() for imp in imports.split(",")]))
result = []
# Double check we get proper modules (either a python file or a folder with an init).
for module_file, imports in imported_modules:
if (PATH_TO_REPO / f"{module_file}.py").is_file():
module_file = f"{module_file}.py"
elif (PATH_TO_REPO / module_file).is_dir() and (PATH_TO_REPO / module_file / "__init__.py").is_file():
module_file = os.path.sep.join([module_file, "__init__.py"])
imports = [imp for imp in imports if len(imp) > 0 and re.match("^[A-Za-z0-9_]*$", imp)]
if len(imports) > 0:
result.append((module_file, imports))
if cache is not None:
cache[module_fname] = result
return result
def get_module_dependencies(module_fname: str, cache: Dict[str, List[str]] = None) -> List[str]:
"""
Refines the result of `extract_imports` to remove subfolders and get a proper list of module filenames: if a file
    has an import `from utils import Foo, Bar`, with `utils` being a subfolder containing many files, this will traverse
the `utils` init file to check where those dependencies come from: for instance the files utils/foo.py and utils/bar.py.
Warning: This presupposes that all intermediate inits are properly built (with imports from the respective
    submodules) and works better if objects are defined in submodules and not in the intermediate init (otherwise the
intermediate init is added, and inits usually have a lot of dependencies).
Args:
module_fname (`str`):
The name of the file of the module where we want to look at the imports (given relative to the root of
the repo).
cache (Dictionary `str` to `List[str]`, *optional*):
To speed up this function if it was previously called on `module_fname`, the cache of all previously
computed results.
Returns:
`List[str]`: The list of module filenames imported in the input `module_fname` (with submodule imports refined).
"""
dependencies = []
imported_modules = extract_imports(module_fname, cache=cache)
# The while loop is to recursively traverse all inits we may encounter: we will add things as we go.
while len(imported_modules) > 0:
new_modules = []
for module, imports in imported_modules:
# If we end up in an __init__ we are often not actually importing from this init (except in the case where
# the object is fully defined in the __init__)
if module.endswith("__init__.py"):
# So we get the imports from that init then try to find where our objects come from.
new_imported_modules = extract_imports(module, cache=cache)
for new_module, new_imports in new_imported_modules:
if any(i in new_imports for i in imports):
if new_module not in dependencies:
new_modules.append((new_module, [i for i in new_imports if i in imports]))
imports = [i for i in imports if i not in new_imports]
if len(imports) > 0:
# If there are any objects lefts, they may be a submodule
path_to_module = PATH_TO_REPO / module.replace("__init__.py", "")
dependencies.extend(
[
os.path.join(module.replace("__init__.py", ""), f"{i}.py")
for i in imports
if (path_to_module / f"{i}.py").is_file()
]
)
imports = [i for i in imports if not (path_to_module / f"{i}.py").is_file()]
if len(imports) > 0:
# Then if there are still objects left, they are fully defined in the init, so we keep it as a
# dependency.
dependencies.append(module)
else:
dependencies.append(module)
imported_modules = new_modules
return dependencies
def create_reverse_dependency_tree() -> List[Tuple[str, str]]:
"""
    Create a list of all edges (a, b), meaning that modifying file a impacts file b, going over all module and test files.
"""
cache = {}
all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py"))
all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules]
edges = [(dep, mod) for mod in all_modules for dep in get_module_dependencies(mod, cache=cache)]
return list(set(edges))
def get_tree_starting_at(module: str, edges: List[Tuple[str, str]]) -> List[Union[str, List[str]]]:
"""
Returns the tree starting at a given module following all edges.
Args:
module (`str`): The module that will be the root of the subtree we want.
        edges (`List[Tuple[str, str]]`): The list of all edges of the tree.
Returns:
`List[Union[str, List[str]]]`: The tree to print in the following format: [module, [list of edges
starting at module], [list of edges starting at the preceding level], ...]
"""
vertices_seen = [module]
new_edges = [edge for edge in edges if edge[0] == module and edge[1] != module and "__init__.py" not in edge[1]]
tree = [module]
while len(new_edges) > 0:
tree.append(new_edges)
final_vertices = list({edge[1] for edge in new_edges})
vertices_seen.extend(final_vertices)
new_edges = [
edge
for edge in edges
if edge[0] in final_vertices and edge[1] not in vertices_seen and "__init__.py" not in edge[1]
]
return tree
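# Illustrative sketch with toy edges, not used by the fetcher: the nested format returned by
# `get_tree_starting_at` is the root module followed by one list of edges per level, e.g.
# get_tree_starting_at("a.py", _TREE_FORMAT_EXAMPLE_EDGES) == ["a.py", [("a.py", "b.py")], [("b.py", "test_b.py")]]
_TREE_FORMAT_EXAMPLE_EDGES = [("a.py", "b.py"), ("b.py", "test_b.py")]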
def print_tree_deps_of(module, all_edges=None):
"""
Prints the tree of modules depending on a given module.
Args:
module (`str`): The module that will be the root of the subtree we want.
        all_edges (`List[Tuple[str, str]]`, *optional*):
The list of all edges of the tree. Will be set to `create_reverse_dependency_tree()` if not passed.
"""
if all_edges is None:
all_edges = create_reverse_dependency_tree()
tree = get_tree_starting_at(module, all_edges)
# The list of lines is a list of tuples (line_to_be_printed, module)
    # Keeping the modules lets us know where to insert each new line in the list.
lines = [(tree[0], tree[0])]
for index in range(1, len(tree)):
edges = tree[index]
start_edges = {edge[0] for edge in edges}
for start in start_edges:
end_edges = {edge[1] for edge in edges if edge[0] == start}
# We will insert all those edges just after the line showing start.
pos = 0
while lines[pos][1] != start:
pos += 1
lines = lines[: pos + 1] + [(" " * (2 * index) + end, end) for end in end_edges] + lines[pos + 1 :]
for line in lines:
        # We don't print the refs that were just here to help build lines.
print(line[0])
def init_test_examples_dependencies() -> Tuple[Dict[str, List[str]], List[str]]:
"""
    The test examples do not import from the examples (which are just scripts, not modules) so we need some extra
care initializing the dependency map, which is the goal of this function. It initializes the dependency map for
example files by linking each example to the example test file for the example framework.
Returns:
`Tuple[Dict[str, List[str]], List[str]]`: A tuple with two elements: the initialized dependency map which is a
        dict mapping each test example file to the list of example files potentially tested by that test file, and the list of all
example files (to avoid recomputing it later).
"""
test_example_deps = {}
all_examples = []
for framework in ["flax", "pytorch", "tensorflow"]:
test_files = list((PATH_TO_EXAMPLES / framework).glob("test_*.py"))
all_examples.extend(test_files)
        # Remove the files at the root of examples/framework since they are not proper examples (they are either utils
# or example test files).
examples = [
f for f in (PATH_TO_EXAMPLES / framework).glob("**/*.py") if f.parent != PATH_TO_EXAMPLES / framework
]
all_examples.extend(examples)
for test_file in test_files:
with open(test_file, "r", encoding="utf-8") as f:
content = f.read()
# Map all examples to the test files found in examples/framework.
test_example_deps[str(test_file.relative_to(PATH_TO_REPO))] = [
str(e.relative_to(PATH_TO_REPO)) for e in examples if e.name in content
]
# Also map the test files to themselves.
test_example_deps[str(test_file.relative_to(PATH_TO_REPO))].append(
str(test_file.relative_to(PATH_TO_REPO))
)
return test_example_deps, all_examples
def create_reverse_dependency_map() -> Dict[str, List[str]]:
"""
Create the dependency map from module/test filename to the list of modules/tests that depend on it recursively.
Returns:
`Dict[str, List[str]]`: The reverse dependency map as a dictionary mapping filenames to all the filenames
depending on it recursively. This way the tests impacted by a change in file A are the test files in the list
corresponding to key A in this result.
"""
cache = {}
# Start from the example deps init.
example_deps, examples = init_test_examples_dependencies()
# Add all modules and all tests to all examples
all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py")) + examples
all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules]
# Compute the direct dependencies of all modules.
direct_deps = {m: get_module_dependencies(m, cache=cache) for m in all_modules}
direct_deps.update(example_deps)
# This recurses the dependencies
something_changed = True
while something_changed:
something_changed = False
for m in all_modules:
for d in direct_deps[m]:
# We stop recursing at an init (cause we always end up in the main init and we don't want to add all
# files which the main init imports)
if d.endswith("__init__.py"):
continue
if d not in direct_deps:
raise ValueError(f"KeyError:{d}. From {m}")
new_deps = set(direct_deps[d]) - set(direct_deps[m])
if len(new_deps) > 0:
direct_deps[m].extend(list(new_deps))
something_changed = True
# Finally we can build the reverse map.
reverse_map = collections.defaultdict(list)
for m in all_modules:
for d in direct_deps[m]:
reverse_map[d].append(m)
# For inits, we don't do the reverse deps but the direct deps: if modifying an init, we want to make sure we test
# all the modules impacted by that init.
for m in [f for f in all_modules if f.endswith("__init__.py")]:
direct_deps = get_module_dependencies(m, cache=cache)
deps = sum([reverse_map[d] for d in direct_deps if not d.endswith("__init__.py")], direct_deps)
reverse_map[m] = list(set(deps) - {m})
return reverse_map
def create_module_to_test_map(
reverse_map: Dict[str, List[str]] = None, filter_models: bool = False
) -> Dict[str, List[str]]:
"""
Extract the tests from the reverse_dependency_map and potentially filters the model tests.
Args:
reverse_map (`Dict[str, List[str]]`, *optional*):
The reverse dependency map as created by `create_reverse_dependency_map`. Will default to the result of
that function if not provided.
filter_models (`bool`, *optional*, defaults to `False`):
Whether or not to filter model tests to only include core models if a file impacts a lot of models.
Returns:
`Dict[str, List[str]]`: A dictionary that maps each file to the tests to execute if that file was modified.
"""
if reverse_map is None:
reverse_map = create_reverse_dependency_map()
# Utility that tells us if a given file is a test (taking test examples into account)
def is_test(fname):
if fname.startswith("tests"):
return True
if fname.startswith("examples") and fname.split(os.path.sep)[-1].startswith("test"):
return True
return False
# Build the test map
test_map = {module: [f for f in deps if is_test(f)] for module, deps in reverse_map.items()}
if not filter_models:
return test_map
# Now we deal with the filtering if `filter_models` is True.
num_model_tests = len(list(PATH_TO_TESTS.glob("models/*")))
def has_many_models(tests):
# We filter to core models when a given file impacts more than half the model tests.
model_tests = {Path(t).parts[2] for t in tests if t.startswith("tests/models/")}
return len(model_tests) > num_model_tests // 2
# for each module (if specified in the argument `module`) of the form `models/my_model` (i.e. starting with it),
# we always keep the tests (those are already in the argument `tests`) which are in `tests/models/my_model`.
# This is to avoid them being excluded when a module has many impacted tests: the directly related test files should
# always be included!
def filter_tests(tests, module=""):
return [
t
for t in tests
if not t.startswith("tests/models/")
or Path(t).parts[2] in IMPORTANT_MODELS
# at this point, `t` is of the form `tests/models/my_model`, and we check if `models/my_model`
# (i.e. `parts[1:3]`) is in `module`.
or "/".join(Path(t).parts[1:3]) in module
]
return {
module: (filter_tests(tests, module=module) if has_many_models(tests) else tests)
for module, tests in test_map.items()
}
def check_imports_all_exist():
"""
Isn't used per se by the test fetcher but might be used later as a quality check. Putting this here for now so the
    code is not lost. This checks that all imports in a given file exist.
"""
cache = {}
all_modules = list(PATH_TO_TRANFORMERS.glob("**/*.py")) + list(PATH_TO_TESTS.glob("**/*.py"))
all_modules = [str(mod.relative_to(PATH_TO_REPO)) for mod in all_modules]
direct_deps = {m: get_module_dependencies(m, cache=cache) for m in all_modules}
for module, deps in direct_deps.items():
for dep in deps:
if not (PATH_TO_REPO / dep).is_file():
print(f"{module} has dependency on {dep} which does not exist.")
def _print_list(l) -> str:
"""
Pretty print a list of elements with one line per element and a - starting each line.
"""
return "\n".join([f"- {f}" for f in l])
def create_json_map(test_files_to_run: List[str], json_output_file: str):
"""
    Creates a map from a list of tests to run, to easily split them by category when running the slow tests in parallel.
Args:
test_files_to_run (`List[str]`): The list of tests to run.
json_output_file (`str`): The path where to store the built json map.
"""
if json_output_file is None:
return
test_map = {}
for test_file in test_files_to_run:
# `test_file` is a path to a test folder/file, starting with `tests/`. For example,
# - `tests/models/bert/test_modeling_bert.py` or `tests/models/bert`
# - `tests/trainer/test_trainer.py` or `tests/trainer`
# - `tests/test_modeling_common.py`
names = test_file.split(os.path.sep)
if names[1] == "models":
# take the part like `models/bert` for modeling tests
key = os.path.sep.join(names[1:3])
elif len(names) > 2 or not test_file.endswith(".py"):
# test folders under `tests` or python files under them
# take the part like tokenization, `pipeline`, etc. for other test categories
key = os.path.sep.join(names[1:2])
else:
# common test files directly under `tests/`
key = "common"
if key not in test_map:
test_map[key] = []
test_map[key].append(test_file)
# sort the keys & values
keys = sorted(test_map.keys())
test_map = {k: " ".join(sorted(test_map[k])) for k in keys}
with open(json_output_file, "w", encoding="UTF-8") as fp:
json.dump(test_map, fp, ensure_ascii=False)
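# Illustrative sketch (not part of the original script) of the json map written above, with
# hypothetical test files:
#   {
#       "models/bert": "tests/models/bert/test_modeling_bert.py",
#       "trainer": "tests/trainer/test_trainer.py",
#       "common": "tests/test_configuration_common.py",
#   }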
def infer_tests_to_run(
output_file: str,
diff_with_last_commit: bool = False,
filter_models: bool = True,
json_output_file: Optional[str] = None,
):
"""
The main function called by the test fetcher. Determines the tests to run from the diff.
Args:
output_file (`str`):
The path where to store the summary of the test fetcher analysis. Other files will be stored in the same
folder:
- examples_test_list.txt: The list of examples tests to run.
- test_repo_utils.txt: Will indicate if the repo utils tests should be run or not.
- doctest_list.txt: The list of doctests to run.
diff_with_last_commit (`bool`, *optional*, defaults to `False`):
Whether to analyze the diff with the last commit (for use on the main branch after a PR is merged) or with
the branching point from main (for use on each PR).
filter_models (`bool`, *optional*, defaults to `True`):
Whether or not to filter the tests to core models only, when a file modified results in a lot of model
tests.
json_output_file (`str`, *optional*):
The path where to store the json file mapping categories of tests to tests to run (used for parallelism or
the slow tests).
"""
modified_files = get_modified_python_files(diff_with_last_commit=diff_with_last_commit)
print(f"\n### MODIFIED FILES ###\n{_print_list(modified_files)}")
# Create the map that will give us all impacted modules.
reverse_map = create_reverse_dependency_map()
impacted_files = modified_files.copy()
for f in modified_files:
if f in reverse_map:
impacted_files.extend(reverse_map[f])
# Remove duplicates
impacted_files = sorted(set(impacted_files))
print(f"\n### IMPACTED FILES ###\n{_print_list(impacted_files)}")
model_impacted = {"/".join(x.split("/")[:3]) for x in impacted_files if x.startswith("tests/models/")}
# Grab the corresponding test files:
if any(x in modified_files for x in ["setup.py", ".circleci/create_circleci_config.py"]):
test_files_to_run = ["tests", "examples"]
repo_utils_launch = True
elif not filter_models and len(model_impacted) >= NUM_MODELS_TO_TRIGGER_FULL_CI:
print(
f"More than {NUM_MODELS_TO_TRIGGER_FULL_CI - 1} models are impacted and `filter_models=False`. CI is configured to test everything."
)
test_files_to_run = ["tests", "examples"]
repo_utils_launch = True
else:
# All modified tests need to be run.
test_files_to_run = [
f for f in modified_files if f.startswith("tests") and f.split(os.path.sep)[-1].startswith("test")
]
impacted_files = get_impacted_files_from_tiny_model_summary(diff_with_last_commit=diff_with_last_commit)
# Then we grab the corresponding test files.
test_map = create_module_to_test_map(reverse_map=reverse_map, filter_models=filter_models)
for f in modified_files + impacted_files:
if f in test_map:
test_files_to_run.extend(test_map[f])
test_files_to_run = sorted(set(test_files_to_run))
# Remove repo utils tests
test_files_to_run = [f for f in test_files_to_run if not f.split(os.path.sep)[1] == "repo_utils"]
# Remove SageMaker tests
test_files_to_run = [f for f in test_files_to_run if not f.split(os.path.sep)[1] == "sagemaker"]
# Make sure we did not end up with a test file that was removed
test_files_to_run = [f for f in test_files_to_run if (PATH_TO_REPO / f).exists()]
repo_utils_launch = any(f.split(os.path.sep)[0] == "utils" for f in modified_files)
if repo_utils_launch:
repo_util_file = Path(output_file).parent / "test_repo_utils.txt"
with open(repo_util_file, "w", encoding="utf-8") as f:
f.write("tests/repo_utils")
examples_tests_to_run = [f for f in test_files_to_run if f.startswith("examples")]
test_files_to_run = [f for f in test_files_to_run if not f.startswith("examples")]
print(f"\n### TEST TO RUN ###\n{_print_list(test_files_to_run)}")
if len(test_files_to_run) > 0:
with open(output_file, "w", encoding="utf-8") as f:
f.write(" ".join(test_files_to_run))
# Create a map that maps test categories to test files, i.e. `models/bert` -> [...test_modeling_bert.py, ...]
# Get all test directories (and some common test files) under `tests` and `tests/models` if `test_files_to_run`
# contains `tests` (i.e. when `setup.py` is changed).
if "tests" in test_files_to_run:
test_files_to_run = get_all_tests()
create_json_map(test_files_to_run, json_output_file)
print(f"\n### EXAMPLES TEST TO RUN ###\n{_print_list(examples_tests_to_run)}")
if len(examples_tests_to_run) > 0:
        # We use `all` in the case of `commit_flags["test_all"]`, as well as in `create_circleci_config.py`, for processing.
if examples_tests_to_run == ["examples"]:
examples_tests_to_run = ["all"]
example_file = Path(output_file).parent / "examples_test_list.txt"
with open(example_file, "w", encoding="utf-8") as f:
f.write(" ".join(examples_tests_to_run))
doctest_list = get_doctest_files()
print(f"\n### DOCTEST TO RUN ###\n{_print_list(doctest_list)}")
if len(doctest_list) > 0:
doctest_file = Path(output_file).parent / "doctest_list.txt"
with open(doctest_file, "w", encoding="utf-8") as f:
f.write(" ".join(doctest_list))
def filter_tests(output_file: str, filters: List[str]):
"""
Reads the content of the output file and filters out all the tests in a list of given folders.
Args:
output_file (`str` or `os.PathLike`): The path to the output file of the tests fetcher.
filters (`List[str]`): A list of folders to filter.
"""
if not os.path.isfile(output_file):
print("No test file found.")
return
with open(output_file, "r", encoding="utf-8") as f:
test_files = f.read().split(" ")
if len(test_files) == 0 or test_files == [""]:
print("No tests to filter.")
return
if test_files == ["tests"]:
test_files = [os.path.join("tests", f) for f in os.listdir("tests") if f not in ["__init__.py"] + filters]
else:
test_files = [f for f in test_files if f.split(os.path.sep)[1] not in filters]
with open(output_file, "w", encoding="utf-8") as f:
f.write(" ".join(test_files))
def parse_commit_message(commit_message: str) -> Dict[str, bool]:
"""
Parses the commit message to detect if a command is there to skip, force all or part of the CI.
Args:
commit_message (`str`): The commit message of the current commit.
Returns:
        `Dict[str, bool]`: A dictionary of strings to bools with the following keys: `"skip"`, `"no_filter"`
        and `"test_all"`.
"""
if commit_message is None:
return {"skip": False, "no_filter": False, "test_all": False}
command_search = re.search(r"\[([^\]]*)\]", commit_message)
if command_search is not None:
command = command_search.groups()[0]
command = command.lower().replace("-", " ").replace("_", " ")
skip = command in ["ci skip", "skip ci", "circleci skip", "skip circleci"]
no_filter = set(command.split(" ")) == {"no", "filter"}
test_all = set(command.split(" ")) == {"test", "all"}
return {"skip": skip, "no_filter": no_filter, "test_all": test_all}
else:
return {"skip": False, "no_filter": False, "test_all": False}
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--output_file", type=str, default="test_list.txt", help="Where to store the list of tests to run"
)
parser.add_argument(
"--json_output_file",
type=str,
default="test_map.json",
help="Where to store the tests to run in a dictionary format mapping test categories to test files",
)
parser.add_argument(
"--diff_with_last_commit",
action="store_true",
help="To fetch the tests between the current commit and the last commit",
)
parser.add_argument(
"--filter_tests",
action="store_true",
help="Will filter the pipeline/repo utils tests outside of the generated list of tests.",
)
parser.add_argument(
"--print_dependencies_of",
type=str,
help="Will only print the tree of modules depending on the file passed.",
default=None,
)
parser.add_argument(
"--commit_message",
type=str,
help="The commit message (which could contain a command to force all tests or skip the CI).",
default=None,
)
args = parser.parse_args()
if args.print_dependencies_of is not None:
print_tree_deps_of(args.print_dependencies_of)
elif args.filter_tests:
filter_tests(args.output_file, ["pipelines", "repo_utils"])
else:
repo = Repo(PATH_TO_REPO)
commit_message = repo.head.commit.message
commit_flags = parse_commit_message(commit_message)
if commit_flags["skip"]:
print("Force-skipping the CI")
quit()
if commit_flags["no_filter"]:
print("Running all tests fetched without filtering.")
if commit_flags["test_all"]:
print("Force-launching all tests")
is_main_branch = not repo.head.is_detached and repo.head.ref == repo.refs.main
diff_with_last_commit = args.diff_with_last_commit
if not diff_with_last_commit and is_main_branch:
print("main branch detected, fetching tests against last commit.")
diff_with_last_commit = True
if not commit_flags["test_all"]:
try:
infer_tests_to_run(
args.output_file,
diff_with_last_commit=diff_with_last_commit,
json_output_file=args.json_output_file,
filter_models=(not (commit_flags["no_filter"] or is_main_branch)),
)
filter_tests(args.output_file, ["repo_utils"])
except Exception as e:
print(f"\nError when trying to grab the relevant tests: {e}\n\nRunning all tests.")
commit_flags["test_all"] = True
if commit_flags["test_all"]:
with open(args.output_file, "w", encoding="utf-8") as f:
f.write("tests")
example_file = Path(args.output_file).parent / "examples_test_list.txt"
with open(example_file, "w", encoding="utf-8") as f:
f.write("all")
test_files_to_run = get_all_tests()
create_json_map(test_files_to_run, args.json_output_file)
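# Example invocations (sketch; the flags are those defined in the argparse section above, and the
# script is assumed to live at utils/tests_fetcher.py):
#   python utils/tests_fetcher.py --output_file test_list.txt
#   python utils/tests_fetcher.py --diff_with_last_commit --json_output_file test_map.json
#   python utils/tests_fetcher.py --print_dependencies_of src/transformers/modeling_utils.py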
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/custom_init_isort.py | # coding=utf-8
# Copyright 2021 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utility that sorts the imports in the custom inits of Transformers. Transformers uses init files that delay the
import of an object to when it's actually needed. This is to avoid the main init importing all models, which would
make the line `import transformers` very slow when the user has all optional dependencies installed. The inits with
delayed imports have two halves: one defining a dictionary `_import_structure` which maps modules to the name of the
objects in each module, and one in `TYPE_CHECKING` which looks like a normal init for type-checkers. `isort` or `ruff`
properly sort the second half, which looks like traditional imports; the goal of this script is to sort the first half.
Use from the root of the repo with:
```bash
python utils/custom_init_isort.py
```
which will auto-sort the imports (used in `make style`).
For a check only (as used in `make quality`) run:
```bash
python utils/custom_init_isort.py --check_only
```
"""
import argparse
import os
import re
from typing import Any, Callable, List, Optional
# Path is defined with the intent you should run this script from the root of the repo.
PATH_TO_TRANSFORMERS = "src/transformers"
# Pattern that looks at the indentation in a line.
_re_indent = re.compile(r"^(\s*)\S")
# Pattern that matches `"key":` and puts `key` in group 0.
_re_direct_key = re.compile(r'^\s*"([^"]+)":')
# Pattern that matches `_import_structure["key"]` and puts `key` in group 0.
_re_indirect_key = re.compile(r'^\s*_import_structure\["([^"]+)"\]')
# Pattern that matches `"key",` and puts `key` in group 0.
_re_strip_line = re.compile(r'^\s*"([^"]+)",\s*$')
# Pattern that matches any `[stuff]` and puts `stuff` in group 0.
_re_bracket_content = re.compile(r"\[([^\]]+)\]")
def get_indent(line: str) -> str:
"""Returns the indent in given line (as string)."""
search = _re_indent.search(line)
return "" if search is None else search.groups()[0]
def split_code_in_indented_blocks(
code: str, indent_level: str = "", start_prompt: Optional[str] = None, end_prompt: Optional[str] = None
) -> List[str]:
"""
Split some code into its indented blocks, starting at a given level.
Args:
code (`str`): The code to split.
indent_level (`str`): The indent level (as string) to use for identifying the blocks to split.
start_prompt (`str`, *optional*): If provided, only starts splitting at the line where this text is.
end_prompt (`str`, *optional*): If provided, stops splitting at a line where this text is.
Warning:
The text before `start_prompt` or after `end_prompt` (if provided) is not ignored, just not split. The input `code`
can thus be retrieved by joining the result.
Returns:
`List[str]`: The list of blocks.
"""
# Let's split the code into lines and move to start_index.
index = 0
lines = code.split("\n")
if start_prompt is not None:
while not lines[index].startswith(start_prompt):
index += 1
blocks = ["\n".join(lines[:index])]
else:
blocks = []
# This variable contains the block treated at a given time.
current_block = [lines[index]]
index += 1
# We split into blocks until we get to the `end_prompt` (or the end of the file).
while index < len(lines) and (end_prompt is None or not lines[index].startswith(end_prompt)):
# We have a non-empty line with the proper indent -> start of a new block
if len(lines[index]) > 0 and get_indent(lines[index]) == indent_level:
            # Store the current block in the result and reset. There are two cases: the line is part of the block (like
# a closing parenthesis) or not.
if len(current_block) > 0 and get_indent(current_block[-1]).startswith(indent_level + " "):
# Line is part of the current block
current_block.append(lines[index])
blocks.append("\n".join(current_block))
if index < len(lines) - 1:
current_block = [lines[index + 1]]
index += 1
else:
current_block = []
else:
# Line is not part of the current block
blocks.append("\n".join(current_block))
current_block = [lines[index]]
else:
# Just add the line to the current block
current_block.append(lines[index])
index += 1
# Adds current block if it's nonempty.
if len(current_block) > 0:
blocks.append("\n".join(current_block))
# Add final block after end_prompt if provided.
if end_prompt is not None and index < len(lines):
blocks.append("\n".join(lines[index:]))
return blocks
def ignore_underscore_and_lowercase(key: Callable[[Any], str]) -> Callable[[Any], str]:
"""
Wraps a key function (as used in a sort) to lowercase and ignore underscores.
"""
def _inner(x):
return key(x).lower().replace("_", "")
return _inner
def sort_objects(objects: List[Any], key: Optional[Callable[[Any], str]] = None) -> List[Any]:
"""
Sort a list of objects following the rules of isort (all uppercased first, camel-cased second and lower-cased
last).
Args:
objects (`List[Any]`):
The list of objects to sort.
key (`Callable[[Any], str]`, *optional*):
A function taking an object as input and returning a string, used to sort them by alphabetical order.
If not provided, will default to noop (so a `key` must be provided if the `objects` are not of type string).
Returns:
`List[Any]`: The sorted list with the same elements as in the inputs
"""
# If no key is provided, we use a noop.
def noop(x):
return x
if key is None:
key = noop
# Constants are all uppercase, they go first.
constants = [obj for obj in objects if key(obj).isupper()]
# Classes are not all uppercase but start with a capital, they go second.
classes = [obj for obj in objects if key(obj)[0].isupper() and not key(obj).isupper()]
# Functions begin with a lowercase, they go last.
functions = [obj for obj in objects if not key(obj)[0].isupper()]
# Then we sort each group.
key1 = ignore_underscore_and_lowercase(key)
return sorted(constants, key=key1) + sorted(classes, key=key1) + sorted(functions, key=key1)
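# Illustrative sketch (not part of the original file) of the resulting ordering on plain strings:
#   sort_objects(["load_tf_weights", "BertModel", "BERT_CONSTANTS", "BertConfig"])
#   -> ["BERT_CONSTANTS", "BertConfig", "BertModel", "load_tf_weights"]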
def sort_objects_in_import(import_statement: str) -> str:
"""
Sorts the imports in a single import statement.
Args:
import_statement (`str`): The import statement in which to sort the imports.
Returns:
`str`: The same as the input, but with objects properly sorted.
"""
# This inner function sort imports between [ ].
def _replace(match):
imports = match.groups()[0]
# If there is one import only, nothing to do.
if "," not in imports:
return f"[{imports}]"
keys = [part.strip().replace('"', "") for part in imports.split(",")]
# We will have a final empty element if the line finished with a comma.
if len(keys[-1]) == 0:
keys = keys[:-1]
return "[" + ", ".join([f'"{k}"' for k in sort_objects(keys)]) + "]"
lines = import_statement.split("\n")
if len(lines) > 3:
# Here we have to sort internal imports that are on several lines (one per name):
# key: [
# "object1",
# "object2",
# ...
# ]
# We may have to ignore one or two lines on each side.
idx = 2 if lines[1].strip() == "[" else 1
keys_to_sort = [(i, _re_strip_line.search(line).groups()[0]) for i, line in enumerate(lines[idx:-idx])]
sorted_indices = sort_objects(keys_to_sort, key=lambda x: x[1])
sorted_lines = [lines[x[0] + idx] for x in sorted_indices]
return "\n".join(lines[:idx] + sorted_lines + lines[-idx:])
elif len(lines) == 3:
# Here we have to sort internal imports that are on one separate line:
# key: [
# "object1", "object2", ...
# ]
if _re_bracket_content.search(lines[1]) is not None:
lines[1] = _re_bracket_content.sub(_replace, lines[1])
else:
keys = [part.strip().replace('"', "") for part in lines[1].split(",")]
# We will have a final empty element if the line finished with a comma.
if len(keys[-1]) == 0:
keys = keys[:-1]
lines[1] = get_indent(lines[1]) + ", ".join([f'"{k}"' for k in sort_objects(keys)])
return "\n".join(lines)
else:
# Finally we have to deal with imports fitting on one line
import_statement = _re_bracket_content.sub(_replace, import_statement)
return import_statement
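# One-line example sketch (assumption, not part of the original file):
#   sort_objects_in_import('_import_structure["models.albert"] = ["AlbertModel", "ALBERT_CONSTANT", "load_albert"]')
#   -> '_import_structure["models.albert"] = ["ALBERT_CONSTANT", "AlbertModel", "load_albert"]'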
def sort_imports(file: str, check_only: bool = True):
"""
Sort the imports defined in the `_import_structure` of a given init.
Args:
file (`str`): The path to the init to check/fix.
check_only (`bool`, *optional*, defaults to `True`): Whether or not to just check (and not auto-fix) the init.
"""
with open(file, encoding="utf-8") as f:
code = f.read()
# If the file is not a custom init, there is nothing to do.
if "_import_structure" not in code:
return
# Blocks of indent level 0
main_blocks = split_code_in_indented_blocks(
code, start_prompt="_import_structure = {", end_prompt="if TYPE_CHECKING:"
)
    # We ignore block 0 (everything until start_prompt) and the last block (everything after end_prompt).
for block_idx in range(1, len(main_blocks) - 1):
        # Check if the block contains some `_import_structure` entries to sort.
block = main_blocks[block_idx]
block_lines = block.split("\n")
# Get to the start of the imports.
line_idx = 0
while line_idx < len(block_lines) and "_import_structure" not in block_lines[line_idx]:
# Skip dummy import blocks
if "import dummy" in block_lines[line_idx]:
line_idx = len(block_lines)
else:
line_idx += 1
if line_idx >= len(block_lines):
continue
# Ignore beginning and last line: they don't contain anything.
internal_block_code = "\n".join(block_lines[line_idx:-1])
indent = get_indent(block_lines[1])
        # Split the internal block into blocks of indent level 1.
internal_blocks = split_code_in_indented_blocks(internal_block_code, indent_level=indent)
# We have two categories of import key: list or _import_structure[key].append/extend
pattern = _re_direct_key if "_import_structure = {" in block_lines[0] else _re_indirect_key
# Grab the keys, but there is a trap: some lines are empty or just comments.
keys = [(pattern.search(b).groups()[0] if pattern.search(b) is not None else None) for b in internal_blocks]
# We only sort the lines with a key.
keys_to_sort = [(i, key) for i, key in enumerate(keys) if key is not None]
sorted_indices = [x[0] for x in sorted(keys_to_sort, key=lambda x: x[1])]
        # We reorder the blocks by leaving empty lines/comments as they were and reordering the rest.
        count = 0
        reordered_blocks = []
        for i in range(len(internal_blocks)):
            if keys[i] is None:
                reordered_blocks.append(internal_blocks[i])
            else:
                block = sort_objects_in_import(internal_blocks[sorted_indices[count]])
                reordered_blocks.append(block)
                count += 1
        # And we put our main block back together with its first and last line.
        main_blocks[block_idx] = "\n".join(block_lines[:line_idx] + reordered_blocks + [block_lines[-1]])
if code != "\n".join(main_blocks):
if check_only:
return True
else:
print(f"Overwriting {file}.")
with open(file, "w", encoding="utf-8") as f:
f.write("\n".join(main_blocks))
def sort_imports_in_all_inits(check_only=True):
"""
Sort the imports defined in the `_import_structure` of all inits in the repo.
Args:
check_only (`bool`, *optional*, defaults to `True`): Whether or not to just check (and not auto-fix) the init.
"""
failures = []
for root, _, files in os.walk(PATH_TO_TRANSFORMERS):
if "__init__.py" in files:
result = sort_imports(os.path.join(root, "__init__.py"), check_only=check_only)
if result:
                failures.append(os.path.join(root, "__init__.py"))
if len(failures) > 0:
raise ValueError(f"Would overwrite {len(failures)} files, run `make style`.")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.")
args = parser.parse_args()
sort_imports_in_all_inits(check_only=args.check_only)
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/utils/models_to_deprecate.py | # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Script to find a candidate list of models to deprecate based on the number of downloads and the date of the first commit.
"""
import argparse
import glob
import json
import os
from collections import defaultdict
from datetime import datetime, timezone
from pathlib import Path
from git import Repo
from huggingface_hub import HfApi
api = HfApi()
PATH_TO_REPO = Path(__file__).parent.parent.resolve()
repo = Repo(PATH_TO_REPO)
class HubModelLister:
"""
Utility for getting models from the hub based on tags. Handles errors without crashing the script.
"""
def __init__(self, tags):
self.tags = tags
self.model_list = api.list_models(tags=tags)
def __iter__(self):
try:
yield from self.model_list
except Exception as e:
print(f"Error: {e}")
return
def _extract_commit_hash(commits):
for commit in commits:
if commit.startswith("commit "):
return commit.split(" ")[1]
return ""
def get_list_of_repo_model_paths(models_dir):
# Get list of all models in the library
models = glob.glob(os.path.join(models_dir, "*/modeling_*.py"))
# Remove flax and tf models
models = [model for model in models if "_flax_" not in model]
models = [model for model in models if "_tf_" not in model]
# Get list of all deprecated models in the library
deprecated_models = glob.glob(os.path.join(models_dir, "deprecated", "*"))
# For each deprecated model, remove the deprecated models from the list of all models as well as the symlink path
for deprecated_model in deprecated_models:
deprecated_model_name = "/" + deprecated_model.split("/")[-1] + "/"
models = [model for model in models if deprecated_model_name not in model]
# Remove deprecated models
models = [model for model in models if "/deprecated" not in model]
# Remove auto
models = [model for model in models if "/auto/" not in model]
return models
def get_list_of_models_to_deprecate(
thresh_num_downloads=5_000,
thresh_date=None,
use_cache=False,
save_model_info=False,
max_num_models=-1,
):
if thresh_date is None:
thresh_date = datetime.now(timezone.utc).replace(year=datetime.now(timezone.utc).year - 1)
else:
thresh_date = datetime.strptime(thresh_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
models_dir = PATH_TO_REPO / "src/transformers/models"
model_paths = get_list_of_repo_model_paths(models_dir=models_dir)
if use_cache and os.path.exists("models_info.json"):
with open("models_info.json", "r") as f:
models_info = json.load(f)
# Convert datetimes back to datetime objects
for model, info in models_info.items():
info["first_commit_datetime"] = datetime.fromisoformat(info["first_commit_datetime"])
else:
# Build a dictionary of model info: first commit datetime, commit hash, model path
models_info = defaultdict(dict)
for model_path in model_paths:
model = model_path.split("/")[-2]
if model in models_info:
continue
commits = repo.git.log("--diff-filter=A", "--", model_path).split("\n")
commit_hash = _extract_commit_hash(commits)
commit_obj = repo.commit(commit_hash)
committed_datetime = commit_obj.committed_datetime
models_info[model]["commit_hash"] = commit_hash
models_info[model]["first_commit_datetime"] = committed_datetime
models_info[model]["model_path"] = model_path
models_info[model]["downloads"] = 0
# Some tags on the hub are formatted differently than in the library
tags = [model]
if "_" in model:
tags.append(model.replace("_", "-"))
models_info[model]["tags"] = tags
# Filter out models which were added less than a year ago
models_info = {
model: info for model, info in models_info.items() if info["first_commit_datetime"] < thresh_date
}
# We make successive calls to the hub, filtering based on the model tags
n_seen = 0
for model, model_info in models_info.items():
for model_tag in model_info["tags"]:
model_list = HubModelLister(tags=model_tag)
for i, hub_model in enumerate(model_list):
n_seen += 1
if i % 100 == 0:
print(f"Processing model {i} for tag {model_tag}")
                # Stop once more than `max_num_models` hub models have been processed in total.
                if max_num_models != -1 and n_seen > max_num_models:
break
if hub_model.private:
continue
model_info["downloads"] += hub_model.downloads
if save_model_info and not (use_cache and os.path.exists("models_info.json")):
# Make datetimes serializable
for model, info in models_info.items():
info["first_commit_datetime"] = info["first_commit_datetime"].isoformat()
with open("models_info.json", "w") as f:
json.dump(models_info, f, indent=4)
print("\nModels to deprecate:")
n_models_to_deprecate = 0
models_to_deprecate = {}
for model, info in models_info.items():
n_downloads = info["downloads"]
if n_downloads < thresh_num_downloads:
n_models_to_deprecate += 1
models_to_deprecate[model] = info
print(f"\nModel: {model}")
print(f"Downloads: {n_downloads}")
print(f"Date: {info['first_commit_datetime']}")
print(f"\nNumber of models to deprecate: {n_models_to_deprecate}")
print("Before deprecating make sure to verify the models, including if they're used as a module in other models.")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--save_model_info", action="store_true", help="Save the retrieved model info to a json file.")
parser.add_argument(
"--use_cache", action="store_true", help="Use the cached model info instead of calling the hub."
)
parser.add_argument(
"--thresh_num_downloads",
type=int,
default=5_000,
help="Threshold number of downloads below which a model should be deprecated. Default is 5,000.",
)
parser.add_argument(
"--thresh_date",
type=str,
default=None,
help="Date to consider the first commit from. Format: YYYY-MM-DD. If unset, defaults to one year ago from today.",
)
parser.add_argument(
"--max_num_models",
type=int,
default=-1,
help="Maximum number of models to consider from the hub. -1 means all models. Useful for testing.",
)
args = parser.parse_args()
models_to_deprecate = get_list_of_models_to_deprecate(
thresh_num_downloads=args.thresh_num_downloads,
thresh_date=args.thresh_date,
use_cache=args.use_cache,
save_model_info=args.save_model_info,
max_num_models=args.max_num_models,
)
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_pipeline.py | import numpy as np
from transformers import Pipeline
def softmax(outputs):
maxes = np.max(outputs, axis=-1, keepdims=True)
shifted_exp = np.exp(outputs - maxes)
return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)
class PairClassificationPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
if "second_text" in kwargs:
preprocess_kwargs["second_text"] = kwargs["second_text"]
return preprocess_kwargs, {}, {}
def preprocess(self, text, second_text=None):
return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)
def _forward(self, model_inputs):
return self.model(**model_inputs)
def postprocess(self, model_outputs):
logits = model_outputs.logits[0].numpy()
probabilities = softmax(logits)
best_class = np.argmax(probabilities)
label = self.model.config.id2label[best_class]
score = probabilities[best_class].item()
logits = logits.tolist()
return {"label": label, "score": score, "logits": logits}
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_feature_extraction.py | from transformers import Wav2Vec2FeatureExtractor
class CustomFeatureExtractor(Wav2Vec2FeatureExtractor):
pass
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_processing.py | from transformers import ProcessorMixin
class CustomProcessor(ProcessorMixin):
feature_extractor_class = "AutoFeatureExtractor"
tokenizer_class = "AutoTokenizer"
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_modeling.py | import torch
from transformers import PreTrainedModel
from .custom_configuration import CustomConfig, NoSuperInitConfig
class CustomModel(PreTrainedModel):
config_class = CustomConfig
def __init__(self, config):
super().__init__(config)
self.linear = torch.nn.Linear(config.hidden_size, config.hidden_size)
def forward(self, x):
return self.linear(x)
def _init_weights(self, module):
pass
class NoSuperInitModel(PreTrainedModel):
config_class = NoSuperInitConfig
def __init__(self, config):
super().__init__(config)
self.linear = torch.nn.Linear(config.attribute, config.attribute)
def forward(self, x):
return self.linear(x)
def _init_weights(self, module):
pass
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_tokenization_fast.py | from transformers import BertTokenizerFast
from .custom_tokenization import CustomTokenizer
class CustomTokenizerFast(BertTokenizerFast):
slow_tokenizer_class = CustomTokenizer
pass
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_image_processing.py | from transformers import CLIPImageProcessor
class CustomImageProcessor(CLIPImageProcessor):
pass
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_tokenization.py | from transformers import BertTokenizer
class CustomTokenizer(BertTokenizer):
pass
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/test_module/custom_configuration.py | from transformers import PretrainedConfig
class CustomConfig(PretrainedConfig):
model_type = "custom"
def __init__(self, attribute=1, **kwargs):
self.attribute = attribute
super().__init__(**kwargs)
class NoSuperInitConfig(PretrainedConfig):
model_type = "custom"
def __init__(self, attribute=1, **kwargs):
self.attribute = attribute
| 0 |
mavonic_private_repos/transformers/utils | mavonic_private_repos/transformers/utils/tf_ops/onnx.json | {
"opsets": {
"1": [
"Abs",
"Add",
"AddV2",
"ArgMax",
"ArgMin",
"AvgPool",
"AvgPool3D",
"BatchMatMul",
"BatchMatMulV2",
"BatchToSpaceND",
"BiasAdd",
"BiasAddV1",
"Cast",
"Ceil",
"CheckNumerics",
"ComplexAbs",
"Concat",
"ConcatV2",
"Const",
"ConstV2",
"Conv1D",
"Conv2D",
"Conv2DBackpropInput",
"Conv3D",
"Conv3DBackpropInputV2",
"DepthToSpace",
"DepthwiseConv2d",
"DepthwiseConv2dNative",
"Div",
"Dropout",
"Elu",
"Equal",
"Erf",
"Exp",
"ExpandDims",
"Flatten",
"Floor",
"Gather",
"GatherNd",
"GatherV2",
"Greater",
"Identity",
"IdentityN",
"If",
"LRN",
"LSTMBlockCell",
"LeakyRelu",
"Less",
"Log",
"LogSoftmax",
"LogicalAnd",
"LogicalNot",
"LogicalOr",
"LookupTableSizeV2",
"MatMul",
"Max",
"MaxPool",
"MaxPool3D",
"MaxPoolV2",
"Maximum",
"Mean",
"Min",
"Minimum",
"MirrorPad",
"Mul",
"Neg",
"NoOp",
"NotEqual",
"OneHot",
"Pack",
"Pad",
"PadV2",
"Placeholder",
"PlaceholderV2",
"PlaceholderWithDefault",
"Pow",
"Prod",
"RFFT",
"RandomNormal",
"RandomNormalLike",
"RandomUniform",
"RandomUniformLike",
"RealDiv",
"Reciprocal",
"Relu",
"Relu6",
"Reshape",
"Rsqrt",
"Selu",
"Shape",
"Sigmoid",
"Sign",
"Size",
"Slice",
"Softmax",
"Softplus",
"Softsign",
"SpaceToBatchND",
"SpaceToDepth",
"Split",
"SplitV",
"Sqrt",
"Square",
"SquaredDifference",
"Squeeze",
"StatelessIf",
"StopGradient",
"StridedSlice",
"StringJoin",
"Sub",
"Sum",
"Tanh",
"Tile",
"TopKV2",
"Transpose",
"TruncateDiv",
"Unpack",
"ZerosLike"
],
"2": [],
"3": [],
"4": [],
"5": [],
"6": [
"AddN",
"All",
"Any",
"FloorDiv",
"FusedBatchNorm",
"FusedBatchNormV2",
"FusedBatchNormV3"
],
"7": [
"Acos",
"Asin",
"Atan",
"Cos",
"Fill",
"FloorMod",
"GreaterEqual",
"LessEqual",
"Loop",
"MatrixBandPart",
"Multinomial",
"Range",
"ResizeBilinear",
"ResizeNearestNeighbor",
"Scan",
"Select",
"SelectV2",
"Sin",
"SoftmaxCrossEntropyWithLogits",
"SparseSoftmaxCrossEntropyWithLogits",
"StatelessWhile",
"Tan",
"TensorListFromTensor",
"TensorListGetItem",
"TensorListLength",
"TensorListReserve",
"TensorListResize",
"TensorListSetItem",
"TensorListStack",
"While"
],
"8": [
"BroadcastTo",
"ClipByValue",
"FIFOQueueV2",
"HashTableV2",
"IteratorGetNext",
"IteratorV2",
"LookupTableFindV2",
"MaxPoolWithArgmax",
"QueueDequeueManyV2",
"QueueDequeueUpToV2",
"QueueDequeueV2",
"ReverseSequence"
],
"9": [
"SegmentMax",
"SegmentMean",
"SegmentMin",
"SegmentProd",
"SegmentSum",
"Sinh",
"SparseSegmentMean",
"SparseSegmentMeanWithNumSegments",
"SparseSegmentSqrtN",
"SparseSegmentSqrtNWithNumSegments",
"SparseSegmentSum",
"SparseSegmentSumWithNumSegments",
"UnsortedSegmentMax",
"UnsortedSegmentMin",
"UnsortedSegmentProd",
"UnsortedSegmentSum",
"Where"
],
"10": [
"CropAndResize",
"CudnnRNN",
"DynamicStitch",
"FakeQuantWithMinMaxArgs",
"IsFinite",
"IsInf",
"NonMaxSuppressionV2",
"NonMaxSuppressionV3",
"NonMaxSuppressionV4",
"NonMaxSuppressionV5",
"ParallelDynamicStitch",
"ReverseV2",
"Roll"
],
"11": [
"Bincount",
"Cumsum",
"InvertPermutation",
"LeftShift",
"MatrixDeterminant",
"MatrixDiagPart",
"MatrixDiagPartV2",
"MatrixDiagPartV3",
"RaggedRange",
"RightShift",
"Round",
"ScatterNd",
"SparseFillEmptyRows",
"SparseReshape",
"SparseToDense",
"TensorScatterUpdate",
"Unique"
],
"12": [
"Einsum",
"MatrixDiag",
"MatrixDiagV2",
"MatrixDiagV3",
"MatrixSetDiagV3",
"SquaredDistance"
],
"13": []
}
} | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/quality.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y time git
ENV VIRTUAL_ENV=/usr/local
RUN pip install uv && uv venv
RUN uv pip install --no-cache-dir -U pip setuptools GitPython transformers "ruff==0.1.5" urllib3
RUN apt-get install -y jq curl && apt-get clean && rm -rf /var/lib/apt/lists/* | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/pipeline-tf.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git cmake g++
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir "transformers[sklearn,tf-cpu,testing,sentencepiece,tf-speech,vision]"
RUN uv pip install --no-cache-dir "protobuf==3.20.3" tensorflow_probability
RUN apt-get clean && rm -rf /var/lib/apt/lists/* | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/examples-tf.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git
RUN apt-get install -y g++ cmake
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv
RUN uv pip install --no-cache-dir -U pip setuptools albumentations seqeval
RUN pip install --upgrade --no-cache-dir "transformers[tf-cpu,sklearn,testing,sentencepiece,tf-speech,vision]"
RUN uv pip install --no-cache-dir "protobuf==3.20.3"
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/exotic-models.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
ARG REF=main
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git libgl1-mesa-glx libgl1 g++ tesseract-ocr
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir --no-deps timm accelerate
RUN pip install -U --upgrade-strategy eager --no-cache-dir pytesseract python-Levenshtein opencv-python nltk
# RUN uv pip install --no-cache-dir natten==0.15.1+torch210cpu -f https://shi-labs.com/natten/wheels
RUN pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[testing, vision]" 'scikit-learn' 'torch-stft' 'nose' 'dataset'
# RUN git clone https://github.com/facebookresearch/detectron2.git
# RUN python3 -m pip install --no-cache-dir -e detectron2
RUN pip install 'git+https://github.com/facebookresearch/detectron2.git@92ae9f0b92aba5867824b4f12aa06a22a60a45d3'
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/torch-light.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git git-lfs
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]"
RUN pip uninstall -y transformers | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/consistency.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y time git pkg-config make git-lfs
ENV VIRTUAL_ENV=/usr/local
RUN pip install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools GitPython
RUN uv pip install --no-cache-dir --upgrade 'torch' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir tensorflow-cpu tf-keras
RUN uv pip install --no-cache-dir "transformers[flax,quality,vision,testing]"
RUN git lfs install
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/jax-light.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git g++ cmake
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax,testing,sentencepiece,flax-speech,vision]"
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/pipeline-torch.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git pkg-config openssh-client git
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]"
RUN pip uninstall -y transformers | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/torch-tf-light.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
ARG REF=main
RUN echo ${REF}
USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git git-lfs
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN uv pip install --no-cache-dir --no-deps accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN git lfs install
RUN uv pip install --no-cache-dir pypi-kenlm
RUN pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[tf-cpu,sklearn,sentencepiece,vision,testing]"
RUN uv pip install --no-cache-dir "protobuf==3.20.3" librosa
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/torch-jax-light.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN uv pip install --no-deps accelerate
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax, audio, sklearn,sentencepiece,vision,testing]"
# RUN pip install --no-cache-dir "scipy<1.13" "transformers[flax,testing,sentencepiece,flax-speech,vision]"
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean
| 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/custom-tokenizers.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git cmake wget xz-utils build-essential g++5 libprotobuf-dev protobuf-compiler
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN wget https://github.com/ku-nlp/jumanpp/releases/download/v2.0.0-rc3/jumanpp-2.0.0-rc3.tar.xz
RUN tar xvf jumanpp-2.0.0-rc3.tar.xz
RUN mkdir jumanpp-2.0.0-rc3/bld
WORKDIR ./jumanpp-2.0.0-rc3/bld
RUN wget -LO catch.hpp https://github.com/catchorg/Catch2/releases/download/v2.13.8/catch.hpp
RUN mv catch.hpp ../libs/
RUN cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
RUN make install -j 10
RUN uv pip install --no-cache --upgrade 'torch' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir --no-deps accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir "transformers[ja,testing,sentencepiece,jieba,spacy,ftfy,rjieba]" unidic unidic-lite
# spacy is not used and is not tested; it causes failures. TODO: fix later.
RUN python3 -m unidic download
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN apt remove -y g++ cmake xz-utils libprotobuf-dev protobuf-compiler | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/examples-torch.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --no-cache-dir 'torch' 'torchvision' 'torchaudio' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "transformers[sklearn,sentencepiece,vision,testing]" seqeval albumentations jiwer
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* | 0 |
mavonic_private_repos/transformers | mavonic_private_repos/transformers/docker/tf-light.dockerfile | FROM python:3.10-slim
ENV PYTHONDONTWRITEBYTECODE=1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ pkg-config openssh-client git
RUN apt-get install -y cmake
ENV VIRTUAL_ENV=/usr/local
RUN pip --no-cache-dir install uv && uv venv && uv pip install --no-cache-dir -U pip setuptools
RUN pip install --upgrade --no-cache-dir "transformers[tf-cpu,sklearn,testing,sentencepiece,tf-speech,vision]"
RUN uv pip install --no-cache-dir "protobuf==3.20.3"
RUN pip uninstall -y transformers
RUN apt-get clean && rm -rf /var/lib/apt/lists/* && apt-get autoremove && apt-get autoclean | 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh | #!/bin/bash
source ~/.bashrc
echo "running docker-entrypoint.sh"
conda activate container
echo $KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS
echo "printed TPU info"
export XRT_TPU_CONFIG="tpu_worker;0;${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7}"
exec "$@"#!/bin/bash
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/dataset.yaml | apiVersion: v1
kind: PersistentVolume
metadata:
name: huggingface-cluster-disk
spec:
storageClassName: ""
capacity:
storage: 500Gi
accessModes:
- ReadOnlyMany
claimRef:
namespace: default
name: huggingface-cluster-disk-claim
gcePersistentDisk:
pdName: huggingface-cluster-disk
fsType: ext4
readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: huggingface-cluster-disk-claim
spec:
# Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
# A nil storageClassName value uses the default StorageClass. For details, see
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
storageClassName: ""
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1Ki
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/bert-base-cased.jsonnet | local base = import 'templates/base.libsonnet';
local tpus = import 'templates/tpus.libsonnet';
local utils = import "templates/utils.libsonnet";
local volumes = import "templates/volumes.libsonnet";
local bertBaseCased = base.BaseTest {
frameworkPrefix: "hf",
modelName: "bert-base-cased",
mode: "example",
configMaps: [],
timeout: 3600, # 1 hour, in seconds
image: std.extVar('image'),
imageTag: std.extVar('image-tag'),
tpuSettings+: {
softwareVersion: "pytorch-nightly",
},
accelerator: tpus.v3_8,
volumeMap+: {
datasets: volumes.PersistentVolumeSpec {
name: "huggingface-cluster-disk",
mountPath: "/datasets",
},
},
command: utils.scriptCommand(
|||
python -m pytest -s transformers/examples/pytorch/test_xla_examples.py -v
test_exit_code=$?
echo "\nFinished running commands.\n"
test $test_exit_code -eq 0
|||
),
};
bertBaseCased.oneshotJob
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-tpu/Dockerfile | FROM google/cloud-sdk:slim
# Build args.
ARG GITHUB_REF=refs/heads/main
# TODO: This Dockerfile installs pytorch/xla 3.6 wheels. There are also 3.7
# wheels available; see below.
ENV PYTHON_VERSION=3.6
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
curl \
ca-certificates
# Install conda and python.
# NOTE new Conda does not forward the exit status... https://github.com/conda/conda/issues/8385
RUN curl -o ~/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-4.7.12-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b && \
rm ~/miniconda.sh
ENV PATH=/root/miniconda3/bin:$PATH
RUN conda create -y --name container python=$PYTHON_VERSION
# Run the rest of commands within the new conda env.
# Use absolute path to appease Codefactor.
SHELL ["/root/miniconda3/bin/conda", "run", "-n", "container", "/bin/bash", "-c"]
RUN conda install -y python=$PYTHON_VERSION mkl
RUN pip uninstall -y torch && \
# Python 3.7 wheels are available. Replace cp36-cp36m with cp37-cp37m
    gsutil cp "gs://tpu-pytorch/wheels/torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    gsutil cp "gs://tpu-pytorch/wheels/torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    gsutil cp "gs://tpu-pytorch/wheels/torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" . && \
    pip install "torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    pip install "torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    pip install "torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torch-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torch_xla-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
    rm "torchvision-nightly-cp${PYTHON_VERSION/./}-cp${PYTHON_VERSION/./}m-linux_x86_64.whl" && \
apt-get install -y libomp5
ENV LD_LIBRARY_PATH=/root/miniconda3/envs/container/lib
# Install huggingface/transformers at the current PR, plus dependencies.
RUN git clone https://github.com/huggingface/transformers.git && \
cd transformers && \
git fetch origin $GITHUB_REF:CI && \
git checkout CI && \
cd .. && \
pip install ./transformers && \
pip install -r ./transformers/examples/pytorch/_test_requirements.txt && \
pip install pytest
RUN python -c "import torch_xla; print(torch_xla.__version__)"
RUN python -c "import transformers as trf; print(trf.__version__)"
RUN conda init bash
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["bash"]
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-tensorflow-gpu/Dockerfile | FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip
ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF
RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-tensorflow,testing]
# If set to nothing, will install the latest version
ARG TENSORFLOW='2.13'
RUN [ ${#TENSORFLOW} -gt 0 ] && VERSION='tensorflow=='$TENSORFLOW'.*' || VERSION='tensorflow'; python3 -m pip install --no-cache-dir -U $VERSION
RUN python3 -m pip uninstall -y torch flax
RUN python3 -m pip install -U "itsdangerous<2.1.0"
RUN python3 -m pip install --no-cache-dir -U tensorflow_probability
# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for Python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-gpu/Dockerfile | FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip
ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF
# If set to nothing, will install the latest version
ARG PYTORCH='2.1.1'
ARG TORCH_VISION=''
ARG TORCH_AUDIO=''
# Example: `cu102`, `cu113`, etc.
ARG CUDA='cu121'
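# With the default ARGs above, the torch line below should resolve to roughly:
#   python3 -m pip install --no-cache-dir -U "torch==2.1.1.*" --extra-index-url https://download.pytorch.org/whl/cu121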
RUN [ ${#PYTORCH} -gt 0 ] && VERSION='torch=='$PYTORCH'.*' || VERSION='torch'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN [ ${#TORCH_VISION} -gt 0 ] && VERSION='torchvision=='$TORCH_VISION'.*' || VERSION='torchvision'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN [ ${#TORCH_AUDIO} -gt 0 ] && VERSION='torchaudio=='$TORCH_AUDIO'.*' || VERSION='torchaudio'; python3 -m pip install --no-cache-dir -U $VERSION --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch,testing,video]
RUN python3 -m pip uninstall -y tensorflow flax
RUN python3 -m pip install --no-cache-dir git+https://github.com/facebookresearch/detectron2.git pytesseract
RUN python3 -m pip install -U "itsdangerous<2.1.0"
# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for Python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-doc-builder/Dockerfile | FROM python:3.10
LABEL maintainer="Hugging Face"
RUN apt update
RUN git clone https://github.com/huggingface/transformers
RUN python3 -m pip install --no-cache-dir --upgrade pip && python3 -m pip install --no-cache-dir git+https://github.com/huggingface/doc-builder ./transformers[dev]
RUN apt-get -y update && apt-get install -y libsndfile1-dev && apt install -y tesseract-ocr
# Torch needs to be installed before deepspeed
RUN python3 -m pip install --no-cache-dir ./transformers[deepspeed]
RUN python3 -m pip install --no-cache-dir torchvision git+https://github.com/facebookresearch/detectron2.git pytesseract
RUN python3 -m pip install -U "itsdangerous<2.1.0"
# Test that the image can successfully build the docs before publishing it
RUN doc-builder build transformers transformers/docs/source/en --build_dir doc-build-dev --notebook_dir notebooks/transformers_doc --clean
RUN rm -rf doc-build-dev | 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-deepspeed-amd-gpu/Dockerfile | FROM rocm/dev-ubuntu-22.04:5.6
LABEL maintainer="Hugging Face"
ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.1'
ARG TORCH_VISION='0.16.1'
ARG TORCH_AUDIO='2.1.1'
ARG ROCM='5.6'
RUN apt update && \
apt install -y --no-install-recommends \
libaio-dev \
git \
# These are required to build deepspeed.
python3-dev \
python-is-python3 \
rocrand-dev \
rocthrust-dev \
hipsparse-dev \
hipblas-dev \
rocblas-dev && \
apt clean && \
rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir --upgrade pip ninja "pydantic<2"
RUN python3 -m pip uninstall -y apex torch torchvision torchaudio
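# With the default ARGs above, the line below should resolve to roughly:
#   pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/rocm5.6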
RUN python3 -m pip install torch==$PYTORCH torchvision==$TORCH_VISION torchaudio==$TORCH_AUDIO --index-url https://download.pytorch.org/whl/rocm$ROCM --no-cache-dir
# Pre-build DeepSpeed so it's ready for testing (to avoid timeouts)
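# DS_BUILD_CPU_ADAM / DS_BUILD_FUSED_ADAM force those DeepSpeed extensions to be compiled
# at image-build time instead of being JIT-compiled on first use.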
RUN DS_BUILD_CPU_ADAM=1 DS_BUILD_FUSED_ADAM=1 python3 -m pip install deepspeed --global-option="build_ext" --global-option="-j8" --no-cache-dir -v --disable-pip-version-check 2>&1
ARG REF=main
WORKDIR /
# Invalidate the docker cache from here on if a new commit is available.
ADD https://api.github.com/repos/huggingface/transformers/git/refs/heads/main version.json
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF
RUN python3 -m pip install --no-cache-dir ./transformers[accelerate,testing,sentencepiece,sklearn]
# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for Python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
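# Sanity check: make sure the pre-built DeepSpeed launcher can be imported.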
RUN python3 -c "from deepspeed.launcher.runner import main"
# Remove nvml as it is not compatible with ROCm
RUN python3 -m pip uninstall py3nvml pynvml -y
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-pytorch-amd-gpu/Dockerfile | FROM rocm/dev-ubuntu-20.04:5.6
# rocm/pytorch has no image with torch 2.1.0 yet, hence the rocm/dev base image.
LABEL maintainer="Hugging Face"
ARG DEBIAN_FRONTEND=noninteractive
ARG PYTORCH='2.1.0'
ARG TORCH_VISION='0.16.0'
ARG TORCH_AUDIO='2.1.0'
ARG ROCM='5.6'
RUN apt update && \
apt install -y --no-install-recommends git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-dev python3-pip ffmpeg && \
apt clean && \
rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir --upgrade pip
RUN python3 -m pip install torch==$PYTORCH torchvision==$TORCH_VISION torchaudio==$TORCH_AUDIO --index-url https://download.pytorch.org/whl/rocm$ROCM
RUN python3 -m pip install --no-cache-dir --upgrade pip setuptools ninja git+https://github.com/facebookresearch/detectron2.git pytesseract "itsdangerous<2.1.0"
ARG REF=main
WORKDIR /
# Invalidate the docker cache from here on if a new commit is available.
ADD https://api.github.com/repos/huggingface/transformers/git/refs/heads/main version.json
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF
RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch,testing,video]
RUN python3 -m pip uninstall -y tensorflow flax
# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for Python to be aware of transformers.
RUN cd transformers && python3 setup.py develop
# Remove nvml as it is not compatible with ROCm
RUN python3 -m pip uninstall py3nvml pynvml -y
| 0 |
mavonic_private_repos/transformers/docker | mavonic_private_repos/transformers/docker/transformers-quantization-latest-gpu/Dockerfile | FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04
LABEL maintainer="Hugging Face"
ARG DEBIAN_FRONTEND=noninteractive
# Use a login shell to read variables from `~/.profile` (to pass dynamically created variables between RUN commands)
SHELL ["sh", "-lc"]
# The following `ARG`s are mainly used to specify the versions explicitly & directly in this docker file, and are not meant
# to be used as arguments for docker build (so far).
ARG PYTORCH='2.2.1'
# Example: `cu102`, `cu113`, etc.
ARG CUDA='cu118'
RUN apt update
RUN apt install -y git libsndfile1-dev tesseract-ocr espeak-ng python3 python3-pip ffmpeg
RUN python3 -m pip install --no-cache-dir --upgrade pip
ARG REF=main
RUN git clone https://github.com/huggingface/transformers && cd transformers && git checkout $REF
RUN [ ${#PYTORCH} -gt 0 ] && VERSION='torch=='$PYTORCH'.*' || VERSION='torch'; echo "export VERSION='$VERSION'" >> ~/.profile
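# The login shell configured via SHELL above sources ~/.profile, so $VERSION set in the previous RUN is visible here and below.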
RUN echo torch=$VERSION
# `torchvision` and `torchaudio` should be installed along with `torch`, especially for nightly builds.
# Currently, let's just use their latest releases (when `torch` is installed with a release version)
RUN python3 -m pip install --no-cache-dir -U $VERSION torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/$CUDA
RUN python3 -m pip install --no-cache-dir -e ./transformers[dev-torch]
RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/accelerate@main#egg=accelerate
# Needed by bnb and awq
RUN python3 -m pip install --no-cache-dir einops
# Add bitsandbytes for mixed int8 testing
RUN python3 -m pip install --no-cache-dir bitsandbytes
# Add auto-gptq for gptq quantization testing
RUN python3 -m pip install --no-cache-dir auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
# Add optimum for gptq quantization testing
RUN python3 -m pip install --no-cache-dir git+https://github.com/huggingface/optimum@main#egg=optimum
# Add aqlm for quantization testing
RUN python3 -m pip install --no-cache-dir aqlm[gpu]==1.0.2
# Add hqq for quantization testing
RUN python3 -m pip install --no-cache-dir hqq
# Add autoawq for quantization testing
# >=v0.2.3 needed for compatibility with torch 2.2.1
RUN python3 -m pip install --no-cache-dir https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.3/autoawq-0.2.3+cu118-cp38-cp38-linux_x86_64.whl
# Add quanto for quantization testing
RUN python3 -m pip install --no-cache-dir quanto
# Add eetq for quantization testing
RUN python3 -m pip install git+https://github.com/NetEase-FuXi/EETQ.git
# When installing in editable mode, `transformers` is not recognized as a package.
# This line must be added in order for Python to be aware of transformers.
RUN cd transformers && python3 setup.py develop | 0 |
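# A hypothetical local build invocation (the version ARGs above are not meant to be overridden;
# REF selects which transformers ref is checked out):
#   docker build -f docker/transformers-quantization-latest-gpu/Dockerfile --build-arg REF=main \
#     -t transformers-quantization-latest-gpu .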