<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# AltCLIP
## Overview
The AltCLIP model was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679v2) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By replacing CLIP's text encoder with the pretrained multilingual text encoder XLM-R, it obtains performance very close to CLIP on almost all tasks while extending the original CLIP's capabilities to multilingual understanding.
The abstract from the paper is the following:

*In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
This model was contributed by [jongjyh](https://huggingface.co/jongjyh).
## Usage tips and example
The usage of AltCLIP is very similar to CLIP; the difference lies in the text encoder. Note that bidirectional attention is used instead of causal attention, and the [CLS] token of XLM-R is taken to represent the text embedding.
AltCLIP is a multimodal vision and language model. It can be used for image-text similarity and zero-shot image classification. AltCLIP uses a ViT-like Transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimensions. The dot product between the projected image and text features is then used as the similarity score.
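As a toy illustration of this projection-and-dot-product scoring, the following plain-Python sketch compares one image embedding against two caption embeddings (the 3-d vectors and the fixed logit scale are made up for illustration; this is not the actual AltCLIP code):

```python
import math

def normalize(v):
    # scale a vector to unit length, as CLIP-style models do before comparison
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def similarity_logits(image_emb, text_embs, logit_scale=100.0):
    # cosine similarity (dot product of unit vectors) scaled by a temperature
    img = normalize(image_emb)
    return [logit_scale * sum(i * t for i, t in zip(img, normalize(txt)))
            for txt in text_embs]

def softmax(logits):
    # convert similarity logits into label probabilities
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# toy 3-d embeddings for one image and two candidate captions
image = [0.9, 0.1, 0.2]
texts = [[0.8, 0.2, 0.1],   # close to the image embedding
         [-0.5, 0.9, 0.0]]  # far from it
probs = softmax(similarity_logits(image, texts))
# the first caption receives the larger probability
```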
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as a representation of the entire image. The authors also add absolute position embeddings and feed the resulting sequence of vectors to a standard Transformer encoder. The [`CLIPImageProcessor`] can be used to resize (or rescale) and normalize images for the model.
The [`AltCLIPProcessor`] wraps a [`CLIPImageProcessor`] and an [`XLMRobertaTokenizer`] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [`AltCLIPProcessor`] and [`AltCLIPModel`].
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AltCLIPModel, AltCLIPProcessor
>>> model = AltCLIPModel.from_pretrained("BAAI/AltCLIP")
>>> processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
<Tip>
This model is based on `CLIPModel`, so use it like you would use the original [CLIP](clip).
</Tip>
## AltCLIPConfig
[[autodoc]] AltCLIPConfig
- from_text_vision_configs
## AltCLIPTextConfig
[[autodoc]] AltCLIPTextConfig
## AltCLIPVisionConfig
[[autodoc]] AltCLIPVisionConfig
## AltCLIPProcessor
[[autodoc]] AltCLIPProcessor
## AltCLIPModel
[[autodoc]] AltCLIPModel
- forward
- get_text_features
- get_image_features
## AltCLIPTextModel
[[autodoc]] AltCLIPTextModel
- forward
## AltCLIPVisionModel
[[autodoc]] AltCLIPVisionModel
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=bert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-bert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/bert-base-uncased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
The BERT model was proposed in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained with a combination of a masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
The abstract from the paper is the following:

*We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.*

*BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).*
## Usage tips
- BERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.
- BERT corrupts the inputs by using random masking; more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by:
  * a special mask token with probability 0.8
  * a random token different from the one masked with probability 0.1
  * the same token with probability 0.1
- The model must predict the original sentence, but it has a second objective: the inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus; in the remaining 50% they are unrelated. The model has to predict whether the sentences are consecutive or not.
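The 80/10/10 corruption scheme above can be sketched in plain Python (a toy word-level illustration with a made-up vocabulary; the real preprocessing operates on wordpiece IDs, and the random replacement should avoid the original token, which we skip here for brevity):

```python
import random

def mask_tokens(tokens, vocab, mask_rate=0.15, rng=None):
    """Corrupt a token sequence the way BERT's MLM pretraining does:
    pick ~15% of positions; of those, 80% become [MASK], 10% become a
    random token, and 10% are left unchanged (but still predicted)."""
    rng = rng or random.Random(0)
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok                   # model must predict the original
            roll = rng.random()
            if roll < 0.8:
                corrupted[i] = "[MASK]"       # special mask token (p = 0.8)
            elif roll < 0.9:
                corrupted[i] = rng.choice(vocab)  # random token (p = 0.1)
            # else: keep the original token unchanged (p = 0.1)
    return corrupted, labels

vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran"]
tokens = ["the", "cat", "sat", "on", "the", "mat"] * 50
corrupted, labels = mask_tokens(tokens, vocab, rng=random.Random(42))
picked = sum(l is not None for l in labels)
# roughly 15% of the 300 positions are selected for prediction
```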
This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/bert).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-classification"/>
- A blog post on [BERT Text Classification in a different language](https://www.philschmid.de/bert-text-classification-in-a-different-language).
- A notebook for [Fine-tuning BERT (and friends) for multi-label text classification](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb).
- A notebook on how to [fine-tune BERT for multi-label classification using PyTorch](https://colab.research.google.com/github/abhmishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb).
- A notebook on how to [warm-start an EncoderDecoder model with BERT for summarization](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb).
- [`BertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [`TFBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb).
- [`FlaxBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb).
- [Text classification task guide](../tasks/sequence_classification)
<PipelineTag pipeline="token-classification"/>
- A blog post on how to use [Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition](https://www.philschmid.de/huggingface-transformers-keras-tf).
- A notebook for [Fine-tuning BERT for named-entity recognition](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb) using only the first wordpiece of each word in the word label during tokenization. To propagate the label of a word to all wordpieces, see this [version](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT.ipynb) of the notebook instead.
- [`BertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [`TFBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).
- [`FlaxBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification)
<PipelineTag pipeline="fill-mask"/>
- [`BertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [`TFBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/lang-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [`FlaxBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling task guide](../tasks/masked_language_modeling)
<PipelineTag pipeline="question-answering"/>
- [`BertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [`TFBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
- [`FlaxBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Question answering task guide](../tasks/question_answering)
**Multiple choice**
- [`BertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
- [`TFBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
- [Multiple choice task guide](../tasks/multiple_choice)
⚡️ **Inference**
- A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker).
- A blog post on how to [Accelerate BERT inference with DeepSpeed-Inference on GPUs](https://www.philschmid.de/bert-deepspeed-inference).
⚗️ **Pretraining**
- A blog post on [Pre-Training BERT with Hugging Face Transformers and Habana Gaudi](https://www.philschmid.de/pre-training-bert-habana).
🚀 **Deploy**
- A blog post on how to [Convert Transformers to ONNX with Hugging Face Optimum](https://www.philschmid.de/convert-transformers-to-onnx).
- A blog post on how to [Setup a Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS](https://www.philschmid.de/getting-started-habana-gaudi#conclusion).
- A blog post on [Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker-advanced).
- A blog post on [Serverless BERT with HuggingFace, AWS Lambda, and Docker](https://www.philschmid.de/serverless-bert-with-huggingface-aws-lambda-docker).
- A blog post on [Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler](https://www.philschmid.de/huggingface-amazon-sagemaker-training-compiler).
- A blog post on [Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker](https://www.philschmid.de/knowledge-distillation-bert-transformers).
## BertConfig
[[autodoc]] BertConfig
- all
## BertTokenizer
[[autodoc]] BertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
<frameworkcontent>
<pt>
## BertTokenizerFast
[[autodoc]] BertTokenizerFast
</pt>
<tf>
## TFBertTokenizer
[[autodoc]] TFBertTokenizer
</tf>
</frameworkcontent>
## Bert specific outputs
[[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput
[[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
[[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput
<frameworkcontent>
<pt>
## BertModel
[[autodoc]] BertModel
- forward
## BertForPreTraining
[[autodoc]] BertForPreTraining
- forward
## BertLMHeadModel
[[autodoc]] BertLMHeadModel
- forward
## BertForMaskedLM
[[autodoc]] BertForMaskedLM
- forward
## BertForNextSentencePrediction
[[autodoc]] BertForNextSentencePrediction
- forward
## BertForSequenceClassification
[[autodoc]] BertForSequenceClassification
- forward
## BertForMultipleChoice
[[autodoc]] BertForMultipleChoice
- forward
## BertForTokenClassification
[[autodoc]] BertForTokenClassification
- forward
## BertForQuestionAnswering
[[autodoc]] BertForQuestionAnswering
- forward
</pt>
<tf>
## TFBertModel
[[autodoc]] TFBertModel
- call
## TFBertForPreTraining
[[autodoc]] TFBertForPreTraining
- call
## TFBertLMHeadModel
[[autodoc]] TFBertLMHeadModel
- call
## TFBertForMaskedLM
[[autodoc]] TFBertForMaskedLM
- call
## TFBertForNextSentencePrediction
[[autodoc]] TFBertForNextSentencePrediction
- call
## TFBertForSequenceClassification
[[autodoc]] TFBertForSequenceClassification
- call
## TFBertForMultipleChoice
[[autodoc]] TFBertForMultipleChoice
- call
## TFBertForTokenClassification
[[autodoc]] TFBertForTokenClassification
- call
## TFBertForQuestionAnswering
[[autodoc]] TFBertForQuestionAnswering
- call
</tf>
<jax>
## FlaxBertModel
[[autodoc]] FlaxBertModel
- __call__
## FlaxBertForPreTraining
[[autodoc]] FlaxBertForPreTraining
- __call__
## FlaxBertForCausalLM
[[autodoc]] FlaxBertForCausalLM
- __call__
## FlaxBertForMaskedLM
[[autodoc]] FlaxBertForMaskedLM
- __call__
## FlaxBertForNextSentencePrediction
[[autodoc]] FlaxBertForNextSentencePrediction
- __call__
## FlaxBertForSequenceClassification
[[autodoc]] FlaxBertForSequenceClassification
- __call__
## FlaxBertForMultipleChoice
[[autodoc]] FlaxBertForMultipleChoice
- __call__
## FlaxBertForTokenClassification
[[autodoc]] FlaxBertForTokenClassification
- __call__
## FlaxBertForQuestionAnswering
[[autodoc]] FlaxBertForQuestionAnswering
- __call__
</jax>
</frameworkcontent>

<!--Copyright 2022 The HuggingFace Team and The OpenBMB Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CPMAnt
## Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, various compressed versions are provided to meet the requirements of different hardware configurations. [See more](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live)
This model was contributed by [OpenBMB](https://huggingface.co/openbmb). The original code can be found [here](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live).
## Resources
- A tutorial on [CPM-Live](https://github.com/OpenBMB/CPM-Live/tree/cpm-ant/cpm-live).
## CpmAntConfig
[[autodoc]] CpmAntConfig
- all
## CpmAntTokenizer
[[autodoc]] CpmAntTokenizer
- all
## CpmAntModel
[[autodoc]] CpmAntModel
- all
## CpmAntForCausalLM
[[autodoc]] CpmAntForCausalLM
- all

<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeBERTa-v2
## Overview
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019.

It builds on RoBERTa with disentangled attention and an enhanced mask decoder, trained with half of the data used in RoBERTa.
The abstract from the paper is the following:

*Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.*
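The disentangled attention mechanism from the abstract can be sketched as a sum of dot products, one per content/position pairing (a toy scalar sketch with made-up 2-d vectors; the real DeBERTa applies learned projection matrices to content and relative-position embeddings, and the paper drops the position-to-position term):

```python
def dot(a, b):
    # plain dot product of two equal-length vectors
    return sum(x * y for x, y in zip(a, b))

def disentangled_score(content_q, content_k, pos_q, pos_k):
    """Attention score as the sum of three disentangled terms:
    content-to-content, content-to-position and position-to-content."""
    c2c = dot(content_q, content_k)   # what the two words say
    c2p = dot(content_q, pos_k)       # query content vs. key's relative position
    p2c = dot(pos_q, content_k)       # query's relative position vs. key content
    return c2c + c2p + p2c

# toy 2-d content and position vectors for a query token and a key token
score = disentangled_score([1.0, 0.5], [0.8, 0.2], [0.1, 0.0], [0.0, 0.3])
```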
The following information is visible directly on the [original implementation repository](https://github.com/microsoft/DeBERTa). DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission, which achieved 89.9 versus the human baseline of 89.8. You can find more details about this submission in the authors' [blog](https://www.microsoft.com/en-us/research/blog/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark/).
New in v2:

- **Vocabulary** In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now a [sentencepiece-based](https://github.com/google/sentencepiece) tokenizer.
- **nGiE (nGram Induced Input Encoding)** The DeBERTa-v2 model uses an additional convolution layer aside from the first transformer layer to better learn the local dependency of input tokens.
- **Sharing the position projection matrix with the content projection matrix in the attention layer** Based on previous experiments, this saves parameters without affecting the performance.
- **Applying buckets to encode relative positions** The DeBERTa-v2 model uses a log bucket to encode relative positions, similar to T5.
- **900M model & 1.5B model** Two additional model sizes are available: 900M and 1.5B, which significantly improve the performance of downstream tasks.
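The log-bucket relative-position encoding mentioned above can be sketched as follows (a simplified symmetric variant for illustration only; the exact bucketing used by DeBERTa-v2 and T5 differs in detail):

```python
import math

def log_bucket(rel_pos, num_buckets=32, max_distance=128):
    """Map a signed relative position to a small bucket id: nearby offsets
    get their own bucket, distant ones share logarithmically wider buckets."""
    sign = 1 if rel_pos > 0 else -1
    mid = num_buckets // 2                      # exact buckets for |pos| < mid
    if abs(rel_pos) < mid:
        return rel_pos
    # logarithmic compression for larger distances, clipped at the edge
    log_ratio = math.log(abs(rel_pos) / mid) / math.log(max_distance / mid)
    bucket = mid + int(log_ratio * (mid - 1))
    return sign * min(bucket, num_buckets - 1)

# close positions map to themselves; far ones are compressed into shared buckets
buckets = [log_bucket(p) for p in (1, 5, 40, 120, -120)]
```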
This model was contributed by [DeBERTa](https://huggingface.co/DeBERTa). This model's TF 2.0 implementation was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/DeBERTa).
## Resources
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [ãã¹ã¯èšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## DebertaV2Config
[[autodoc]] DebertaV2Config
## DebertaV2Tokenizer
[[autodoc]] DebertaV2Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## DebertaV2TokenizerFast
[[autodoc]] DebertaV2TokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
<frameworkcontent>
<pt>
## DebertaV2Model
[[autodoc]] DebertaV2Model
- forward
## DebertaV2PreTrainedModel
[[autodoc]] DebertaV2PreTrainedModel
- forward
## DebertaV2ForMaskedLM
[[autodoc]] DebertaV2ForMaskedLM
- forward
## DebertaV2ForSequenceClassification
[[autodoc]] DebertaV2ForSequenceClassification
- forward
## DebertaV2ForTokenClassification
[[autodoc]] DebertaV2ForTokenClassification
- forward
## DebertaV2ForQuestionAnswering
[[autodoc]] DebertaV2ForQuestionAnswering
- forward
## DebertaV2ForMultipleChoice
[[autodoc]] DebertaV2ForMultipleChoice
- forward
</pt>
<tf>
## TFDebertaV2Model
[[autodoc]] TFDebertaV2Model
- call
## TFDebertaV2PreTrainedModel
[[autodoc]] TFDebertaV2PreTrainedModel
- call
## TFDebertaV2ForMaskedLM
[[autodoc]] TFDebertaV2ForMaskedLM
- call
## TFDebertaV2ForSequenceClassification
[[autodoc]] TFDebertaV2ForSequenceClassification
- call
## TFDebertaV2ForTokenClassification
[[autodoc]] TFDebertaV2ForTokenClassification
- call
## TFDebertaV2ForQuestionAnswering
[[autodoc]] TFDebertaV2ForQuestionAnswering
- call
## TFDebertaV2ForMultipleChoice
[[autodoc]] TFDebertaV2ForMultipleChoice
- call
</tf>
</frameworkcontent>