repo_id (stringclasses, 55 values) | file_path (stringlengths, 42-186) | content (stringlengths, 1-333k) | __index_level_0__ (int64, 0-0)
---|---|---|---|
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/tasks/question_answering.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Question answering
[[open-in-colab]]
<Youtube id="ajPx5LwJD-I"/>
Question answering tasks return an answer given a question. If you've ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you've used a question answering model before. There are two common types of question answering tasks:
- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.
This guide will show you how to:
1. Finetune [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/question-answering).
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
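Alternatively, if you are working in a terminal rather than a notebook, you can log in with the Hugging Face CLI:
```bash
huggingface-cli login
```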
## Load SQuAD dataset
Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset
>>> squad = load_dataset("squad", split="train[:5000]")
```
Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> squad = squad.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
'id': '5733be284776f41900661182',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'title': 'University_of_Notre_Dame'
}
```
There are several important fields here:
- `answers`: the starting location of the answer token and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.
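As a quick sanity check (a sketch, not part of the original guide), you can verify that the character offset stored in `answers` lines up with the answer text inside the `context`:
```py
>>> example = squad["train"][0]
>>> start = example["answers"]["answer_start"][0]
>>> answer_text = example["answers"]["text"][0]
>>> example["context"][start : start + len(answer_text)] == answer_text
True
```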
## Preprocess
<Youtube id="qgaM0weJHpA"/>
The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```
There are a few preprocessing steps particular to question answering tasks you should be aware of:
1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting `return_offsets_mapping=True`.
3. With the mapping in hand, you can find the start and end tokens of the answer. Use the [`~tokenizers.Encoding.sequence_ids`] method to find which part of the offset corresponds to the `question` and which corresponds to the `context`.
Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:
```py
>>> def preprocess_function(examples):
... questions = [q.strip() for q in examples["question"]]
... inputs = tokenizer(
... questions,
... examples["context"],
... max_length=384,
... truncation="only_second",
... return_offsets_mapping=True,
... padding="max_length",
... )
... offset_mapping = inputs.pop("offset_mapping")
... answers = examples["answers"]
... start_positions = []
... end_positions = []
... for i, offset in enumerate(offset_mapping):
... answer = answers[i]
... start_char = answer["answer_start"][0]
... end_char = answer["answer_start"][0] + len(answer["text"][0])
... sequence_ids = inputs.sequence_ids(i)
... # Find the start and end of the context
... idx = 0
... while sequence_ids[idx] != 1:
... idx += 1
... context_start = idx
... while sequence_ids[idx] == 1:
... idx += 1
... context_end = idx - 1
... # If the answer is not fully inside the context, label it (0, 0)
... if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
... start_positions.append(0)
... end_positions.append(0)
... else:
... # Otherwise it's the start and end token positions
... idx = context_start
... while idx <= context_end and offset[idx][0] <= start_char:
... idx += 1
... start_positions.append(idx - 1)
... idx = context_end
... while idx >= context_start and offset[idx][1] >= end_char:
... idx -= 1
... end_positions.append(idx + 1)
... inputs["start_positions"] = start_positions
... inputs["end_positions"] = end_positions
... return inputs
```
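As a quick sanity check (a sketch, not part of the original guide), you can run the function on a small slice of the dataset before mapping it over everything; the start and end positions should appear alongside the tokenizer outputs:
```py
>>> sample = preprocess_function(squad["train"][:2])
>>> sorted(sample.keys())
['attention_mask', 'end_positions', 'input_ids', 'start_positions']
```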
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need:
```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding.
<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]:
```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_squad["train"],
... eval_dataset=tokenized_squad["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
... init_lr=2e-5,
... num_warmup_steps=0,
... num_train_steps=total_train_steps,
... )
```
Then you can load DistilBERT with [`TFAutoModelForQuestionAnswering`]:
```py
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_squad["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_squad["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer)
```
The last thing to set up before you start training is a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_qa_model",
... tokenizer=tokenizer,
... )
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).
</Tip>
## Evaluate
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.
If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) chapter from the 🤗 Hugging Face Course!
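For reference, here is a minimal sketch of how the SQuAD metric from 🤗 Evaluate scores predictions once the model output has been post-processed into answer strings (the id below is taken from the example shown earlier):
```py
>>> import evaluate

>>> squad_metric = evaluate.load("squad")
>>> predictions = [{"id": "5733be284776f41900661182", "prediction_text": "Saint Bernadette Soubirous"}]
>>> references = [
...     {"id": "5733be284776f41900661182", "answers": {"text": ["Saint Bernadette Soubirous"], "answer_start": [515]}}
... ]
>>> squad_metric.compute(predictions=predictions, references=references)
{'exact_match': 100.0, 'f1': 100.0}
```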
## Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with a question and some context you'd like the model to predict:
```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for question answering with your model, and pass your text to it:
```py
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
'start': 10,
'end': 95,
'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```
Pass your inputs to the model and return the `logits`:
```py
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering
>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
... outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```
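Note that taking the two argmaxes independently can occasionally produce an end position before the start position. A stricter decoding (a sketch, not what this guide does above) scores every valid (start, end) pair instead:
```py
>>> import torch

>>> # Score every (start, end) pair, then mask out pairs where end < start.
>>> pair_scores = outputs.start_logits[0][:, None] + outputs.end_logits[0][None, :]
>>> valid = torch.triu(torch.ones_like(pair_scores)).bool()
>>> pair_scores = pair_scores.masked_fill(~valid, float("-inf"))
>>> answer_start_index, answer_end_index = divmod(int(pair_scores.argmax()), pair_scores.shape[1])
```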
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```
Pass your inputs to the model and return the `logits`:
```py
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```
Get the highest probability from the model output for the start and end positions:
```py
>>> import tensorflow as tf

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```
Decode the predicted tokens to get the answer:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/tasks/translation.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Translation
[[open-in-colab]]
<Youtube id="1JvfrvZgi6c"/>
Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between different language texts, but they can also be used for speech or some combination in between, like text-to-speech or speech-to-text.
This guide will show you how to:
1. Finetune [T5](https://huggingface.co/google-t5/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/translation).
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate sacrebleu
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load OPUS Books dataset
Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset
>>> books = load_dataset("opus_books", "en-fr")
```
Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> books = books["train"].train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> books["train"][0]
{'id': '90560',
'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientÎt nous fûmes rentrés dans notre élément.'}}
```
`translation`: an English and French translation of the text.
## Preprocess
<Youtube id="XAR8jnZZuUs"/>
The next step is to load a T5 tokenizer to process the English-French language pairs:
```py
>>> from transformers import AutoTokenizer
>>> checkpoint = "google-t5/t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```
The preprocessing function you want to create needs to:
1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Tokenize the input (English) and target (French) separately because you can't tokenize French text with a tokenizer pretrained on an English vocabulary.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.
```py
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "
>>> def preprocess_function(examples):
... inputs = [prefix + example[source_lang] for example in examples["translation"]]
... targets = [example[target_lang] for example in examples["translation"]]
... model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
... return model_inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
>>> tokenized_books = books.map(preprocess_function, batched=True)
```
Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> metric = evaluate.load("sacrebleu")
```
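As a quick check (a sketch, not part of the original guide), SacreBLEU expects a list of predictions and a list of reference lists, and a perfect match scores 100:
```py
>>> metric.compute(predictions=["hello there general kenobi"], references=[["hello there general kenobi"]])["score"]
100.0
```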
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the SacreBLEU score:
```py
>>> import numpy as np
>>> def postprocess_text(preds, labels):
... preds = [pred.strip() for pred in preds]
... labels = [[label.strip()] for label in labels]
... return preds, labels
>>> def compute_metrics(eval_preds):
... preds, labels = eval_preds
... if isinstance(preds, tuple):
... preds = preds[0]
... decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
... labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
... decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
... result = metric.compute(predictions=decoded_preds, references=decoded_labels)
... result = {"bleu": result["score"]}
... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
... result["gen_len"] = np.mean(prediction_lens)
... result = {k: round(v, 4) for k, v in result.items()}
... return result
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]:
```py
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_opus_books_model",
... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... weight_decay=0.01,
... save_total_limit=3,
... num_train_epochs=2,
... predict_with_generate=True,
... fp16=True,
... push_to_hub=True,
... )
>>> trainer = Seq2SeqTrainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_books["train"],
... eval_dataset=tokenized_books["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]:
```py
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_books["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... tokenized_books["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_opus_books_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for translation, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
</Tip>
## Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below:
```py
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for translation with your model, and pass your text to it:
```py
>>> from transformers import pipeline
# Change `xx` to the language of the input and `yy` to the language of the desired output.
# Examples: "en" for English, "fr" for French, "de" for German, "es" for Spanish, "zh" for Chinese, etc; translation_en_to_fr translates English to French
# You can view all the lists of languages here - https://huggingface.co/languages
>>> translator = pipeline("translation_xx_to_yy", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```
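Since this model translates English to French, you can also name the task explicitly instead of using the `xx`/`yy` placeholders:
```py
>>> translator = pipeline("translation_en_to_fr", model="my_awesome_opus_books_model")
>>> translator(text)
```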
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```
Use the [`~generation.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```
Decode the generated token ids back into text:
```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```
</pt>
<tf>
Tokenize the text and return the `input_ids` as TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```
Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
```py
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```
Decode the generated token ids back into text:
```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
```
</tf>
</frameworkcontent>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/tasks/monocular_depth_estimation.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Monocular depth estimation
Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a single image. In other words, it is the process of estimating the distance of objects in a scene from a single camera viewpoint.
Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, occlusion, and texture.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/depth-estimation).
</Tip>
In this guide you'll learn how to:
* create a depth estimation pipeline
* run depth estimation inference by hand
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install -q transformers
```
## Depth estimation pipeline
The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [`pipeline`].
Instantiate a pipeline from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=Depth-estimation&sort=downloads):
```py
>>> from transformers import pipeline
>>> checkpoint = "vinvino02/glpn-nyu"
>>> depth_estimator = pipeline("depth-estimation", model=checkpoint)
```
Next, choose an image to analyze:
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/>
</div>
Pass the image to the pipeline:
```py
>>> predictions = depth_estimator(image)
```
The pipeline returns a dictionary with two entries. The first one, called `predicted_depth`, is a tensor with the values being the depth expressed in meters for each pixel. The second one, `depth`, is a PIL image that visualizes the depth estimation result.
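As a small sketch (not part of the original guide), you can inspect the raw tensor before looking at the rendered image:
```py
>>> predictions["predicted_depth"].shape  # a torch.Tensor of per-pixel depths in meters
```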
Let's take a look at the visualized result:
```py
>>> predictions["depth"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/>
</div>
## Depth estimation inference by hand
Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?pipeline_tag=Depth-estimation&sort=downloads). Here we'll use the same checkpoint as before:
```py
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> checkpoint = "vinvino02/glpn-nyu"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
```
Prepare the image input for the model using the `image_processor`, which will take care of the necessary image transformations such as resizing and normalization:
```py
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
```
Pass the prepared inputs through the model:
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(pixel_values)
... predicted_depth = outputs.predicted_depth
```
Visualize the results:
```py
>>> import numpy as np
>>> # interpolate to original size
>>> prediction = torch.nn.functional.interpolate(
... predicted_depth.unsqueeze(1),
... size=image.size[::-1],
... mode="bicubic",
... align_corners=False,
... ).squeeze()
>>> output = prediction.numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
>>> depth
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/>
</div>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/tasks/image_captioning.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image captioning
[[open-in-colab]]
Image captioning is the task of predicting a caption for a given image. Common real world applications include aiding visually impaired people navigate through different situations. Therefore, image captioning helps to improve content accessibility for people by describing images to them.
This guide will show you how to:
* Fine-tune an image captioning model.
* Use the fine-tuned model for inference.
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate -q
pip install jiwer -q
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```python
from huggingface_hub import notebook_login
notebook_login()
```
## Load the Pokémon BLIP captions dataset
Use the 🤗 Datasets library to load a dataset that consists of {image-caption} pairs. To create your own image captioning dataset
in PyTorch, you can follow [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb).
```py
from datasets import load_dataset

ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds
```
```bash
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 833
})
})
```
The dataset has two features, `image` and `text`.
<Tip>
Many image captioning datasets contain multiple captions per image. In those cases, a common strategy is to randomly sample a caption amongst the available ones during training (a sketch of this follows below).
</Tip>
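As a minimal sketch of that strategy (assuming a hypothetical dataset whose `captions` column stores a list of captions per image; the Pokémon dataset used here has a single `text` caption instead), the sampling could look like this:
```python
import random


def sample_caption(example):
    # Hypothetical: pick one caption at random from a list-valued column.
    example["text"] = random.choice(example["captions"])
    return example


# Applied on the fly, a different caption can be drawn each time:
# ds = ds.map(sample_caption)
```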
Split the dataset's train split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```python
ds = ds["train"].train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
```
Let's visualize a couple of samples from the training set.
```python
from textwrap import wrap
import matplotlib.pyplot as plt
import numpy as np
def plot_images(images, captions):
plt.figure(figsize=(20, 20))
for i in range(len(images)):
ax = plt.subplot(1, len(images), i + 1)
caption = captions[i]
caption = "\n".join(wrap(caption, 12))
plt.title(caption)
plt.imshow(images[i])
plt.axis("off")
sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)]
sample_captions = [train_ds[i]["text"] for i in range(5)]
plot_images(sample_images_to_visualize, sample_captions)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"/>
</div>
## Preprocess the dataset
The dataset has two modalities (image and text), so the preprocessing pipeline will preprocess the images and the captions.
To do so, load the processor class associated with the model you are about to fine-tune.
```python
from transformers import AutoProcessor
checkpoint = "microsoft/git-base"
processor = AutoProcessor.from_pretrained(checkpoint)
```
The processor will internally pre-process the image (which includes resizing and pixel scaling) and tokenize the caption.
```python
def transforms(example_batch):
images = [x for x in example_batch["image"]]
captions = [x for x in example_batch["text"]]
inputs = processor(images=images, text=captions, padding="max_length")
inputs.update({"labels": inputs["input_ids"]})
return inputs
train_ds.set_transform(transforms)
test_ds.set_transform(transforms)
```
With the dataset ready, you can now set up the model for fine-tuning.
## Load a base model
Load ["microsoft/git-base"](https://huggingface.co/microsoft/git-base) into an [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) object.
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(checkpoint)
```
## Evaluate
Image captioning models are typically evaluated with the [Rouge Score](https://huggingface.co/spaces/evaluate-metric/rouge) or [Word Error Rate](https://huggingface.co/spaces/evaluate-metric/wer). For this guide, you will use the Word Error Rate (WER).
We use the 🤗 Evaluate library to do so. For potential limitations and other gotchas of the WER, refer to [this guide](https://huggingface.co/spaces/evaluate-metric/wer).
```python
from evaluate import load
import torch
wer = load("wer")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predicted = logits.argmax(-1)
decoded_labels = processor.batch_decode(labels, skip_special_tokens=True)
decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True)
wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels)
return {"wer_score": wer_score}
```
## Train!
ããã§ãã¢ãã«ã®åŸ®èª¿æŽãéå§ããæºåãæŽããŸãããããã«ã¯ ð€ [`Trainer`] ã䜿çšããŸãã
ãŸãã[`TrainingArguments`] ã䜿çšããŠãã¬ãŒãã³ã°åŒæ°ãå®çŸ©ããŸãã
```python
from transformers import TrainingArguments, Trainer
model_name = checkpoint.split("/")[1]
training_args = TrainingArguments(
output_dir=f"{model_name}-pokemon",
learning_rate=5e-5,
num_train_epochs=50,
fp16=True,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
gradient_accumulation_steps=2,
save_total_limit=3,
eval_strategy="steps",
eval_steps=50,
save_strategy="steps",
save_steps=50,
logging_steps=50,
remove_unused_columns=False,
push_to_hub=True,
label_names=["labels"],
load_best_model_at_end=True,
)
```
Then pass them along with the datasets and the model to the 🤗 [`Trainer`].
```python
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
```
To start training, simply call [`~Trainer.train`] on the [`Trainer`] object.
```python
trainer.train()
```
You should see the training loss drop smoothly as training progresses.
Once training is completed, share your model to the Hub with the [`~Trainer.push_to_hub`] method so everyone can use your model:
```python
trainer.push_to_hub()
```
## Inference
Take a sample image from `test_ds` to test the model.
```python
from PIL import Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)
image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png" alt="Test image"/>
</div>
Prepare the image for the model.
```python
device = "cuda" if torch.cuda.is_available() else "cpu"
inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values
```
Call [`generate`] and decode the predictions.
```python
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
```bash
a drawing of a pink and blue pokemon
```
Looks like the fine-tuned model generated a pretty good caption!
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/tasks/summarization.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Summarization
[[open-in-colab]]
<Youtube id="yHnr5Dk2zCI"/>
Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:
- Extractive: extract the most relevant information from a document.
- Abstractive: generate new text that captures the most relevant information.
This guide will show you how to:
1. Finetune [T5](https://huggingface.co/google-t5/t5-small) on the California state bill subset of the [BillSum](https://huggingface.co/datasets/billsum) dataset for abstractive summarization.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/summarization).
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate rouge_score
```
We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load BillSum dataset
Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library:
```py
>>> from datasets import load_dataset
>>> billsum = load_dataset("billsum", split="ca_test")
```
Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
```py
>>> billsum = billsum.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> billsum["train"][0]
{'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.',
'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employeeâs or dependentâs actual or perceived gender identity, including, but not limited to, the employeeâs or dependentâs identification as transgender.\n(2) For purposes of this section, âcontractâ includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractorâs operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractorâs presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractorâs insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision 
of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.',
'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'}
```
There are two fields that you'll want to use:
- `text`: the text of the bill, which'll be the input to the model.
- `summary`: a condensed version of `text`, which'll be the model target (a quick length check follows below).
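As a quick look (a small sketch, not part of the original guide), the bills are far longer than their summaries, which is why the inputs and labels get different `max_length` values during preprocessing:
```py
>>> example = billsum["train"][0]
>>> len(example["text"]) > len(example["summary"])
True
```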
## Preprocess
The next step is to load a T5 tokenizer to process `text` and `summary`:
```py
>>> from transformers import AutoTokenizer
>>> checkpoint = "google-t5/t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```
The preprocessing function you want to create needs to:
1. Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Use the keyword `text_target` argument when tokenizing labels.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.
```py
>>> prefix = "summarize: "
>>> def preprocess_function(examples):
... inputs = [prefix + doc for doc in examples["text"]]
... model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
... labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True)
... model_inputs["labels"] = labels["input_ids"]
... return model_inputs
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
>>> tokenized_billsum = billsum.map(preprocess_function, batched=True)
```
Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForSeq2Seq
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> rouge = evaluate.load("rouge")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the ROUGE metric:
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
... labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
... result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
... result["gen_len"] = np.mean(prediction_lens)
... return {k: round(v, 4) for k, v in result.items()}
```
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]:
```py
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the ROUGE metric and save the training checkpoint.
2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="my_awesome_billsum_model",
... eval_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... weight_decay=0.01,
... save_total_limit=3,
... num_train_epochs=4,
... predict_with_generate=True,
... fp16=True,
... push_to_hub=True,
... )
>>> trainer = Seq2SeqTrainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_billsum["train"],
... eval_dataset=tokenized_billsum["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]:
```py
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_billsum["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... tokenized_billsum["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
The last two things to set up before you start training are to compute the ROUGE score from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_billsum_model",
... tokenizer=tokenizer,
... )
```
次ã«ãã³ãŒã«ããã¯ããŸãšããŠãã³ãã«ããŸãã
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠[`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```
ãã¬ãŒãã³ã°ãå®äºãããšãã¢ãã«ã¯èªåçã«ããã«ã¢ããããŒãããã誰ã§ã䜿çšã§ããããã«ãªããŸãã
</tf>
</frameworkcontent>
<Tip>
èŠçŽçšã«ã¢ãã«ã埮調æŽããæ¹æ³ã®ãã詳现ãªäŸã«ã€ããŠã¯ã察å¿ããã»ã¯ã·ã§ã³ãåç§ããŠãã ããã
[PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)
ãŸã㯠[TensorFlow ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)ã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
èŠçŽãããããã¹ããèãåºããŸãã T5 ã®å Žåãäœæ¥äžã®ã¿ã¹ã¯ã«å¿ããŠå¥åã«æ¥é èŸãä»ããå¿èŠããããŸããèŠçŽããã«ã¯ã以äžã«ç€ºãããã«å¥åã«ãã¬ãã£ãã¯ã¹ãä»ããå¿èŠããããŸãã
```py
>>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes."
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠèŠçŽçšã® `pipeline` ãã€ã³ã¹ã¿ã³ã¹åããããã¹ããããã«æž¡ããŸãã
```py
>>> from transformers import pipeline
>>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model")
>>> summarizer(text)
[{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}]
```
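The pipeline also forwards standard generation parameters, so you can nudge the summary length without leaving the high-level API (a small sketch; `min_length` and `max_length` are counted in tokens):

```py
>>> summarizer(text, min_length=30, max_length=80)
```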
å¿èŠã«å¿ããŠã`pipeline`ã®çµæãæåã§è€è£œããããšãã§ããŸãã
<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```
[`~generation.GenerationMixin.generate`] ã¡ãœããã䜿çšããŠèŠçŽãäœæããŸããããŸããŸãªããã¹ãçææŠç¥ãšçæãå¶åŸ¡ããããã®ãã©ã¡ãŒã¿ãŒã®è©³çŽ°ã«ã€ããŠã¯ã[Text Generation](../main_classes/text_generation) API ã確èªããŠãã ããã
```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
```
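Greedy decoding is used above; for potentially better summaries you can try beam search instead (a sketch using standard [`~generation.GenerationMixin.generate`] arguments):

```py
>>> outputs = model.generate(inputs, max_new_tokens=100, num_beams=4, early_stopping=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```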
</pt>
<tf>
ããã¹ããããŒã¯ã³åãã`input_ids`ã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```
[`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ã¡ãœããã䜿çšããŠèŠçŽãäœæããŸããããŸããŸãªããã¹ãçææŠç¥ãšçæãå¶åŸ¡ããããã®ãã©ã¡ãŒã¿ãŒã®è©³çŽ°ã«ã€ããŠã¯ã[Text Generation](../main_classes/text_generation) API ã確èªããŠãã ããã
```py
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Visual Question Answering
[[open-in-colab]]
Visual Question Answering (VQA) ã¯ãç»åã«åºã¥ããŠèªç±åœ¢åŒã®è³ªåã«çããã¿ã¹ã¯ã§ãããã®ã¿ã¹ã¯ããµããŒãããã¢ãã«ãžã®å¥åã¯éåžžãç»åãšè³ªåã®çµã¿åããã§ãããåºåã¯èªç¶èšèªã§è¡šçŸãããçãã§ãã
VQA ã®æ³šç®ãã¹ã䜿çšäŸã«ã¯æ¬¡ã®ãããªãã®ããããŸãã
* èŠèŠé害èåãã®ã¢ã¯ã»ã·ããªã㣠ã¢ããªã±ãŒã·ã§ã³ã
* æè²: è¬çŸ©ãæç§æžã§ç€ºãããŠããèŠèŠçãªè³æã«ã€ããŠè³ªåãæããããããšã VQA ã¯ãã€ã³ã¿ã©ã¯ãã£ããªåç©é€šã®å±ç€ºç©ãå²è·¡ã§ãå©çšã§ããŸãã
* ã«ã¹ã¿ã㌠ãµãŒãã¹ãšé»ååååŒ: VQA ã¯ããŠãŒã¶ãŒã補åã«ã€ããŠè³ªåã§ããããã«ããããšã§ãŠãŒã¶ãŒ ãšã¯ã¹ããªãšã³ã¹ãåäžãããŸãã
* ç»åæ€çŽ¢: VQA ã¢ãã«ã䜿çšããŠãç¹å®ã®ç¹åŸŽãæã€ç»åãæ€çŽ¢ã§ããŸããããšãã°ããŠãŒã¶ãŒã¯ãç¬ã¯ããŸãã?ããšå°ããããšãã§ããŸããäžé£ã®ç»åããç¬ãåã£ãŠãããã¹ãŠã®ç»åãæ€çŽ¢ããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ãåŠã³ãŸãã
- [`Graphcore/vqa` ããŒã¿ã»ãã](https://huggingface.co/datasets/Graphcore/vqa) äžã§åé¡ VQA ã¢ãã«ãç¹ã« [ViLT](../model_doc/vilt) ã埮調æŽããŸãã
- 埮調æŽããã ViLT ãæšè«ã«äœ¿çšããŸãã
- BLIP-2 ãªã©ã®çæã¢ãã«ã䜿çšããŠãŒãã·ã§ãã VQA æšè«ãå®è¡ããŸãã
## Fine-tuning ViLT
ViLT ã¢ãã«ã¯ãVision Transformer (ViT) ã«ããã¹ãåã蟌ã¿ãçµã¿èŸŒãããšã§ãèŠèŠãšèšèªã®äºåãã¬ãŒãã³ã° (VLP) ã®ããã®æå°éã®èšèšãå¯èœã«ããŸãããã®ã¢ãã«ã¯ãããã€ãã®äžæµã¿ã¹ã¯ã«äœ¿çšã§ããŸãã VQA ã¿ã¹ã¯ã®å Žåãåé¡çšã® head (`[CLS]` ããŒã¯ã³ã®æçµçãªé衚瀺ç¶æã®æäžéšã«ããç·åœ¢å±€) ãé眮ãããã©ã³ãã ã«åæåãããŸãã
ãããã£ãŠãèŠèŠç質åå¿ç㯠**åé¡åé¡** ãšããŠæ±ãããŸãã
BLIPãBLIP-2ãInstructBLIP ãªã©ã®æè¿ã®ã¢ãã«ã¯ãVQA ãçæã¿ã¹ã¯ãšããŠæ±ããŸãããã®ã¬ã€ãã®åŸåã§ã¯ããŒãã·ã§ãã VQA æšè«ã«ãããã䜿çšããæ¹æ³ã瀺ããŸãã
å§ããåã«ãå¿èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q transformers datasets
```
ã¢ãã«ãã³ãã¥ããã£ãšå±æããããšããå§ãããŸãã Hugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããŠãð€ ããã«ã¢ããããŒãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
ã¢ãã«ã®ãã§ãã¯ãã€ã³ããã°ããŒãã«å€æ°ãšããŠå®çŸ©ããŸãããã
```py
>>> model_checkpoint = "dandelin/vilt-b32-mlm"
```
## Load the data
説æã®ç®çã§ããã®ã¬ã€ãã§ã¯ã泚éä»ãã®èŠèŠçãªè³ªåã«çãããGraphcore/vqaãããŒã¿ã»ããã®éåžžã«å°ããªãµã³ãã«ã䜿çšããŸããå®åšãªããŒã¿ã»ãã㯠[ð€ Hub](https://huggingface.co/datasets/Graphcore/vqa) ã§èŠã€ããããšãã§ããŸãã
[`Graphcore/vqa` ããŒã¿ã»ãã](https://huggingface.co/datasets/Graphcore/vqa) ã®ä»£ããã«ãå¬åŒ [VQA ããŒã¿ã»ãã ããŒãž](https://visualqa.org/download.html) ããåãããŒã¿ãæåã§ååŸããããšãã§ããŸããã«ã¹ã¿ã ããŒã¿ã䜿çšãããã¥ãŒããªã¢ã«ããã©ããŒãããå Žåã¯ãð€ ããŒã¿ã»ããã®ããã¥ã¡ã³ãã®ã¬ã€ã [ç»åããŒã¿ã»ãããäœæãã](https://huggingface.co/docs/datasets/image_dataset#loading-script) ã§æ¹æ³ã確èªããŠãã ããã
æ€èšŒåå²ããæåã® 200 åã®äŸãããŒãããããŒã¿ã»ããã®æ©èœã調ã¹ãŠã¿ãŸãããã
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
>>> dataset
Dataset({
features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
num_rows: 200
})
```
ããŒã¿ã»ããã®ç¹åŸŽãç解ããããã«äŸãèŠãŠã¿ãŸãããã
```py
>>> dataset[0]
{'question': 'Where is he looking?',
'question_type': 'none of the above',
'question_id': 262148000,
'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
'answer_type': 'other',
'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
'weights': [0.30000001192092896,
1.0,
0.30000001192092896,
0.30000001192092896]}}
```
ãã®ã¿ã¹ã¯ã«é¢é£ããæ©èœã«ã¯æ¬¡ã®ãã®ããããŸãã
* `question`: ç»åããåçãã質å
* `image_id`: 質åãåç§ããç»åãžã®ãã¹
* `label`: 泚é
æ®ãã®æ©èœã¯å¿èŠãªãã®ã§åé€ã§ããŸãã
```py
>>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```
ã芧ã®ãšããã`label`æ©èœã«ã¯ãããŸããŸãªãã¥ãŒãã³ã»ã¢ãããŒã¿ãŒã«ãã£ãŠåéããããåã質åã«å¯Ÿããè€æ°ã®åç (ããã§ã¯`id`ãšåŒã³ãŸã) ãå«ãŸããŠããŸãã
質åã«å¯Ÿããçãã¯äž»èŠ³çãªãã®ã«ãªãå¯èœæ§ãããããã§ãããã®å Žåãåé¡ã¯ "圌ã¯ã©ããèŠãŠããã®ãïŒ"ãšããããšã§ããäžéšã®äººãã«ã¯ "ããŠã³" ãšãã泚éãä»ããããä»ã®ãã®ã«ã¯ "ããŒãã«ã§" ãšãã泚éãä»ããããå¥ã®æ³šéã«ã¯ "ã¹ã±ãŒãããŒã" ãšãã泚éãä»ããããŸããã
ç»åãèŠãŠãã©ã®çããåºãããèããŠãã ããã
```python
>>> from PIL import Image
>>> image = Image.open(dataset[0]['image_id'])
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/>
</div>
質åãšåçã®ãããŸããã®ããããã®ãããªããŒã¿ã»ããã¯ãã«ãã©ãã«åé¡åé¡ãšããŠæ±ãããŸã (è€æ°ã®åçãæå¹ã§ããå¯èœæ§ããããŸã)ãããã«ãã¯ã³ããã ãšã³ã³ãŒãããããã¯ãã«ãäœæããã ãã§ã¯ãªãã泚éåã«ç¹å®ã®åçãåºçŸããåæ°ã«åºã¥ããœãã ãšã³ã³ãŒãã£ã³ã°ãäœæããŸãã
ããšãã°ãäžã®äŸã§ã¯ã"down"ãšããåçãä»ã®åçãããé »ç¹ã«éžæããããããããã®ã¹ã³ã¢ (ããŒã¿ã»ããã§ã¯`weight`ãšåŒã°ããŸã) 㯠1.0 ã§ãæ®ãã®åçã®ã¹ã³ã¢ã¯ 1.0 æªæºã§ãã
åŸã§é©åãªåé¡ãããã䜿çšããŠã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããããã«ã2 ã€ã®èŸæžãäœæããŸãããã
ã©ãã«åãæŽæ°ã«å€æããããŸãã¯ãã®é:
```py
>>> import itertools
>>> labels = [item['ids'] for item in dataset['label']]
>>> flattened_labels = list(itertools.chain(*labels))
>>> unique_labels = list(set(flattened_labels))
>>> label2id = {label: idx for idx, label in enumerate(unique_labels)}
>>> id2label = {idx: label for label, idx in label2id.items()}
```
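To make the soft encoding described above concrete, here is a minimal sketch (for illustration only, not part of the processing pipeline) that builds the target vector for the first example by hand:

```py
>>> import torch

>>> example_label = dataset[0]["label"]
>>> # one slot per possible answer; most stay 0, annotated answers get their weight
>>> target = torch.zeros(len(id2label))
>>> for answer, weight in zip(example_label["ids"], example_label["weights"]):
...     target[label2id[answer]] = weight
```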
ãããã³ã°ãã§ããã®ã§ãæååã®åçããã® ID ã«çœ®ãæããããã«ååŠçããã䟿å©ã«ããããã«ããŒã¿ã»ããããã©ããåããããšãã§ããŸãã
```python
>>> def replace_ids(inputs):
... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
... return inputs
>>> dataset = dataset.map(replace_ids)
>>> flat_dataset = dataset.flatten()
>>> flat_dataset.features
{'question': Value(dtype='string', id=None),
'image_id': Value(dtype='string', id=None),
'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
```
## Preprocessing data
次ã®ã¹ãããã§ã¯ãViLT ããã»ããµãããŒãããŠãã¢ãã«ã®ç»åããŒã¿ãšããã¹ã ããŒã¿ãæºåããŸãã
[`ViltProcessor`] ã¯ãBERT ããŒã¯ãã€ã¶ãŒãš ViLT ç»åããã»ããµã䟿å©ãªåäžããã»ããµã«ã©ããããŸãã
```py
>>> from transformers import ViltProcessor
>>> processor = ViltProcessor.from_pretrained(model_checkpoint)
```
ããŒã¿ãååŠçããã«ã¯ã[`ViltProcessor`] ã䜿çšããŠç»åãšè³ªåããšã³ã³ãŒãããå¿èŠããããŸããããã»ããµãŒã¯ [`BertTokenizerFast`] ã䜿çšããŠããã¹ããããŒã¯ã³åããããã¹ã ããŒã¿ã® `input_ids`ã`attention_mask`ãããã³ `token_type_ids` ãäœæããŸãã
ç»åã«é¢ããŠã¯ãããã»ããµã¯ [`ViltImageProcessor`] ãå©çšããŠç»åã®ãµã€ãºå€æŽãšæ£èŠåãè¡ãã`pixel_values` ãš `pixel_mask` ãäœæããŸãã
ãããã®ååŠçã¹ãããã¯ãã¹ãŠåéšã§è¡ããã`processor`ãåŒã³åºãã ãã§æžã¿ãŸãããã ããããã§ãå¿èŠãªã®ã¯ã察象ã®ã©ãã«ãæºåããããšã§ãããã®è¡šçŸã§ã¯ãåèŠçŽ ãèããããçã (ã©ãã«) ã«å¯Ÿå¿ããæ£è§£ã®å Žåã¯èŠçŽ ã«ããããã®ã¹ã³ã¢ (éã¿) ãèšå®ãããæ®ãã®èŠçŽ 㯠0 ã«èšå®ãããŸãã
次ã®é¢æ°ã¯ãç»åãšè³ªåã« `processor` ãé©çšããäžã§èª¬æããããã«ã©ãã«ããã©ãŒãããããŸãã
```py
>>> import torch
>>> def preprocess_data(examples):
... image_paths = examples['image_id']
... images = [Image.open(image_path) for image_path in image_paths]
... texts = examples['question']
... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")
... for k, v in encoding.items():
... encoding[k] = v.squeeze()
... targets = []
... for labels, scores in zip(examples['label.ids'], examples['label.weights']):
... target = torch.zeros(len(id2label))
... for label, score in zip(labels, scores):
... target[label] = score
... targets.append(target)
... encoding["labels"] = targets
... return encoding
```
ããŒã¿ã»ããåšäœã«ååŠçé¢æ°ãé©çšããã«ã¯ãð€ Datasets [`~datasets.map`] é¢æ°ã䜿çšããŸãã`map` ãé«éåããã«ã¯ã`batched=True` ãèšå®ããŠããŒã¿ã»ããã®è€æ°ã®èŠçŽ ãäžåºŠã«åŠçããŸãããã®æç¹ã§ãäžèŠãªåã¯èªç±ã«åé€ããŠãã ããã
```py
>>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights'])
>>> processed_dataset
Dataset({
features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
num_rows: 200
})
```
æåŸã®ã¹ããããšããŠã[`DefaultDataCollator`] ã䜿çšããŠãµã³ãã«ã®ããããäœæããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
## Train the model
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`ViltForQuestionAnswering`] 㧠ViLT ãããŒãããŸãããã®ãšããã©ãã«ã®æ°ãšã©ãã«ãããã³ã°ãæå®ããŸã:
```py
>>> from transformers import ViltForQuestionAnswering
>>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id)
```
ãã®æç¹ã§æ®ã£ãŠããã¹ããã㯠3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸãã
```py
>>> from transformers import TrainingArguments
>>> repo_id = "MariaK/vilt_finetuned_200"
>>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
2. ãã¬ãŒãã³ã°åŒæ°ãã¢ãã«ãããŒã¿ã»ãããããã»ããµãŒãããŒã¿ç§ååšãšãšãã« [`Trainer`] ã«æž¡ããŸãã
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=processed_dataset,
... tokenizer=processor,
... )
```
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠæçµã¢ãã«ã ð€ ããã«å±æãã誰ããã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
## Inference
ViLT ã¢ãã«ã埮調æŽããð€ Hub ã«ã¢ããããŒãããã®ã§ããããæšè«ã«äœ¿çšã§ããŸããæšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ãã
```py
>>> from transformers import pipeline
>>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```
ãã®ã¬ã€ãã®ã¢ãã«ã¯ 200 ã®äŸã§ã®ã¿ãã¬ãŒãã³ã°ãããŠãããããå€ããæåŸããªãã§ãã ãããå°ãªããšãããŒã¿ããäœããåŠç¿ãããã©ãããèŠãŠã¿ãŸããããæšè«ã説æããããã«ãããŒã¿ã»ããããæåã®äŸãåãåºããŸãã
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> print(question)
>>> pipe(image, question, top_k=1)
"Where is he looking?"
[{'score': 0.5498199462890625, 'answer': 'down'}]
```
ããŸãèªä¿¡ããããŸããããã¢ãã«ã¯ç¢ºãã«äœããåŠç¿ããŸãããããå€ãã®äŸãšããé·ããã¬ãŒãã³ã°ãè¡ããšãã¯ããã«è¯ãçµæãåŸãããŸãã
å¿èŠã«å¿ããŠããã€ãã©ã€ã³ã®çµæãæåã§è€è£œããããšãã§ããŸãã
1. ç»åãšè³ªåãååŸããã¢ãã«ã®ããã»ããµã䜿çšããŠã¢ãã«çšã«æºåããŸãã
2. ã¢ãã«ãéããŠçµæãŸãã¯ååŠçã転éããŸãã
3. ããžãããããæãå¯èœæ§ã®é«ãåçã® ID ãååŸãã`id2label` ã§å®éã®åçãèŠã€ããŸãã
```py
>>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")
>>> image = Image.open(example['image_id'])
>>> question = example['question']
>>> # prepare inputs
>>> inputs = processor(image, question, return_tensors="pt")
>>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")
>>> # forward pass
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits
>>> idx = logits.argmax(-1).item()
>>> print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```
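Since ViLT's VQA head is trained with a multi-label objective, you can also look at several top answers instead of just the best one (a small sketch reusing the `logits` from above; `sigmoid` turns each logit into an independent score):

```py
>>> probs = torch.sigmoid(logits)
>>> top_probs, top_ids = probs[0].topk(5)
>>> for p, i in zip(top_probs, top_ids):
...     print(model.config.id2label[i.item()], round(p.item(), 3))
```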
## Zero-shot VQA
以åã®ã¢ãã«ã§ã¯ãVQA ãåé¡ã¿ã¹ã¯ãšããŠæ±ããŸãããBLIPãBLIP-2ãInstructBLIP ãªã©ã®äžéšã®æè¿ã®ã¢ãã«ã¯ãVQA ãçæã¿ã¹ã¯ãšããŠæ±ããŸããäŸãšã㊠[BLIP-2](../model_doc/blip-2) ãèããŠã¿ãŸãããããã®ã¢ãã«ã¯ãäºåã«ãã¬ãŒãã³ã°ãããããžã§ã³ ãšã³ã³ãŒããŒãš LLM ãä»»æã«çµã¿åãããŠäœ¿çšã§ãããæ°ããããžã¥ã¢ã«èšèªäºåãã¬ãŒãã³ã°ã®ãã©ãã€ã ãå°å¥ããŸãã (詳现ã«ã€ããŠã¯ã[BLIP-2 ããã°æçš¿](https://huggingface.co/blog/blip-2) ãåç§ããŠãã ãã)ãããã«ãããèŠèŠçãªè³ªåå¿çãå«ãè€æ°ã®èŠèŠèšèªã¿ã¹ã¯ã§æå端ã®çµæãéæããããšãã§ããŸãã
ãã®ã¢ãã«ã VQA ã«äœ¿çšããæ¹æ³ã説æããŸãããããŸããã¢ãã«ãããŒãããŸããããããã§ã¯ãå©çšå¯èœãªå Žåã¯ã¢ãã«ãæ瀺çã« GPU ã«éä¿¡ããŸãããã¬ãŒãã³ã°æã«ã¯ [`Trainer`] ããããèªåçã«åŠçãããããäºåã«è¡ãå¿èŠã¯ãããŸããã§ããã
```py
>>> from transformers import AutoProcessor, Blip2ForConditionalGeneration
>>> import torch
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
```
ã¢ãã«ã¯ç»åãšããã¹ããå¥åãšããŠåãåããããVQA ããŒã¿ã»ããã®æåã®äŸãšãŸã£ããåãç»åãšè³ªåã®ãã¢ã䜿çšããŠã¿ãŸãããã
```py
>>> example = dataset[0]
>>> image = Image.open(example['image_id'])
>>> question = example['question']
```
èŠèŠçãªè³ªåå¿çã¿ã¹ã¯ã« BLIP-2 ã䜿çšããã«ã¯ãããã¹ã ããã³ãããç¹å®ã®åœ¢åŒ (`Question: {} Answer:`) ã«åŸãå¿èŠããããŸãã
```py
>>> prompt = f"Question: {question} Answer:"
```
次ã«ãã¢ãã«ã®ããã»ããµã§ç»å/ããã³ãããååŠçããåŠçãããå¥åãã¢ãã«ã«æž¡ããåºåããã³ãŒãããå¿èŠããããŸãã
```py
>>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=10)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
>>> print(generated_text)
"He is looking at the crowd"
```
ã芧ã®ãšãããã¢ãã«ã¯çŸ€è¡ãšé¡ã®åã (äžãåããŠãã) ãèªèããŸãããã芳客ãã¹ã±ãŒã¿ãŒã®åŸãã«ãããšããäºå®ãèŠéããŠããŸãããããã§ãã人éã泚éãä»ããããŒã¿ã»ãããååŸããããšãäžå¯èœãªå Žåã«ã¯ããã®ã¢ãããŒãã«ãããæçšãªçµæãããã«åŸãããšãã§ããŸãã
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Causal language modeling
[[open-in-colab]]
èšèªã¢ããªã³ã°ã«ã¯ãå æçã¢ããªã³ã°ãšãã¹ã¯ãããèšèªã¢ããªã³ã°ã® 2 ã€ã®ã¿ã€ãããããŸãããã®ã¬ã€ãã§ã¯ãå æé¢ä¿ã®ããèšèªã¢ããªã³ã°ã«ã€ããŠèª¬æããŸãã
å æèšèªã¢ãã«ã¯ããã¹ãçæã«ãã䜿çšãããŸãããããã®ã¢ãã«ã¯ãç¬èªã®ããã¹ã ã¢ããã³ãã£ãŒã Copilot ã CodeParrot ãªã©ã®ã€ã³ããªãžã§ã³ããªã³ãŒãã£ã³ã° ã¢ã·ã¹ã¿ã³ããšãã£ããã¯ãªãšã€ãã£ããªã¢ããªã±ãŒã·ã§ã³ã«äœ¿çšã§ããŸãã
<Youtube id="Vpjb1lu0MDk"/>
å æèšèªã¢ããªã³ã°ã¯ãäžé£ã®ããŒã¯ã³åã®æ¬¡ã®ããŒã¯ã³ãäºæž¬ããŸããã¢ãã«ã¯å·ŠåŽã®ããŒã¯ã³ã«ã®ã¿å¯Ÿå¿ã§ãããããã¯ãã¢ãã«ãå°æ¥ã®ããŒã¯ã³ãèªèã§ããªãããšãæå³ããŸãã GPT-2 ã¯å æçèšèªã¢ãã«ã®äžäŸã§ãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [ELI5](https://huggingface.co/datasets/eli5) ããŒã¿ã»ããã® [r/askscience](https://www.reddit.com/r/askscience/) ãµãã»ãã㧠[DistilGPT2](https://huggingface.co/distilbert/distilgpt2) ã埮調æŽããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/text-generation) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install transformers datasets evaluate
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load ELI5 dataset
ãŸããELI5 ããŒã¿ã»ããã® r/askscience ãµãã»ããã®å°ãããµãã»ããã ð€ ããŒã¿ã»ãã ã©ã€ãã©ãªããããŒãããŸãã
ããã«ãããå®åšãªããŒã¿ã»ããã®ãã¬ãŒãã³ã°ã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> from datasets import load_dataset
>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train_asks` ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
'score': [6, 3],
'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
'answers_urls': {'url': []},
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls': {'url': []}}
```
ããã¯å€ãã®ããšã®ããã«èŠãããããããŸããããå®éã«é¢å¿ãããã®ã¯`text`ãã£ãŒã«ãã ãã§ããèšèªã¢ããªã³ã° ã¿ã¹ã¯ã®åªããŠããç¹ã¯ã次ã®åèªãã®ãã®ãã©ãã«ã«ãªããããå¥éã®ã©ãã« (æåž«ãªãã¿ã¹ã¯ãšãåŒã°ããŸã) ã¯å¿èŠãªãããšã§ãã
## Preprocess
<Youtube id="ma1TrR7gE7I"/>
次ã®ã¹ãããã¯ã`text`ãµããã£ãŒã«ããåŠçããããã« DistilGPT2 ããŒã¯ãã€ã¶ãŒãããŒãããããšã§ãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
```
äžã®äŸãããããããã«ã`text`ãã£ãŒã«ãã¯å®éã«ã¯`answers`åã«ãã¹ããããŠããŸããã€ãŸãã[`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) ã¡ãœããã䜿çšããŠããã¹ããããæ§é ãã `text` ãµããã£ãŒã«ããæœåºããå¿èŠããããŸãã
```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
'answers.score': [6, 3],
'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
"Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
'answers_urls.url': [],
'document': '',
'q_id': 'nyxfp',
'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
'subreddit': 'askscience',
'title': 'Few questions about this space walk photograph.',
'title_urls.url': []}
```
`answers`æ¥é èŸã§ç€ºãããããã«ãåãµããã£ãŒã«ãã¯åå¥ã®åã«ãªãã`text`ãã£ãŒã«ãã¯ãªã¹ãã«ãªããŸãããåæãåå¥ã«ããŒã¯ã³åãã代ããã«ããªã¹ããæååã«å€æããŠããããããšããŠããŒã¯ã³åã§ããããã«ããŸãã
以äžã¯ãåäŸã®æååã®ãªã¹ããçµåããçµæãããŒã¯ã³åããæåã®ååŠçé¢æ°ã§ãã
```py
>>> def preprocess_function(examples):
... return tokenizer([" ".join(x) for x in examples["answers.text"]])
```
ãã®ååŠçé¢æ°ãããŒã¿ã»ããåšäœã«é©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.map`] ã¡ãœããã䜿çšããŸãã `map` é¢æ°ãé«éåããã«ã¯ã`batched=True` ãèšå®ããŠããŒã¿ã»ããã®è€æ°ã®èŠçŽ ãäžåºŠã«åŠçãã`num_proc` ã§ããã»ã¹ã®æ°ãå¢ãããŸããäžèŠãªåã¯åé€ããŸãã
```py
>>> tokenized_eli5 = eli5.map(
... preprocess_function,
... batched=True,
... num_proc=4,
... remove_columns=eli5["train"].column_names,
... )
```
ãã®ããŒã¿ã»ããã«ã¯ããŒã¯ã³ ã·ãŒã±ã³ã¹ãå«ãŸããŠããŸããããã®äžéšã¯ã¢ãã«ã®æ倧å¥åé·ãããé·ããªããŸãã
2 çªç®ã®ååŠçé¢æ°ã䜿çšããŠã
- ãã¹ãŠã®ã·ãŒã±ã³ã¹ãé£çµããŸã
- é£çµãããã·ãŒã±ã³ã¹ã`block_size`ã§å®çŸ©ãããçããã£ã³ã¯ã«åå²ããŸãã`block_size`ã¯ãæ倧å¥åé·ããçããGPU RAM ã«ãšã£ãŠååã«çãé·ãã§ããå¿èŠããããŸãã
```py
>>> block_size = 128
>>> def group_texts(examples):
... # Concatenate all texts.
... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
... total_length = len(concatenated_examples[list(examples.keys())[0]])
... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
... # customize this part to your needs.
... if total_length >= block_size:
... total_length = (total_length // block_size) * block_size
... # Split by chunks of block_size.
... result = {
... k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
... for k, t in concatenated_examples.items()
... }
... result["labels"] = result["input_ids"].copy()
... return result
```
Apply the `group_texts` function over the entire dataset:
```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
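If you want to see what the chunking does before mapping it over the real data, you can run `group_texts` on a toy batch (a throwaway sketch with made-up token ids):

```py
>>> toy_batch = {"input_ids": [list(range(100)), list(range(100, 300))]}
>>> chunks = group_texts(toy_batch)
>>> [len(chunk) for chunk in chunks["input_ids"]]
[128, 128]
```

The 300 concatenated tokens yield two chunks of `block_size`; the 44-token remainder is dropped, as described above.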
次ã«ã[`DataCollatorForLanguageModeling`] ã䜿çšããŠãµã³ãã«ã®ããããäœæããŸããããŒã¿ã»ããåšäœãæ倧é·ãŸã§ããã£ã³ã°ããã®ã§ã¯ãªããç§åäžã«ãããåã®æãæé·ã®é·ãã«åãããŠ*åçã«ããã£ã³ã°*ããæ¹ãå¹ççã§ãã
<frameworkcontent>
<pt>
ã·ãŒã±ã³ã¹çµäºããŒã¯ã³ãããã£ã³ã° ããŒã¯ã³ãšããŠäœ¿çšãã`mlm=False` ãèšå®ããŸããããã¯ãå¥åã 1 èŠçŽ åå³ã«ã·ããããã©ãã«ãšããŠäœ¿çšããŸãã
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
</pt>
<tf>
ã·ãŒã±ã³ã¹çµäºããŒã¯ã³ãããã£ã³ã° ããŒã¯ã³ãšããŠäœ¿çšãã`mlm=False` ãèšå®ããŸããããã¯ãå¥åã 1 èŠçŽ åå³ã«ã·ããããã©ãã«ãšããŠäœ¿çšããŸãã
```py
>>> from transformers import DataCollatorForLanguageModeling
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ£ããŠããªãå Žåã¯ã[åºæ¬ãã¥ãŒããªã¢ã«](../training#train-with-pytorch-trainer) ãåç§ããŠãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForCausalLM`] ã䜿çšã㊠DistilGPT2 ãããŒãããŸãã
```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããå¯äžã®å¿é ãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã§ãã `push_to_hub=True`ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿èŠããããŸã)ã
2. ãã¬ãŒãã³ã°åŒæ°ãã¢ãã«ãããŒã¿ã»ãããããŒã¿ç§ååšãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_eli5_clm-model",
... eval_strategy="epoch",
... learning_rate=2e-5,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=lm_dataset["train"],
... eval_dataset=lm_dataset["test"],
... data_collator=data_collator,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.evaluate`] ã¡ãœããã䜿çšããŠã¢ãã«ãè©äŸ¡ãããã®è€éããååŸããŸãã
```py
>>> import math
>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```
次ã«ã [`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å±æãã誰ããã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
Keras ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ£ããŠããªãå Žåã¯ã[åºæ¬ãã¥ãŒããªã¢ã«](../training#train-a-tensorflow-model-with-keras) ãã芧ãã ããã
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ããªããã£ãã€ã¶ãŒé¢æ°ãåŠç¿çã¹ã±ãžã¥ãŒã«ãããã³ããã€ãã®ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãŒãã»ããã¢ããããããšããå§ããŸãã
```py
>>> from transformers import create_optimizer, AdamWeightDecay
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
次ã«ã[`TFAutoModelForCausalLM`] ã䜿çšã㊠DistilGPT2 ãããŒãã§ããŸãã
```py
>>> from transformers import TFAutoModelForCausalLM
>>> model = TFAutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
```
[`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> tf_train_set = model.prepare_tf_dataset(
... lm_dataset["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_test_set = model.prepare_tf_dataset(
... lm_dataset["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ã䜿çšããŠãã¬ãŒãã³ã°çšã®ã¢ãã«ãèšå®ããŸãã Transformers ã¢ãã«ã«ã¯ãã¹ãŠããã©ã«ãã®ã¿ã¹ã¯é¢é£ã®æ倱é¢æ°ãããããã次ã®å Žåãé€ããæ倱é¢æ°ãæå®ããå¿èŠã¯ãªãããšã«æ³šæããŠãã ããã
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer) # No loss argument!
```
ããã¯ãã¢ãã«ãšããŒã¯ãã€ã¶ãŒã [`~transformers.PushToHubCallback`] ã§ããã·ã¥ããå Žæãæå®ããããšã§å®è¡ã§ããŸãã
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_eli5_clm-model",
... tokenizer=tokenizer,
... )
```
ã€ãã«ãã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠[`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```
ãã¬ãŒãã³ã°ãå®äºãããšãã¢ãã«ã¯èªåçã«ããã«ã¢ããããŒãããã誰ã§ã䜿çšã§ããããã«ãªããŸãã
</tf>
</frameworkcontent>
<Tip>
å æèšèªã¢ããªã³ã°çšã«ã¢ãã«ã埮調æŽããæ¹æ³ã®ãã詳现ãªäŸã«ã€ããŠã¯ã察å¿ããããã¥ã¡ã³ããåç§ããŠãã ããã
[PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
ãŸã㯠[TensorFlow ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)ã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
ããã¹ããçæããããã³ãããèãåºããŸãã
```py
>>> prompt = "Somatic hypermutation allows the immune system to"
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠããã¹ãçæçšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«ããã¹ããæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
>>> generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
```
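The pipeline forwards generation keyword arguments to the model, so you can also experiment with sampling settings directly (a sketch; these are standard generation parameters):

```py
>>> generator(prompt, do_sample=True, top_k=50, top_p=0.95, max_new_tokens=50)
```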
<frameworkcontent>
<pt>
ããã¹ããããŒã¯ã³åãã`input_ids`ã PyTorch ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="pt").input_ids
```
[`~generation.GenerationMixin.generate`] ã¡ãœããã䜿çšããŠããã¹ããçæããŸãã
ããŸããŸãªããã¹ãçææŠç¥ãšçæãå¶åŸ¡ããããã®ãã©ã¡ãŒã¿ãŒã®è©³çŽ°ã«ã€ããŠã¯ã[ããã¹ãçææŠç¥](../generation_strategies) ããŒãžãåç§ããŠãã ããã
```py
>>> from transformers import AutoModelForCausalLM
>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
```
</pt>
<tf>
ããã¹ããããŒã¯ã³åãã`input_ids`ã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="tf").input_ids
```
[`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ã¡ãœããã䜿çšããŠããã¹ããçæããŸããããŸããŸãªããã¹ãçææŠç¥ãšçæãå¶åŸ¡ããããã®ãã©ã¡ãŒã¿ãŒã®è©³çŽ°ã«ã€ããŠã¯ã[ããã¹ãçææŠç¥](../generation_strategies) ããŒãžãåç§ããŠãã ããã
```py
>>> from transformers import TFAutoModelForCausalLM
>>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```
çæãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image tasks with IDEFICS
[[open-in-colab]]
åå¥ã®ã¿ã¹ã¯ã¯ç¹æ®ãªã¢ãã«ã埮調æŽããããšã§å¯ŸåŠã§ããŸãããå¥ã®ã¢ãããŒããå¯èœã§ãã
æè¿ç»å ŽããŠäººæ°ãåããŠããã®ã¯ã埮調æŽãè¡ããã«ããŸããŸãªã¿ã¹ã¯ã«å€§èŠæš¡ãªã¢ãã«ã䜿çšããããšã§ãã
ããšãã°ã倧èŠæš¡ãªèšèªã¢ãã«ã¯ãèŠçŽã翻蚳ãåé¡ãªã©ã® NLP ã¿ã¹ã¯ãåŠçã§ããŸãã
ãã®ã¢ãããŒãã¯ãããã¹ããªã©ã®åäžã®ã¢ããªãã£ã«éå®ãããªããªããŸããããã®ã¬ã€ãã§ã¯ã次ã®ãããªæ¹æ³ã説æããŸãã
IDEFICS ãšåŒã°ãã倧èŠæš¡ãªãã«ãã¢ãŒãã« ã¢ãã«ã䜿çšããŠãç»åãšããã¹ãã®ã¿ã¹ã¯ã解決ããŸãã
[IDEFICS](../model_doc/idefics) ã¯ã[Flamingo](https://huggingface.co/papers/2204.14198) ã«åºã¥ããªãŒãã³ã¢ã¯ã»ã¹ã®ããžã§ã³ããã³èšèªã¢ãã«ã§ãã
DeepMind ã«ãã£ãŠæåã«éçºãããæå端ã®èŠèŠèšèªã¢ãã«ã§ããã¢ãã«ã¯ä»»æã®ç»åãšããã¹ãã®ã·ãŒã±ã³ã¹ãå¥åãšããŠåãå¥ããåºåãšããŠäžè²«ããããã¹ããçæããŸããç»åã«é¢ãã質åã«çããããèŠèŠçãªã³ã³ãã³ãã«ã€ããŠèª¬æããããè€æ°ã®ã€ã¡ãŒãžã«åºã¥ããã¹ããŒãªãŒãäœæãããããããšãã§ããŸãã IDEFICS ã«ã¯ 2 ã€ã®ããªãšãŒã·ã§ã³ããããŸã - [800 åãã©ã¡ãŒã¿](https://huggingface.co/HuggingFaceM4/idefics-80b) ããã³ [90 åã®ãã©ã¡ãŒã¿](https://huggingface.co/HuggingFaceM4/idefics-9b) ã§ãã©ã¡ãã ð€ Hub ã§å¥æã§ããŸããåããªãšãŒã·ã§ã³ã«ã€ããŠãäŒè©±ã®ãŠãŒã¹ã±ãŒã¹ã«é©å¿ããã现ãã調æŽãããæ瀺ãããããŒãžã§ã³ã®ã¢ãã«ãèŠã€ããããšãã§ããŸãã
ãã®ã¢ãã«ã¯éåžžã«å€çšéã§ãå¹åºãç»åã¿ã¹ã¯ããã«ãã¢ãŒãã« ã¿ã¹ã¯ã«äœ¿çšã§ããŸãããããã倧èŠæš¡ãªã¢ãã«ã§ãããšããããšã¯ã倧éã®èšç®ãªãœãŒã¹ãšã€ã³ãã©ã¹ãã©ã¯ãã£ãå¿èŠã§ããããšãæå³ããŸãããã®ã¢ãããŒããšãåå¥ã®ã¿ã¹ã¯ããšã«ç¹åããã¢ãã«ã埮調æŽããã¢ãããŒãã®ã©ã¡ãããŠãŒã¹ã±ãŒã¹ã«é©ããŠãããã¯ãããªã次第ã§ãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ãåŠç¿ããŸãã
- [IDEFICS ãããŒã](#loading-the-model) ããã³ [ã¢ãã«ã®éååããŒãžã§ã³ãããŒã](#quantized-model)
- IDEFICS ã次ã®ç®çã§äœ¿çšããŸãã
- [ç»åãã£ãã·ã§ã³](#image-captioning)
- [ããã³ããç»åãã£ãã·ã§ã³](#prompted-image-captioning)
- [Few-shot ããã³ãã](#few-shot-prompting)
- [ããžã¥ã¢ã«è³ªååç](#visual-question-answering)
- [ç»ååé¡](#image-classification)
- [ç»åã¬ã€ãä»ãããã¹ãçæ](#image-guided-text-generation)
- [ãããã¢ãŒãã§æšè«ãå®è¡ãã](#running-inference-in-batch-mode)
- [äŒè©±çšã« IDEFICS åœä»€ãå®è¡](#idefics-instruct-for-conversational-use)
å§ããåã«ãå¿èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q bitsandbytes sentencepiece accelerate transformers
```
<Tip>
éååãããŠããªãããŒãžã§ã³ã®ã¢ãã« ãã§ãã¯ãã€ã³ãã䜿çšããŠæ¬¡ã®äŸãå®è¡ããã«ã¯ãå°ãªããšã 20GB ã® GPU ã¡ã¢ãªãå¿èŠã§ãã
</Tip>
## Loading the model
ãŸãã¯ã¢ãã«ã® 90 ååã®ãã©ã¡ãŒã¿ãŒã®ãã§ãã¯ãã€ã³ããããŒãããŸãããã
```py
>>> checkpoint = "HuggingFaceM4/idefics-9b"
```
ä»ã® Transformers ã¢ãã«ãšåæ§ã«ãããã»ããµãšã¢ãã«èªäœããã§ãã¯ãã€ã³ãããããŒãããå¿èŠããããŸãã
IDEFICS ããã»ããµã¯ã[`LlamaTokenizer`] ãš IDEFICS ç»åããã»ããµãåäžã®ããã»ããµã«ã©ãããã¢ãã«ã®ããã¹ããšç»åã®å¥åãæºåããŸãã
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```
`device_map`ã`auto`ã«èšå®ãããšãæ¢åã®ããã€ã¹ãèæ®ããŠãã¢ãã«ã®éã¿ãæãæé©åãããç¶æã§ããŒãããã³ä¿åããæ¹æ³ãèªåçã«æ±ºå®ãããŸãã
### Quantized model
ãã€ã¡ã¢ãª GPU ã®å¯çšæ§ãåé¡ãšãªãå Žåã¯ãã¢ãã«ã®éååãããããŒãžã§ã³ãããŒãã§ããŸããã¢ãã«ãšããã»ããµã 4 ããã粟床ã§äœ¿çšããã«ã¯ã`BitsAndBytesConfig`ã`from_pretrained`ã¡ãœããã«æž¡ããŸãããããšãããŒãäžã«ãã®å Žã§ã¢ãã«ãå§çž®ãããŸãã
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig
>>> quantization_config = BitsAndBytesConfig(
... load_in_4bit=True,
... bnb_4bit_compute_dtype=torch.float16,
... )
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> model = IdeficsForVisionText2Text.from_pretrained(
... checkpoint,
... quantization_config=quantization_config,
... device_map="auto"
... )
```
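As a quick sanity check on the memory savings, you can print the model's footprint after loading (`get_memory_footprint` is available on Transformers models):

```py
>>> print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```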
ææ¡ãããæ¹æ³ã®ããããã§ã¢ãã«ãããŒãããã®ã§ãIDEFICS ã䜿çšã§ããã¿ã¹ã¯ã®æ¢çŽ¢ã«é²ã¿ãŸãããã
## Image captioning
ç»åã®ãã£ãã·ã§ã³ä»ãã¯ãç¹å®ã®ç»åã®ãã£ãã·ã§ã³ãäºæž¬ããã¿ã¹ã¯ã§ããäžè¬çãªçšéã¯ãèŠèŠé害èãããŸããŸãªç¶æ³ (ããšãã°ããªã³ã©ã€ã³ã§ç»åã³ã³ãã³ããæ¢çŽ¢ããå Žå) ãããã²ãŒãããã®ãæ¯æŽããããšã§ãã
ã¿ã¹ã¯ã説æããã«ã¯ããã£ãã·ã§ã³ãä»ããç»åãååŸããŸããäŸ:
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-im-captioning.jpg" alt="Image of a puppy in a flower bed"/>
</div>
åçæäŸïŒ[Hendo Wang](https://unsplash.com/@hendoo)
IDEFICS ã¯ããã¹ããšç»åã®ããã³ãããåãå¥ããŸãããã ããç»åã«ãã£ãã·ã§ã³ãä»ããããã«ãããã¹ã ããã³ãããã¢ãã«ã«æäŸããå¿èŠã¯ãããŸãããååŠçãããå¥åç»åã®ã¿ãæž¡ããšãã¢ãã«ã¯ BOS (Beginning-of-sequence) ããŒã¯ã³ããããã¹ãã®çæãéå§ãããã£ãã·ã§ã³ãäœæããŸãã
ã¢ãã«ãžã®ç»åå¥åãšããŠãç»åãªããžã§ã¯ã (`PIL.Image`) ãŸãã¯ç»åãååŸã§ãã URL ã®ããããã䜿çšã§ããŸãã
```py
>>> prompt = [
... "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80",
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
A puppy in a flower bed
```
<Tip>
`max_new_tokens`ãå¢ãããšãã«çºçãããšã©ãŒãé¿ããããã«ã`generate`ã®åŒã³åºãã«`bad_words_ids`ãå«ããããšããå§ãããŸããã¢ãã«ã«ãã£ãŠç»åãçæãããŠããªãå Žåãã¢ãã«ã¯æ°ãã `<image>` ãŸã㯠`<fake_token_around_image>` ããŒã¯ã³ãçæããããšããããã§ãã`bad_words_ids`ã¯ããã®ã¬ã€ãã®ããã«ãªã³ã¶ãã©ã€ã§èšå®ããããšãã[ããã¹ãçææŠç¥](../generation_strategies) ã¬ã€ãã§èª¬æãããŠããããã« `GenerationConfig` ã«ä¿åããããšãã§ããŸãã
</Tip>
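One way to persist these settings, as the tip suggests, is a [`GenerationConfig`] (a sketch reusing `bad_words_ids` and `inputs` from the captioning example above):

```py
>>> from transformers import GenerationConfig

>>> generation_config = GenerationConfig(max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_ids = model.generate(**inputs, generation_config=generation_config)
```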
## Prompted image captioning
ããã¹ã ããã³ãããæäŸããããšã§ç»åãã£ãã·ã§ã³ãæ¡åŒµã§ããŸããã¢ãã«ã¯ç»åãšæå®ãããããã¹ããåãåããããã«ç¶ãããã¹ããçæããŸããå¥ã®å³ã§èª¬æããŸãããã
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-prompted-im-captioning.jpg" alt="Image of the Eiffel Tower at night"/>
</div>
åçæäŸïŒ[Denys Nevozhai](https://unsplash.com/@dnevozhai)ã
ããã¹ãããã³ç»åã®ããã³ãããåäžã®ãªã¹ããšããŠã¢ãã«ã®ããã»ããµã«æž¡ããé©åãªå¥åãäœæã§ããŸãã
```py
>>> prompt = [
... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
... "This is an image of ",
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
This is an image of the Eiffel Tower in Paris, France.
```
## Few-shot prompting
IDEFICS ã¯ãŒãã·ã§ããã§åªããçµæã瀺ããŸãããã¿ã¹ã¯ã«ãã£ãŠã¯ãç¹å®ã®åœ¢åŒã®ãã£ãã·ã§ã³ãå¿èŠã«ãªã£ãããã¿ã¹ã¯ã®è€éããå¢å€§ããããã®ä»ã®å¶éãŸãã¯èŠä»¶ããã£ãã·ã§ã³ã«ä»å±ãããããå ŽåããããŸãããã®ãããªå Žåãå°æ°ã®ã·ã§ããã®ããã³ããã䜿çšããŠãã³ã³ããã¹ãåã®åŠç¿ãæå¹ã«ããããšãã§ããŸããããã³ããã«äŸãæå®ããããšã§ãæå®ãããäŸã®åœ¢åŒãæš¡å£ããçµæãçæããããã«ã¢ãã«ãæäœã§ããŸãã
åã®ãšããã§ã«å¡ã®ç»åãã¢ãã«ã®äŸãšããŠäœ¿çšããã¢ãã«ã«ãã¢ã³ã¹ãã¬ãŒã·ã§ã³ããããã³ãããäœæããŠã¿ãŸããããç»ååã®ãªããžã§ã¯ããäœã§ããããç¥ãããšã«å ããŠãããã«é¢ããèå³æ·±ãæå ±ãååŸããããšèããŠããŸãã
次ã«ãèªç±ã®å¥³ç¥ã®ç»åã«å¯ŸããŠåãå¿ç圢åŒãååŸã§ãããã©ãããèŠãŠã¿ãŸãããã
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg" alt="Image of the Statue of Liberty"/>
</div>
åçæäŸïŒ[Juan Mayobre](https://unsplash.com/@jmayobres)ã
```py
>>> prompt = ["User:",
... "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
... "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n",
... "User:",
... "https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80",
... "Describe this image.\nAssistant:"
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
User: Describe this image.
Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.
User: Describe this image.
Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall.
```
ã¢ãã«ã¯ 1 ã€ã®äŸ (ã€ãŸãã1 ã·ã§ãã) ã ãããã¿ã¹ã¯ã®å®è¡æ¹æ³ãåŠç¿ããŠããããšã«æ³šç®ããŠãã ãããããè€éãªã¿ã¹ã¯ã®å Žåã¯ã
ããå€ãã®äŸ (3 ã·ã§ããã5 ã·ã§ãããªã©) ãèªç±ã«è©ŠããŠã¿ãŠãã ããã
## Visual question answering
Visual Question Answering (VQA) ã¯ãç»åã«åºã¥ããŠèªç±åœ¢åŒã®è³ªåã«çããã¿ã¹ã¯ã§ããç»åãã£ãã·ã§ã³ãšåæ§ã«ãã¢ã¯ã»ã·ããªã㣠ã¢ããªã±ãŒã·ã§ã³ã ãã§ãªããæè² (èŠèŠè³æã«ã€ããŠã®æšè«)ãã«ã¹ã¿ã㌠ãµãŒãã¹ (ç»åãåºã«ããååã«é¢ãã質å)ãç»åæ€çŽ¢ãªã©ã«ã䜿çšã§ããŸãã
ãã®ã¿ã¹ã¯çšã«æ°ããç»åãååŸããŸãããã
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-vqa.jpg" alt="Image of a couple having a picnic"/>
</div>
åçæäŸ [Jarritos Mexican Soda](https://unsplash.com/@jarritos).
é©åãªæ瀺ãããã³ããããããšã§ãã¢ãã«ãç»åãã£ãã·ã§ã³ããèŠèŠçãªè³ªåãžã®å¿çã«å°ãããšãã§ããŸãã
```py
>>> prompt = [
... "Instruction: Provide an answer to the question. Use the image to answer.\n",
... "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "Question: Where are these people and what's the weather like? Answer:"
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Provide an answer to the question. Use the image to answer.
Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day.
```
## Image classification
IDEFICS ã¯ãç¹å®ã®ã«ããŽãªã®ã©ãã«ä»ãã®äŸãå«ãããŒã¿ã§æ瀺çã«ãã¬ãŒãã³ã°ããªããŠããç»åãããŸããŸãªã«ããŽãªã«åé¡ã§ããŸããã«ããŽãªã®ãªã¹ããæå®ãããã®ç»åãšããã¹ãã®ç解æ©èœãå©çšãããšãã¢ãã«ã¯ç»åãã©ã®ã«ããŽãªã«å±ããå¯èœæ§ãé«ãããæšæž¬ã§ããŸãã
ããšãã°ã次ã®ãããªéèã¹ã¿ã³ãã®ç»åããããšããŸãã
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-classification.jpg" alt="Image of a vegetable stand"/>
</div>
åçæäŸïŒ[Peter Wendt](https://unsplash.com/@peterwendt)ã
ç»åã次ã®ããããã®ã«ããŽãªã«åé¡ããããã«ã¢ãã«ã«æ瀺ã§ããŸãã
```py
>>> categories = ['animals','vegetables', 'city landscape', 'cars', 'office']
>>> prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n",
... "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "Category: "
... ]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=6, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office'].
Category: Vegetables
```
äžã®äŸã§ã¯ãç»åã 1 ã€ã®ã«ããŽãªã«åé¡ããããã«ã¢ãã«ã«æ瀺ããŠããŸããã以äžã«ç€ºãããã«ãã©ã³ã¯åé¡ãè¡ãããã«ã¢ãã«ã«æ瀺ããããšãã§ããŸãã
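A ranking variant of the prompt could look like this (a sketch; the exact instruction wording is up to you and not prescribed by the model):

```py
>>> prompt = [f"Instruction: Rank the categories in the following list from most to least fitting for the image: {categories}.\n",
...     "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
...     "Ranking: "
... ]

>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```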
## Image-guided text generation
ããã¯ãªãšã€ãã£ããªã¢ããªã±ãŒã·ã§ã³ã®å Žåã¯ãç»åã¬ã€ãä»ãããã¹ãçæã䜿çšããŠãç»åã«åºã¥ããŠããã¹ããçæã§ããŸããããã¯ã補åãåºåãã·ãŒã³ã®èª¬æãªã©ãäœæããã®ã«åœ¹ç«ã¡ãŸãã
IDEFICS ã«ãèµ€ããã¢ã®åçŽãªç»åã«åºã¥ããŠã¹ããŒãªãŒãæžãããã«ä¿ããŠã¿ãŸãããã
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-story-generation.jpg" alt="Image of a red door with a pumpkin on the steps"/>
</div>
åçæäŸïŒ[Craig Tidball](https://unsplash.com/@devonshiremedia)ã
```py
>>> prompt = ["Instruction: Use the image to write a story. \n",
... "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80",
... "Story: \n"]
>>> inputs = processor(prompt, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> print(generated_text[0])
Instruction: Use the image to write a story.
Story:
Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world.
One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat.
The little girl ran inside and told her mother about the man.
Her mother said, âDonât worry, honey. Heâs just a friendly ghost.â
The little girl wasnât sure if she believed her mother, but she went outside anyway.
When she got to the door, the man was gone.
The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep.
He was wearing a long black coat and a top hat.
The little girl ran
```
IDEFICS ã¯çé¢åã«ããã«ããã£ã«æ°ã¥ãã幜éã«é¢ããäžæ°å³ãªãããŠã£ãŒã³ã®è©±ãããããã§ãã
<Tip>
ãã®ãããªé·ãåºåã®å Žåãããã¹ãçææŠç¥ã埮調æŽãããšå€§ããªã¡ãªãããåŸãããçæãããåºåã®å質ã倧å¹ã«åäžããŸãã詳ããã¯ [ããã¹ãçææŠç¥](../generation_strategies) ã確èªããŠãã ããã
</Tip>
## Running inference in batch mode
ãããŸã§ã®ãã¹ãŠã®ã»ã¯ã·ã§ã³ã§ã¯ãIDEFICS ãåäžã®äŸã«å¯ŸããŠäœ¿çšããŸãããããã³ããã®ãªã¹ããæž¡ãããšã§ãéåžžã«äŒŒãæ¹æ³ã§ãµã³ãã«ã®ãããã«å¯ŸããŠæšè«ãå®è¡ã§ããŸãã
```py
>>> prompts = [
... [ "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80",
... "This is an image of ",
... ],
... [ "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "This is an image of ",
... ],
... [ "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80",
... "This is an image of ",
... ],
... ]
>>> inputs = processor(prompts, return_tensors="pt").to("cuda")
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i,t in enumerate(generated_text):
... print(f"{i}:\n{t}\n")
0:
This is an image of the Eiffel Tower in Paris, France.
1:
This is an image of a couple on a picnic blanket.
2:
This is an image of a vegetable stand.
```
## IDEFICS instruct for conversational use
äŒè©±åã®ãŠãŒã¹ã±ãŒã¹ã®å Žåã¯ãð€ Hub ã§ã¢ãã«ã®åŸ®èª¿æŽãããæ瀺ãããããŒãžã§ã³ã§ãã `HuggingFaceM4/idefics-80b-instruct` ããã³ `HuggingFaceM4/idefics-9b-instruct` ãèŠã€ããããšãã§ããŸãããããã®ãã§ãã¯ãã€ã³ãã¯ãæåž«ããããŒã¿ã»ãããšåœä»€ããŒã¿ã»ãããçµã¿åãããŠããããã®åºæ¬ã¢ãã«ã埮調æŽããçµæã§ããããŠã³ã¹ããªãŒã ã®ããã©ãŒãã³ã¹ãåäžãããªãããäŒè©±èšå®ã§ã¢ãã«ããã䜿ããããããŸãã
äŒè©±ã§ã®äœ¿çšãšããã³ããã¯ãåºæ¬ã¢ãã«ã®äœ¿çšãšéåžžã«äŒŒãŠããŸãã
```py
>>> import torch
>>> from transformers import IdeficsForVisionText2Text, AutoProcessor
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> checkpoint = "HuggingFaceM4/idefics-9b-instruct"
>>> model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
>>> prompts = [
... [
... "User: What is in this image?",
... "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
... "<end_of_utterance>",
... "\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>",
... "\nUser:",
... "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052",
... "And who is that?<end_of_utterance>",
... "\nAssistant:",
... ],
... ]
>>> # --batched mode
>>> inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)
>>> # --single sample mode
>>> # inputs = processor(prompts[0], return_tensors="pt").to(device)
>>> # Generation args
>>> exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids
>>> bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
>>> generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
>>> for i, t in enumerate(generated_text):
... print(f"{i}:\n{t}\n")
```
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Knowledge Distillation for Computer Vision
[[open-in-colab]]
ç¥èã®èžçã¯ããã倧èŠæš¡ã§è€éãªã¢ãã« (æåž«) ããããå°èŠæš¡ã§åçŽãªã¢ãã« (çåŸ) ã«ç¥èãäŒéããããã«äœ¿çšãããææ³ã§ããããã¢ãã«ããå¥ã®ã¢ãã«ã«ç¥èãæœåºããã«ã¯ãç¹å®ã®ã¿ã¹ã¯ (ãã®å Žåã¯ç»ååé¡) ã§ãã¬ãŒãã³ã°ãããäºåãã¬ãŒãã³ã°æžã¿æåž«ã¢ãã«ãååŸããç»ååé¡ã§ãã¬ãŒãã³ã°ãããçåŸã¢ãã«ãã©ã³ãã ã«åæåããŸãã次ã«ãåŠçã¢ãã«ããã¬ãŒãã³ã°ããŠããã®åºåãšæåž«ã®åºåã®å·®ãæå°éã«æããåäœãæš¡å£ããŸãããã㯠[Distilling the Knowledge in a Neural Network by Hinton et al](https://arxiv.org/abs/1503.02531) ã§æåã«å°å¥ãããŸããããã®ã¬ã€ãã§ã¯ãã¿ã¹ã¯åºæã®ç¥èã®èžçãè¡ããŸããããã«ã¯ [Beans ããŒã¿ã»ãã](https://huggingface.co/datasets/beans) ã䜿çšããŸãã
ãã®ã¬ã€ãã§ã¯ã[埮調æŽããã ViT ã¢ãã«](https://huggingface.co/merve/vit-mobilenet-beans-224) (æåž«ã¢ãã«) ãæœåºã㊠[MobileNet](https://huggingface.co/google/mobilenet_v2_1.4_224) (åŠçã¢ãã«) ð€ Transformers ã® [Trainer API](https://huggingface.co/docs/transformers/en/main_classes/trainer#trainer) ã䜿çšããŸãã
èžçãšããã»ã¹ã®è©äŸ¡ã«å¿èŠãªã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããŸãããã
```bash
pip install transformers datasets accelerate tensorboard evaluate --upgrade
```
ãã®äŸã§ã¯ãæåž«ã¢ãã«ãšããŠ`merve/beans-vit-224`ã¢ãã«ã䜿çšããŠããŸããããã¯ãBean ããŒã¿ã»ããã«åºã¥ããŠåŸ®èª¿æŽããã`google/vit-base-patch16-224-in21k`ã«åºã¥ãç»ååé¡ã¢ãã«ã§ãããã®ã¢ãã«ãã©ã³ãã ã«åæåããã MobileNetV2 ã«æœåºããŸãã
次ã«ãããŒã¿ã»ãããããŒãããŸãã
```python
from datasets import load_dataset
dataset = load_dataset("beans")
```
ãã®å Žåãåã解å床ã§åãåºåãè¿ããããããã©ã¡ãã®ã¢ãã«ã®ç»åããã»ããµã䜿çšã§ããŸãã `dataset`ã®`map()`ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã®ãã¹ãŠã®åå²ã«ååŠçãé©çšããŸãã
```python
from transformers import AutoImageProcessor
teacher_processor = AutoImageProcessor.from_pretrained("merve/beans-vit-224")
def process(examples):
processed_inputs = teacher_processor(examples["image"])
return processed_inputs
processed_datasets = dataset.map(process, batched=True)
```
åºæ¬çã«ãæãã¯çåŸã¢ãã«ïŒã©ã³ãã ã«åæåãããMobileNetïŒãæåž«ã¢ãã«ïŒåŸ®èª¿æŽãããããžã§ã³å€æåšïŒãæš¡å£ããããšãæãããããå®çŸããããã«ããŸãæåž«ãšçåŸããããžããåºåãåŸãŸãã次ã«ãããããããœããã¿ãŒã²ããã®éèŠåºŠãå¶åŸ¡ãããã©ã¡ãŒã¿`temperature`ã§åå²ããŸãã`lambda`ãšåŒã°ãããã©ã¡ãŒã¿ã¯èžçãã¹ã®éèŠåºŠãéããŸãããã®äŸã§ã¯ã`temperature=5`ã`lambda=0.5`ãšããŸããçåŸãšæåž«ã®éã®çºæ£ãèšç®ããããã«ãKullback-Leibler çºæ£æ倱ã䜿çšããŸãã2 ã€ã®ããŒã¿ P ãš Q ãäžãããããšããKL ãã€ããŒãžã§ã³ã¹ã¯ãQ ã䜿ã£ãŠ P ãè¡šçŸããããã«ã©ãã ãã®äœåãªæå ±ãå¿èŠãã説æããŸãã2 ã€ãåãã§ããã°ãQ ãã P ã説æããããã«å¿èŠãªä»ã®æå ±ã¯ãªãã®ã§ãKL ãã€ããŒãžã§ã³ã¹ã¯ãŒãã«ãªããŸãã
```python
from transformers import TrainingArguments, Trainer
import torch
import torch.nn as nn
import torch.nn.functional as F
class ImageDistilTrainer(Trainer):
    def __init__(self, teacher_model=None, student_model=None, temperature=None, lambda_param=None, *args, **kwargs):
        # the student is the model the Trainer actually optimizes
        super().__init__(model=student_model, *args, **kwargs)
        self.teacher = teacher_model
        self.student = student_model
self.loss_function = nn.KLDivLoss(reduction="batchmean")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.teacher.to(device)
self.teacher.eval()
self.temperature = temperature
self.lambda_param = lambda_param
def compute_loss(self, student, inputs, return_outputs=False):
student_output = self.student(**inputs)
with torch.no_grad():
teacher_output = self.teacher(**inputs)
# Compute soft targets for teacher and student
soft_teacher = F.softmax(teacher_output.logits / self.temperature, dim=-1)
soft_student = F.log_softmax(student_output.logits / self.temperature, dim=-1)
# Compute the loss
distillation_loss = self.loss_function(soft_student, soft_teacher) * (self.temperature ** 2)
# Compute the true label loss
student_target_loss = student_output.loss
# Calculate final loss
loss = (1. - self.lambda_param) * student_target_loss + self.lambda_param * distillation_loss
return (loss, student_output) if return_outputs else loss
```
次ã«ãHugging Face Hub ã«ãã°ã€ã³ããŠã`trainer`ãéããŠã¢ãã«ã Hugging Face Hub ã«ããã·ã¥ã§ããããã«ããŸãã
```python
from huggingface_hub import notebook_login
notebook_login()
```
æåž«ã¢ãã«ãšçåŸã¢ãã«ã§ãã`TrainingArguments`ãèšå®ããŸãããã
```python
from transformers import AutoModelForImageClassification, MobileNetV2Config, MobileNetV2ForImageClassification
repo_name = "my-awesome-model"  # ããã·ã¥å
ã® Hub ãªããžããªå (ä»»æã®ååã説æçšã®ä»®å®)
training_args = TrainingArguments(
output_dir="my-awesome-model",
num_train_epochs=30,
fp16=True,
logging_dir=f"{repo_name}/logs",
logging_strategy="epoch",
eval_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="accuracy",
report_to="tensorboard",
push_to_hub=True,
hub_strategy="every_save",
hub_model_id=repo_name,
)
num_labels = len(processed_datasets["train"].features["labels"].names)
# initialize models
teacher_model = AutoModelForImageClassification.from_pretrained(
"merve/beans-vit-224",
num_labels=num_labels,
ignore_mismatched_sizes=True
)
# training MobileNetV2 from scratch
student_config = MobileNetV2Config()
student_config.num_labels = num_labels
student_model = MobileNetV2ForImageClassification(student_config)
```
`compute_metrics` é¢æ°ã䜿çšããŠããã¹ã ã»ããã§ã¢ãã«ãè©äŸ¡ã§ããŸãããã®é¢æ°ã¯ããã¬ãŒãã³ã° ããã»ã¹äžã«ã¢ãã«ã®`accuracy`ãèšç®ããããã«äœ¿çšãããŸãã
```python
import evaluate
import numpy as np
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions, labels = eval_pred
acc = accuracy.compute(references=labels, predictions=np.argmax(predictions, axis=1))
return {"accuracy": acc["accuracy"]}
```
å®çŸ©ãããã¬ãŒãã³ã°åŒæ°ã䜿çšããŠ`Trainer`ãåæåããŸããããããŒã¿ç
§ååšãåæåããŸãã
```python
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
student_model=student_model,
teacher_model=teacher_model,
    args=training_args,
train_dataset=processed_datasets["train"],
eval_dataset=processed_datasets["validation"],
data_collator=data_collator,
    tokenizer=teacher_processor,
compute_metrics=compute_metrics,
temperature=5,
lambda_param=0.5
)
```
ããã§ã¢ãã«ããã¬ãŒãã³ã°ã§ããããã«ãªããŸããã
```python
trainer.train()
```
ãã¹ã ã»ããã§ã¢ãã«ãè©äŸ¡ã§ããŸãã
```python
trainer.evaluate(processed_datasets["test"])
```
ãã¹ã ã»ããã§ã¯ãã¢ãã«ã®ç²ŸåºŠã¯ 72% ã«éããŸããèžçå¹æã®å¥å
šæ§ãã§ãã¯ãšããŠãåããã€ããŒãã©ã¡ãŒã¿ã䜿çšã㊠Beans ããŒã¿ã»ãã㧠MobileNet ãæåãããã¬ãŒãã³ã°ãããšããããã¹ã ã»ããã§ã®ç²ŸåºŠã¯ 63% ã§ãããèªè
ã®çæ§ã«ã¯ãããŸããŸãªäºåãã¬ãŒãã³ã°æžã¿æåž«ã¢ãã«ãåŠçã¢ãŒããã¯ãã£ãèžçãã©ã¡ãŒã¿ãè©ŠããŠããã ãããã®çµæãå ±åããŠããã ããããå§ãããŸããèžçãããã¢ãã«ã®ãã¬ãŒãã³ã° ãã°ãšãã§ãã¯ãã€ã³ã㯠[ãã®ãªããžããª](https://huggingface.co/merve/vit-mobilenet-beans-224) ã«ãããæåãããã¬ãŒãã³ã°ããã MobileNetV2 㯠[ãã¡ãã®ãªããžããª](https://huggingface.co/merve/resnet-mobilenet-beans-5) ã«ãããŸãã
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image classification
[[open-in-colab]]
<Youtube id="tjAIM7BOYhw"/>
ç»ååé¡ã§ã¯ãç»åã«ã©ãã«ãŸãã¯ã¯ã©ã¹ãå²ãåœãŠãŸããããã¹ããé³å£°ã®åé¡ãšã¯ç°ãªããå
¥åã¯ç»åãæ§æãããã¯ã»ã«å€ã§ããç»ååé¡ã«ã¯ãèªç¶çœå®³åŸã®æå·ã®æ€åºãäœç©ã®å¥åº·ç¶æ
ã®ç£èŠãç
æ°ã®å
åããªããå»çç»åãã¹ã¯ãªãŒãã³ã°ããéã®æ¯æŽãªã©ãå€ãã®çšéããããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [Food-101](https://huggingface.co/datasets/food101) ããŒã¿ã»ãã㧠[ViT](model_doc/vit) ã埮調æŽããŠãç»åå
ã®é£åãåé¡ããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/image-classification) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install transformers datasets evaluate
```
Hugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããŠãã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load Food-101 dataset
ãŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªãã Food-101 ããŒã¿ã»ããã®å°ãããµãã»ãããèªã¿èŸŒã¿ãŸããããã«ãããå®å
šãªããŒã¿ã»ããã®ãã¬ãŒãã³ã°ã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> from datasets import load_dataset
>>> food = load_dataset("food101", split="train[:5000]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> food = food.train_test_split(test_size=0.2)
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> food["train"][0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>,
'label': 79}
```
ããŒã¿ã»ããå
ã®åäŸã«ã¯ 2 ã€ã®ãã£ãŒã«ãããããŸãã
- `image`: é£åã® PIL ç»å
- `label`: é£åã®ã©ãã«ã¯ã©ã¹
ã¢ãã«ãã©ãã« ID ããã©ãã«åãååŸãããããããããã«ãã©ãã«åãšæŽæ°ãçžäºã«å€æããèŸæžãäœæããŸãã
```py
>>> labels = food["train"].features["label"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
... label2id[label] = str(i)
... id2label[str(i)] = label
```
ããã§ãã©ãã« ID ãã©ãã«åã«å€æã§ããããã«ãªããŸããã
```py
>>> id2label[str(79)]
'prime_rib'
```
## Preprocess
次ã®ã¹ãããã§ã¯ãViT ç»åããã»ããµãããŒãããŠç»åããã³ãœã«ã«åŠçããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "google/vit-base-patch16-224-in21k"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```
<frameworkcontent>
<pt>
ããã€ãã®ç»åå€æãç»åã«é©çšããŠãã¢ãã«ã®éåŠç¿ã«å¯Ÿããå
ç¢æ§ãé«ããŸããããã§ã¯ torchvision ã® [`transforms`](https://pytorch.org/vision/stable/transforms.html) ã¢ãžã¥ãŒã«ã䜿çšããŸãããä»»æã®ç»åã©ã€ãã©ãªã䜿çšããããšãã§ããŸãã
ç»åã®ã©ã³ãã ãªéšåãããªãã³ã°ãããµã€ãºãå€æŽããç»åã®å¹³åãšæšæºåå·®ã§æ£èŠåããŸãã
```py
>>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor
>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> size = (
... image_processor.size["shortest_edge"]
... if "shortest_edge" in image_processor.size
... else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
```
次ã«ãå€æãé©çšããç»åã® `pixel_values` (ã¢ãã«ãžã®å
¥å) ãè¿ãååŠçé¢æ°ãäœæããŸãã
```py
>>> def transforms(examples):
... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
... del examples["image"]
... return examples
```
ããŒã¿ã»ããå
šäœã«ååŠçé¢æ°ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.with_transform`] ã¡ãœããã䜿çšããŸããå€æã¯ãããŒã¿ã»ããã®èŠçŽ ãèªã¿èŸŒããšãã«ãªã³ã¶ãã©ã€ã§é©çšãããŸãã
```py
>>> food = food.with_transform(transforms)
```
次ã«ã[`DefaultDataCollator`] ã䜿çšããŠãµã³ãã«ã®ããããäœæããŸãã ð€ Transformers ã®ä»ã®ããŒã¿ç
§ååšãšã¯ç°ãªãã`DefaultDataCollator` ã¯ããã£ã³ã°ãªã©ã®è¿œå ã®ååŠçãé©çšããŸããã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
éå°é©åãåé¿ããŠã¢ãã«ãããå
ç¢ã«ããããã«ãããŒã¿ã»ããã®ãã¬ãŒãã³ã°éšåã«ããŒã¿æ¡åŒµãè¿œå ããŸããããã§ã¯ãKeras ååŠçã¬ã€ã€ãŒã䜿çšããŠããã¬ãŒãã³ã° ããŒã¿ã®å€æ (ããŒã¿æ¡åŒµãå«ã) ãšæ€èšŒããŒã¿ã®å€æ (äžå€®ã®ããªãã³ã°ããµã€ãºå€æŽãæ£èŠåã®ã¿) ãå®çŸ©ããŸãã`tf.image` ãŸãã¯ä»ã®ã©ã€ãã©ãªã§ãæ§ããŸããã
```py
>>> from tensorflow import keras
>>> from tensorflow.keras import layers
>>> size = (image_processor.size["height"], image_processor.size["width"])
>>> train_data_augmentation = keras.Sequential(
... [
... layers.RandomCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... layers.RandomFlip("horizontal"),
... layers.RandomRotation(factor=0.02),
... layers.RandomZoom(height_factor=0.2, width_factor=0.2),
... ],
... name="train_data_augmentation",
... )
>>> val_data_augmentation = keras.Sequential(
... [
... layers.CenterCrop(size[0], size[1]),
... layers.Rescaling(scale=1.0 / 127.5, offset=-1),
... ],
... name="val_data_augmentation",
... )
```
次ã«ãäžåºŠã« 1 ã€ã®ç»åã§ã¯ãªããç»åã®ãããã«é©åãªå€æãé©çšããé¢æ°ãäœæããŸãã
```py
>>> import numpy as np
>>> import tensorflow as tf
>>> from PIL import Image
>>> def convert_to_tf_tensor(image: Image):
... np_image = np.array(image)
... tf_image = tf.convert_to_tensor(np_image)
... # `expand_dims()` is used to add a batch dimension since
... # the TF augmentation layers operates on batched inputs.
... return tf.expand_dims(tf_image, 0)
>>> def preprocess_train(example_batch):
... """Apply train_transforms across a batch."""
... images = [
... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch
... def preprocess_val(example_batch):
... """Apply val_transforms across a batch."""
... images = [
... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
... ]
... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
... return example_batch
```
ð€ ããŒã¿ã»ãã [`~datasets.Dataset.set_transform`] ã䜿çšããŠããã®å Žã§å€æãé©çšããŸãã
```py
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
```
æåŸã®ååŠçã¹ããããšããŠã`DefaultDataCollator`ã䜿çšããŠãµã³ãã«ã®ããããäœæããŸãã ð€ Transformers ã®ä»ã®ããŒã¿ç
§ååšãšã¯ç°ãªãã`DefaultDataCollator` ã¯ããã£ã³ã°ãªã©ã®è¿œå ã®ååŠçãé©çšããŸããã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšã¯ãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸãã ð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšããŠãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ã¡ããªã¯ã¹ãããŒãããŸã (ã¡ããªã¯ã¹ã®ããŒããšèšç®æ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãð€ Evaluate ã® [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç
§ããŠãã ãã)ã
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ããŠç²ŸåºŠãèšç®ããé¢æ°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
ãã㧠`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãèšå®ãããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#train-with-pytorch-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForImageClassification`] ã䜿çšã㊠ViT ãããŒãããŸããã©ãã«ã®æ°ãšäºæ³ãããã©ãã«ã®æ°ãããã³ã©ãã« ãããã³ã°ãæå®ããŸãã
```py
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
>>> model = AutoModelForImageClassification.from_pretrained(
... checkpoint,
... num_labels=len(labels),
... id2label=id2label,
... label2id=label2id,
... )
```
ãã®æç¹ã§æ®ã£ãŠããã¹ããã㯠3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããæªäœ¿çšã®åãåé€ããªãããšãéèŠã§ããåé€ãããš `image` åããªããªãã`image` åããªããš `pixel_values` ãäœæã§ããŸããããã®åäœãé²ãã«ã¯ã`remove_unused_columns=False`ãèšå®ããŠãã ãããä»ã«å¿
èŠãªãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã ãã§ãã `push_to_hub=True`ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] ã¯ç²ŸåºŠãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç
§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_food_model",
... remove_unused_columns=False,
... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=food["train"],
... eval_dataset=food["test"],
... tokenizer=image_processor,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å
±æãã誰ã§ãã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
Keras ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ããŸã [åºæ¬ãã¥ãŒããªã¢ã«](./training#train-a-tensorflow-model-with-keras) ã確èªããŠãã ããã
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ã次ã®æé ã«åŸããŸãã
1. ãã¬ãŒãã³ã°ã®ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ãããªããã£ãã€ã¶ãŒãšåŠç¿çã¹ã±ãžã¥ãŒã«ãèšå®ããŸãã
2. äºåãã¬ãŒãã³ã°ãããã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããŸãã
3. ð€ ããŒã¿ã»ããã `tf.data.Dataset` ã«å€æããŸãã
4. ã¢ãã«ãã³ã³ãã€ã«ããŸãã
5. ã³ãŒã«ããã¯ãè¿œå ãã`fit()` ã¡ãœããã䜿çšããŠãã¬ãŒãã³ã°ãå®è¡ããŸãã
6. ã¢ãã«ã ð€ Hub ã«ã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æããŸãã
ãŸãããã€ããŒãã©ã¡ãŒã¿ãŒããªããã£ãã€ã¶ãŒãåŠç¿çã¹ã±ãžã¥ãŒã«ãå®çŸ©ããŸãã
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_epochs = 5
>>> num_train_steps = len(food["train"]) * num_epochs
>>> learning_rate = 3e-5
>>> weight_decay_rate = 0.01
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
次ã«ãã©ãã« ãããã³ã°ãšãšãã« [`TFAutoModelForImageClassification`] ã䜿çšã㊠ViT ãèªã¿èŸŒã¿ãŸãã
```py
>>> from transformers import TFAutoModelForImageClassification
>>> model = TFAutoModelForImageClassification.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
```
Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Dataset.to_tf_dataset`] and your `data_collator`:
```py
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
... columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```
`compile()` ã䜿çšããŠãã¬ãŒãã³ã°çšã«ã¢ãã«ãèšå®ããŸãã
```py
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy
>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```
äºæž¬ãã粟床ãèšç®ããã¢ãã«ã ð€ ããã«ããã·ã¥ããã«ã¯ã[Keras callbacks](../main_classes/keras_callbacks) ã䜿çšããŸãã
`compute_metrics` é¢æ°ã [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback) ã«æž¡ããŸãã
[PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback) ã䜿çšããŠã¢ãã«ãã¢ããããŒãããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="food_classifier",
... tokenizer=image_processor,
... save_strategy="no",
... )
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠`fit()` ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```
ããã§ãšãïŒã¢ãã«ã埮調æŽããð€ Hub ã§å
±æããŸãããããã§æšè«ã«äœ¿çšã§ããããã«ãªããŸããã
</tf>
</frameworkcontent>
<Tip>
ç»ååé¡çšã®ã¢ãã«ã埮調æŽããæ¹æ³ã®è©³çŽ°ãªäŸã«ã€ããŠã¯ã察å¿ãã [PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ãåç
§ããŠãã ããã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ãå®è¡ãããç»åãèªã¿èŸŒã¿ãŸãã
```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> image = ds["image"][0]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/>
</div>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠç»ååé¡çšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> classifier = pipeline("image-classification", model="my_awesome_food_model")
>>> classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
{'score': 0.015232225880026817, 'label': 'bruschetta'},
{'score': 0.01519392803311348, 'label': 'chicken_wings'},
{'score': 0.013022331520915031, 'label': 'pork_chop'},
{'score': 0.012728818692266941, 'label': 'prime_rib'}]
```
å¿
èŠã«å¿ããŠã`pipeline`ã®çµæãæåã§è€è£œããããšãã§ããŸãã
<frameworkcontent>
<pt>
ç»åããã»ããµãããŒãããŠç»åãååŠçãã`input`ã PyTorch ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> import torch
>>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
>>> inputs = image_processor(image, return_tensors="pt")
```
å
¥åãã¢ãã«ã«æž¡ããããžãããè¿ããŸãã
```py
>>> from transformers import AutoModelForImageClassification
>>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ãããã©ãã«ãååŸããã¢ãã«ã® `id2label` ãããã³ã°ã䜿çšããŠã©ãã«ã«å€æããŸãã
```py
>>> predicted_label = logits.argmax(-1).item()
>>> model.config.id2label[predicted_label]
'beignets'
```
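äžäœã¯ã©ã¹ã確çä»ãã§ç¢ºèªãããå Žåã¯ãããžããã«ãœããããã¯ã¹ãé©çšããŸãã以äžã¯åè¿°ã® `logits` ãš `model` ããã®ãŸãŸäœ¿ãä»»æã®è¿œå äŸã§ãã
```py
>>> probs = torch.softmax(logits, dim=-1)[0]  # ããžããã確çååžã«å€æ
>>> top5 = torch.topk(probs, k=5)
>>> [(model.config.id2label[i.item()], round(p.item(), 4)) for p, i in zip(top5.values, top5.indices)]
```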
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ç»åããã»ããµãããŒãããŠç»åãååŠçãã`input`ã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
>>> inputs = image_processor(image, return_tensors="tf")
```
å
¥åãã¢ãã«ã«æž¡ããããžãããè¿ããŸãã
```py
>>> from transformers import TFAutoModelForImageClassification
>>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
>>> logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ãããã©ãã«ãååŸããã¢ãã«ã® `id2label` ãããã³ã°ã䜿çšããŠã©ãã«ã«å€æããŸãã
```py
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'beignets'
```
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Multiple choice
[[open-in-colab]]
å€è¢éžæã¿ã¹ã¯ã¯è³ªåå¿çã«äŒŒãŠããŸãããããã€ãã®åè£ã®åçãã³ã³ããã¹ããšãšãã«æäŸãããæ£ããåçãéžæããããã«ã¢ãã«ããã¬ãŒãã³ã°ãããç¹ãç°ãªããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [SWAG](https://huggingface.co/datasets/swag) ããŒã¿ã»ããã®ãéåžžãæ§æ㧠[BERT](https://huggingface.co/google-bert/bert-base-uncased) ã埮調æŽããè€æ°ã®éžæè¢ãšã³ã³ããã¹ããäžãããããšãã«æé©ãªåçãéžæã§ããããã«ããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install transformers datasets evaluate
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SWAG dataset
ãŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªãã SWAG ããŒã¿ã»ããã®ãéåžžãæ§æãããŒãããŸãã
```py
>>> from datasets import load_dataset
>>> swag = load_dataset("swag", "regular")
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
```
ããã«ã¯ããããã®ãã£ãŒã«ããããããã«èŠããŸãããå®éã¯éåžžã«ç°¡åã§ãã
- `sent1` ãš `sent2`: ãããã®ãã£ãŒã«ãã¯æã®å§ãŸãã瀺ãããã® 2 ã€ãçµã¿åããããš `startphrase` ãã£ãŒã«ããåŸãããŸãã
- `ending0`ã`ending1`ã`ending2`ã`ending3`: æã®çµããæ¹ãšããŠèããããåè£ã瀺åããŸãããæ£ããã®ã¯ 1 ã€ã ãã§ãã
- `label`: æ£ããæã®çµãããèå¥ããŸãã
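ããšãã°äžã®äŸã§ã¯ã次ã®ããã«ã㊠4 ã€ã®å®å
šãªæåè£ãçµã¿ç«ãŠãããŸã (確èªçšã®ä»»æã®ã¹ããããã§ã)ã
```py
>>> example = swag["train"][0]
>>> [f"{example['sent1']} {example['sent2']} {example[end]}" for end in ["ending0", "ending1", "ending2", "ending3"]]
```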
## Preprocess
次ã®ã¹ãããã§ã¯ãBERT ããŒã¯ãã€ã¶ãŒãããŒãããŠãæã®å§ãŸããš 4 ã€ã®å¯èœãªçµãããåŠçããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```
äœæããååŠçé¢æ°ã¯æ¬¡ã®ããšãè¡ãå¿
èŠããããŸãã
1. `sent1` ãã£ãŒã«ãã®ã³ããŒã 4 ã€äœæããããããã `sent2` ãšçµã¿åãããŠæã®å§ãŸããåçŸããŸãã
2. `sent2` ã 4 ã€ã®å¯èœãªææ«å°Ÿã®ãããããšçµã¿åãããŸãã
3. ããã 2 ã€ã®ãªã¹ããããŒã¯ã³åã§ããããã«ãã©ããåãããã®åŸãåäŸã«å¯Ÿå¿ãã `input_ids`ã`attention_mask`ãããã³ `labels` ãã£ãŒã«ããå«ãŸããããã«éãã©ããåããŸãã
```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]
>>> def preprocess_function(examples):
... first_sentences = [[context] * 4 for context in examples["sent1"]]
... question_headers = examples["sent2"]
... second_sentences = [
... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
... ]
... first_sentences = sum(first_sentences, [])
... second_sentences = sum(second_sentences, [])
... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```
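ãªããæåŸã®å
å
衚èšã¯ãå¹³åŠåããŠããŒã¯ã³åããçµæã 1 äŸãããã® 4 éžæè¢ã«ã°ã«ãŒãåãçŽãåŠçã§ãã以äžã¯ãã®åäœã ããä»®ã®ãªã¹ãã§ç¢ºèªããäŸã§ãã
```py
>>> v = list(range(8))  # 2 äŸ Ã 4 éžæè¢åã®å¹³åŠåããããªã¹ããæ³å®
>>> [v[i : i + 4] for i in range(0, len(v), 4)]
[[0, 1, 2, 3], [4, 5, 6, 7]]
```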
ããŒã¿ã»ããå
šäœã«ååŠçé¢æ°ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.map`] ã¡ãœããã䜿çšããŸãã `batched=True` ãèšå®ããŠããŒã¿ã»ããã®è€æ°ã®èŠçŽ ãäžåºŠã«åŠçããããšã§ã`map` é¢æ°ãé«éåã§ããŸãã
```py
>>> tokenized_swag = swag.map(preprocess_function, batched=True)
```
ð€ Transformers ã«ã¯å€è¢éžæçšã®ããŒã¿ç
§ååšããªãããã[`DataCollatorWithPadding`] ã調æŽããŠãµã³ãã«ã®ããããäœæããå¿
èŠããããŸããããŒã¿ã»ããå
šäœãæ倧é·ãŸã§ããã£ã³ã°ããã®ã§ã¯ãªããç
§åäžã«ãããå
ã®æé·ã®é·ããŸã§æã *åçã«ããã£ã³ã°* ããæ¹ãå¹ççã§ãã
`DataCollatorForMultipleChoice` ã¯ããã¹ãŠã®ã¢ãã«å
¥åãå¹³åŠåããããã£ã³ã°ãé©çšããŠãçµæãéå¹³åŠåããŸãã
<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="pt",
... )
... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
... batch["labels"] = torch.tensor(labels, dtype=torch.int64)
... return batch
```
</pt>
<tf>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="tf",
... )
... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
... return batch
```
</tf>
</frameworkcontent>
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšã¯ãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸãã ð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšããŠãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ã¡ããªã¯ã¹ãèªã¿èŸŒã¿ãŸã (ã¡ããªã¯ã¹ã®èªã¿èŸŒã¿ãšèšç®æ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãð€ Evaluate ã® [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç
§ããŠãã ãã)ã
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ããŠç²ŸåºŠãèšç®ããé¢æ°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
ããã§`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#train-with-pytorch-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForMultipleChoice`] ã䜿çšã㊠BERT ãããŒãããŸãã
```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããå¯äžã®å¿
é ãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã§ãã `push_to_hub=True`ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] ã¯ç²ŸåºŠãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç
§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_swag["train"],
... eval_dataset=tokenized_swag["validation"],
... tokenizer=tokenizer,
... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å
±æãã誰ã§ãã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
Keras ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#train-a-tensorflow-model-with-keras) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ããªããã£ãã€ã¶ãŒé¢æ°ãåŠç¿çã¹ã±ãžã¥ãŒã«ãããã³ããã€ãã®ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãŒãã»ããã¢ããããããšããå§ããŸãã
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```
次ã«ã[`TFAutoModelForMultipleChoice`] ã䜿çšã㊠BERT ãããŒãã§ããŸãã
```py
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
[`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_swag["train"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_swag["validation"],
... shuffle=False,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ã䜿çšããŠãã¬ãŒãã³ã°çšã®ã¢ãã«ãèšå®ããŸãã Transformers ã¢ãã«ã«ã¯ãã¹ãŠããã©ã«ãã®ã¿ã¹ã¯é¢é£ã®æ倱é¢æ°ããããããå¿
èŠãªå Žåãé€ããæ倱é¢æ°ãæå®ããå¿
èŠã¯ãªãããšã«æ³šæããŠãã ããã
```py
>>> model.compile(optimizer=optimizer) # No loss argument!
```
ãã¬ãŒãã³ã°ãéå§ããåã«ã»ããã¢ããããæåŸã® 2 ã€ã®ããšã¯ãäºæž¬ãã粟床ãèšç®ããããšãšãã¢ãã«ãããã«ããã·ã¥ããæ¹æ³ãæäŸããããšã§ããã©ã¡ãã [Keras ã³ãŒã«ããã¯](../main_classes/keras_callbacks) ã䜿çšããŠè¡ãããŸãã
`compute_metrics` é¢æ°ã [`~transformers.KerasMetricCallback`] ã«æž¡ããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
[`~transformers.PushToHubCallback`] ã§ã¢ãã«ãšããŒã¯ãã€ã¶ãŒãããã·ã¥ããå Žæãæå®ããŸãã
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_model",
... tokenizer=tokenizer,
... )
```
次ã«ãã³ãŒã«ããã¯ããŸãšããŠãã³ãã«ããŸãã
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠[`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```
ãã¬ãŒãã³ã°ãå®äºãããšãã¢ãã«ã¯èªåçã«ããã«ã¢ããããŒãããã誰ã§ã䜿çšã§ããããã«ãªããŸãã
</tf>
</frameworkcontent>
<Tip>
è€æ°éžæçšã«ã¢ãã«ã埮調æŽããæ¹æ³ã®è©³çŽ°ãªäŸã«ã€ããŠã¯ã察å¿ãã
[PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)
ãŸã㯠[TensorFlow ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb) ãåç
§ããŠãã ããã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
ããã€ãã®ããã¹ããš 2 ã€ã®åçåè£ãèããŠãã ããã
```py
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```
<frameworkcontent>
<pt>
åããã³ãããšåçåè£ã®ãã¢ãããŒã¯ã³åããPyTorch ãã³ãœã«ãè¿ããŸãããŸããããã€ãã®`labels`ãäœæããå¿
èŠããããŸãã
```py
>>> import torch
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```
å
¥åãšã©ãã«ãã¢ãã«ã«æž¡ãã`logits`ãè¿ããŸãã
```py
>>> from transformers import AutoModelForMultipleChoice
>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```
æãé«ã確çã§ã¯ã©ã¹ãååŸããŸãã
```py
>>> predicted_class = logits.argmax().item()
>>> predicted_class
'0'
```
</pt>
<tf>
åããã³ãããšåçåè£ã®ãã¢ãããŒã¯ã³åããTensorFlow ãã³ãœã«ãè¿ããŸãã
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```
å
¥åãã¢ãã«ã«æž¡ãã`logits`ãè¿ããŸãã
```py
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
æãé«ã確çã§ã¯ã©ã¹ãååŸããŸãã
```py
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
'0'
```
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Semantic segmentation
[[open-in-colab]]
<Youtube id="dKE8SIt9C-w"/>
ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã§ã¯ãç»åã®åã
ã®ãã¯ã»ã«ã«ã©ãã«ãŸãã¯ã¯ã©ã¹ãå²ãåœãŠãŸããã»ã°ã¡ã³ããŒã·ã§ã³ã«ã¯ããã€ãã®ã¿ã€ãããããŸãããã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã®å Žåãåããªããžã§ã¯ãã®äžæã®ã€ã³ã¹ã¿ã³ã¹éã®åºå¥ã¯è¡ãããŸãããäž¡æ¹ã®ãªããžã§ã¯ãã«åãã©ãã«ãä»ããããŸã (ããšãã°ããcar-1ããšãcar-2ãã®ä»£ããã«ãcarã)ãã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã®äžè¬çãªçŸå®äžçã®ã¢ããªã±ãŒã·ã§ã³ã«ã¯ãæ©è¡è
ãéèŠãªäº€éæ
å ±ãèå¥ããããã®èªåé転è»ã®ãã¬ãŒãã³ã°ãå»çç»åå
ã®çŽ°èãšç°åžžã®èå¥ãè¡æç»åããã®ç°å¢å€åã®ç£èŠãªã©ãå«ãŸããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [SceneParse150](https://huggingface.co/datasets/scene_parse_150) ããŒã¿ã»ããã® [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) ã埮調æŽããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/image-segmentation) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q datasets transformers evaluate
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SceneParse150 dataset
ãŸããSceneParse150 ããŒã¿ã»ããã®å°ãããµãã»ããã ð€ ããŒã¿ã»ãã ã©ã€ãã©ãªããèªã¿èŸŒã¿ãŸããããã«ãããå®å
šãªããŒã¿ã»ããã®ãã¬ãŒãã³ã°ã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> train_ds[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>,
'scene_category': 368}
```
- `image`: ã·ãŒã³ã® PIL ã€ã¡ãŒãžã
- `annotation`: ã»ã°ã¡ã³ããŒã·ã§ã³ ãããã® PIL ã€ã¡ãŒãžãã¢ãã«ã®ã¿ãŒã²ããã§ããããŸãã
- `scene_category`: ããããã³ããããªãã£ã¹ããªã©ã®ç»åã·ãŒã³ã説æããã«ããŽãª IDããã®ã¬ã€ãã§ã¯ããimageããšãannotationãã®ã¿ãå¿
èŠã«ãªããŸããã©ã¡ãã PIL ç»åã§ãã
ãŸããã©ãã« ID ãã©ãã« ã¯ã©ã¹ã«ãããããèŸæžãäœæããããšãã§ããŸããããã¯ãåŸã§ã¢ãã«ãèšå®ãããšãã«åœ¹ç«ã¡ãŸãããããããããã³ã°ãããŠã³ããŒããã`id2label` ããã³ `label2id` ãã£ã¯ã·ã§ããªãäœæããŸãã
```py
>>> import json
>>> from huggingface_hub import cached_download, hf_hub_url
>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```
## Preprocess
次ã®ã¹ãããã§ã¯ãSegFormer ç»åããã»ããµãããŒãããŠãã¢ãã«ã®ç»åãšæ³šéãæºåããŸãããã®ããŒã¿ã»ããã®ãããªäžéšã®ããŒã¿ã»ããã¯ãããã¯ã°ã©ãŠã³ã ã¯ã©ã¹ãšããŠãŒãã€ã³ããã¯ã¹ã䜿çšããŸãããã ããå®éã«ã¯èæ¯ã¯ã©ã¹ã¯ 150 åã®ã¯ã©ã¹ã«å«ãŸããŠããªãããã`reduce_labels=True`ãèšå®ããŠãã¹ãŠã®ã©ãã«ãã 1 ã€ãåŒãå¿
èŠããããŸãããŒãã€ã³ããã¯ã¹ã¯ `255` ã«çœ®ãæãããããããSegFormer ã®æ倱é¢æ°ã«ãã£ãŠç¡èŠãããŸãã
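`reduce_labels=True` ãè¡ãå€æã®ã€ã¡ãŒãžã¯æ¬¡ã®ãšããã§ãã以äžã¯åçã®æäœã NumPy ã§åçŸãã説æçšã®ã¹ã±ããã§ãããå®éã®åŠçã¯ç»åããã»ããµãè¡ããŸãã
```py
>>> import numpy as np
>>> seg_map = np.array([[0, 1, 2], [150, 0, 3]])  # 0 ãèæ¯ãšããä»®ã®ã»ã°ã¡ã³ããŒã·ã§ã³ ããã
>>> reduced = seg_map - 1  # ãã¹ãŠã®ã©ãã«ãã 1 ãåŒã
>>> reduced[reduced == -1] = 255  # èæ¯ã¯ 255 ã«ç§»ãããæ倱é¢æ°ããã¯ç¡èŠããã
>>> reduced
array([[255,   0,   1],
       [149, 255,   2]])
```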
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "nvidia/mit-b0"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
```
<frameworkcontent>
<pt>
ã¢ãã«ãéåŠç¿ã«å¯ŸããŠããå
ç¢ã«ããããã«ãç»åããŒã¿ã»ããã«ããã€ãã®ããŒã¿æ¡åŒµãé©çšããã®ãäžè¬çã§ãããã®ã¬ã€ãã§ã¯ã[torchvision](https://pytorch.org/vision/stable/index.html) ã® [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) é¢æ°ã䜿çšããŠç»åã®è²ã®ããããã£ãã©ã³ãã ã«å€æŽããŸãããä»»æã®ç»åã©ã€ãã©ãªã䜿çšããããšãã§ããŸãã
```py
>>> from torchvision.transforms import ColorJitter
>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
```
次ã«ãã¢ãã«ã®ç»åãšæ³šéãæºåããããã® 2 ã€ã®ååŠçé¢æ°ãäœæããŸãããããã®é¢æ°ã¯ãç»åã`pixel_values`ã«å€æãã泚éã`labels`ã«å€æããŸãããã¬ãŒãã³ã° ã»ããã®å Žåãç»åãç»åããã»ããµã«æäŸããåã«`jitter`ãé©çšãããŸãããã¹ã ã»ããã®å Žåããã¹ãäžã«ããŒã¿æ¡åŒµãé©çšãããªããããç»åããã»ããµã¯`images`ãåãåã£ãŠæ£èŠåãã`labels` ã®ã¿ãåãåããŸãã
```py
>>> def train_transforms(example_batch):
... images = [jitter(x) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [x for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
ããŒã¿ã»ããå
šäœã«`jitter`ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.set_transform`] é¢æ°ã䜿çšããŸããå€æã¯ãªã³ã¶ãã©ã€ã§é©çšããããããé«éã§æ¶è²»ãããã£ã¹ã¯å®¹éãå°ãªããªããŸãã
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ã¢ãã«ãéåŠç¿ã«å¯ŸããŠããå
ç¢ã«ããããã«ãç»åããŒã¿ã»ããã«ããã€ãã®ããŒã¿æ¡åŒµãé©çšããã®ãäžè¬çã§ãããã®ã¬ã€ãã§ã¯ã[`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) ã䜿çšããŠç»åã®è²ã®ããããã£ãã©ã³ãã ã«å€æŽããŸãããä»»æã®ç»åã©ã€ãã©ãªã䜿çšããããšãã§ããŸãã
2 ã€ã®å¥ã
ã®å€æé¢æ°ãå®çŸ©ããŸãã
- ç»åæ¡åŒµãå«ããã¬ãŒãã³ã° ããŒã¿å€æ
- ð€ Transformers ã®ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¢ãã«ã¯ãã£ãã«åªå
ã®ã¬ã€ã¢ãŠããæ³å®ããŠãããããç»åã転眮ããã ãã®æ€èšŒããŒã¿å€æ
```py
>>> import tensorflow as tf
>>> def aug_transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.image.random_brightness(image, 0.25)
... image = tf.image.random_contrast(image, 0.5, 2.0)
... image = tf.image.random_saturation(image, 0.75, 1.25)
... image = tf.image.random_hue(image, 0.1)
... image = tf.transpose(image, (2, 0, 1))
... return image
>>> def transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.transpose(image, (2, 0, 1))
... return image
```
次ã«ãã¢ãã«çšã«ç»åãšæ³šéã®ããããæºåãã 2 ã€ã®ååŠçé¢æ°ãäœæããŸãããããã®é¢æ°ã¯ç»åå€æãé©çšããå
ã»ã©ããŒãããã `image_processor` ã䜿çšããŠç»åã `pixel_values` ã«ã泚éã `labels` ã«å€æããŸãã `ImageProcessor` ã¯ãç»åã®ãµã€ãºå€æŽãšæ£èŠåãåŠçããŸãã
```py
>>> def train_transforms(example_batch):
... images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
ããŒã¿ã»ããå
šäœã«ååŠçå€æãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.set_transform`] é¢æ°ã䜿çšããŸãã
å€æã¯ãªã³ã¶ãã©ã€ã§é©çšããããããé«éã§æ¶è²»ãããã£ã¹ã¯å®¹éãå°ãªããªããŸãã
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</tf>
</frameworkcontent>
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšã¯ãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸãã ð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšããŠãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[Mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) ã¡ããªã¯ã¹ãããŒãããŸã (ã¡ããªã¯ã¹ã®ããŒããšèšç®æ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãð€ Evaluate ã® [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç
§ããŠãã ãã)ã
```py
>>> import evaluate
>>> metric = evaluate.load("mean_iou")
```
次ã«ãã¡ããªã¯ã¹ã [`~evaluate.EvaluationModule.compute`] ããé¢æ°ãäœæããŸããäºæž¬ããŸãããžããã«å€æãã[`~evaluate.EvaluationModule.compute`] ãåŒã³åºãåã«ãã©ãã«ã®ãµã€ãºã«äžèŽããããã«å圢æããå¿
èŠããããŸãã
<frameworkcontent>
<pt>
```py
>>> import numpy as np
>>> import torch
>>> from torch import nn
>>> def compute_metrics(eval_pred):
... with torch.no_grad():
... logits, labels = eval_pred
... logits_tensor = torch.from_numpy(logits)
... logits_tensor = nn.functional.interpolate(
... logits_tensor,
... size=labels.shape[-2:],
... mode="bilinear",
... align_corners=False,
... ).argmax(dim=1)
... pred_labels = logits_tensor.detach().cpu().numpy()
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=255,
... reduce_labels=False,
... )
... for key, value in metrics.items():
... if type(value) is np.ndarray:
... metrics[key] = value.tolist()
... return metrics
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
```py
>>> def compute_metrics(eval_pred):
... logits, labels = eval_pred
... logits = tf.transpose(logits, perm=[0, 2, 3, 1])
... logits_resized = tf.image.resize(
... logits,
... size=tf.shape(labels)[1:],
... method="bilinear",
... )
... pred_labels = tf.argmax(logits_resized, axis=-1)
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=-1,
... reduce_labels=image_processor.do_reduce_labels,
... )
... per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
... per_category_iou = metrics.pop("per_category_iou").tolist()
... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
... return {"val_" + k: v for k, v in metrics.items()}
```
</tf>
</frameworkcontent>
ããã§`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#finetune-with-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForSemanticSegmentation`] ã䜿çšã㊠SegFormer ãããŒãããã©ãã« ID ãšã©ãã« ã¯ã©ã¹éã®ãããã³ã°ãã¢ãã«ã«æž¡ããŸãã
```py
>>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããæªäœ¿çšã®åãåé€ããªãããšãéèŠã§ããåé€ãããš `image` åããªããªãã`image` åããªããš `pixel_values` ãäœæã§ããŸããããã®åäœãé²ãã«ã¯ã`remove_unused_columns=False`ãèšå®ããŠãã ãããä»ã«å¿
èŠãªãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã ãã§ãã `push_to_hub=True`ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] 㯠IoU ã¡ããªãã¯ãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç
§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="segformer-b0-scene-parse-150",
... learning_rate=6e-5,
... num_train_epochs=50,
... per_device_train_batch_size=2,
... per_device_eval_batch_size=2,
... save_total_limit=3,
... eval_strategy="steps",
... save_strategy="steps",
... save_steps=20,
... eval_steps=20,
... logging_steps=1,
... eval_accumulation_steps=5,
... remove_unused_columns=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=train_ds,
... eval_dataset=test_ds,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å
±æãã誰ã§ãã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
Keras ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ããŸã [åºæ¬ãã¥ãŒããªã¢ã«](./training#train-a-tensorflow-model-with-keras) ã確èªããŠãã ããã
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ã次ã®æé ã«åŸããŸãã
1. ãã¬ãŒãã³ã°ã®ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ãããªããã£ãã€ã¶ãŒãšåŠç¿çã¹ã±ãžã¥ãŒã«ãèšå®ããŸãã
2. äºåãã¬ãŒãã³ã°ãããã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããŸãã
3. ð€ ããŒã¿ã»ããã `tf.data.Dataset` ã«å€æããŸãã
4. ã¢ãã«ãã³ã³ãã€ã«ããŸãã
5. ã³ãŒã«ããã¯ãè¿œå ããŠã¡ããªã¯ã¹ãèšç®ããã¢ãã«ã ð€ Hub ã«ã¢ããããŒãããŸã
6. `fit()` ã¡ãœããã䜿çšããŠãã¬ãŒãã³ã°ãå®è¡ããŸãã
ãŸãããã€ããŒãã©ã¡ãŒã¿ãŒããªããã£ãã€ã¶ãŒãåŠç¿çã¹ã±ãžã¥ãŒã«ãå®çŸ©ããŸãã
```py
>>> from transformers import create_optimizer
>>> batch_size = 2
>>> num_epochs = 50
>>> num_train_steps = len(train_ds) * num_epochs
>>> learning_rate = 6e-5
>>> weight_decay_rate = 0.01
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
次ã«ãã©ãã« ãããã³ã°ãšãšãã« [`TFAutoModelForSemanticSegmentation`] ã䜿çšã㊠SegFormer ãããŒãããªããã£ãã€ã¶ãŒã§ã³ã³ãã€ã«ããŸãã Transformers ã¢ãã«ã«ã¯ãã¹ãŠããã©ã«ãã®ã¿ã¹ã¯é¢é£ã®æ倱é¢æ°ããããããå¿
èŠãªå Žåãé€ããæ倱é¢æ°ãæå®ããå¿
èŠã¯ãªãããšã«æ³šæããŠãã ããã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
>>> model.compile(optimizer=optimizer) # No loss argument!
```
[`~datasets.Dataset.to_tf_dataset`] ãš [`DefaultDataCollator`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
>>> tf_train_dataset = train_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_eval_dataset = test_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
äºæž¬ãã粟床ãèšç®ããã¢ãã«ã ð€ ããã«ããã·ã¥ããã«ã¯ã[Keras callbacks](../main_classes/keras_callbacks) ã䜿çšããŸãã
`compute_metrics` é¢æ°ã [`KerasMetricCallback`] ã«æž¡ããŸãã
ãã㊠[`PushToHubCallback`] ã䜿çšããŠã¢ãã«ãã¢ããããŒãããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
>>> metric_callback = KerasMetricCallback(
... metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
... )
>>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", image_processor=image_processor)
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠`fit()` ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(
... tf_train_dataset,
... validation_data=tf_eval_dataset,
... callbacks=callbacks,
... epochs=num_epochs,
... )
```
ããã§ãšãïŒã¢ãã«ã埮調æŽããð€ Hub ã§å
±æããŸãããããã§æšè«ã«äœ¿çšã§ããããã«ãªããŸããã
</tf>
</frameworkcontent>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ã®ããã«ç»åãããŒãããŸãã
```py
>>> image = ds[0]["image"]
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"/>
</div>
<frameworkcontent>
<pt>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠç»åã»ã°ã¡ã³ããŒã·ã§ã³çšã® `pipeline` ãã€ã³ã¹ã¿ã³ã¹åããããã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
>>> segmenter(image)
[{'score': None,
'label': 'wall',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>},
{'score': None,
'label': 'sky',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>},
{'score': None,
'label': 'floor',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>},
{'score': None,
'label': 'ceiling',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>},
{'score': None,
'label': 'bed ',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>},
{'score': None,
'label': 'windowpane',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>},
{'score': None,
'label': 'cabinet',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>},
{'score': None,
'label': 'chair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>},
{'score': None,
'label': 'armchair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}]
```
å¿
èŠã«å¿ããŠã`pipeline` ã®çµæãæåã§è€è£œããããšãã§ããŸããç»åããã»ããµã§ç»åãåŠçãã`pixel_values`ã GPU ã«é
眮ããŸãã
```py
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
>>> encoding = image_processor(image, return_tensors="pt")
>>> pixel_values = encoding.pixel_values.to(device)
```
å
¥åãã¢ãã«ã«æž¡ãã`logits`ãè¿ããŸãã
```py
>>> outputs = model(pixel_values=pixel_values)
>>> logits = outputs.logits.cpu()
```
次ã«ãããžãããå
ã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããŸãã
```py
>>> upsampled_logits = nn.functional.interpolate(
... logits,
... size=image.size[::-1],
... mode="bilinear",
... align_corners=False,
... )
>>> pred_seg = upsampled_logits.argmax(dim=1)[0]
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ç»åããã»ããµãããŒãããŠç»åãååŠçããå
¥åã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
>>> inputs = image_processor(image, return_tensors="tf")
```
å
¥åãã¢ãã«ã«æž¡ãã`logits`ãè¿ããŸãã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
>>> logits = model(**inputs).logits
```
次ã«ãããžãããå
ã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããã¯ã©ã¹æ¬¡å
ã« argmax ãé©çšããŸãã
```py
>>> logits = tf.transpose(logits, [0, 2, 3, 1])
>>> upsampled_logits = tf.image.resize(
... logits,
... # We reverse the shape of `image` because `image.size` returns width and height.
... image.size[::-1],
... )
>>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
```
</tf>
</frameworkcontent>
çµæãèŠèŠåããã«ã¯ã[ããŒã¿ã»ãã ã«ã©ãŒ ãã¬ãã](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) ãããããããããããã `ade_palette()` ãšããŠããŒãããŸããã¯ã©ã¹ã RGB å€ã«å€æããŸãã次ã«ãç»åãšäºæž¬ãããã»ã°ã¡ã³ããŒã·ã§ã³ ããããçµã¿åãããŠããããã§ããŸãã
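ãªã³ã¯å
ã® `ade_palette()` ãæå
ã«ãªãå Žåã§ããåäœç¢ºèªã ããªã以äžã®ãããªä»®ã®ãã¬ããã§ä»£çšã§ããŸã (è²ã¯ã©ã³ãã ã§ãADE20K ã®æ£èŠã«ã©ãŒãããããšã¯ç°ãªããŸã)ã
```py
>>> import numpy as np
>>> def ade_palette():
...     # 150 ã¯ã©ã¹åã®ä»®ã® RGB ãã¬ãããçæ (åçŸæ§ã®ãã seed ãåºå®)
...     rng = np.random.default_rng(seed=0)
...     return rng.integers(0, 256, size=(150, 3), dtype=np.uint8)
```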
```py
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> palette = np.array(ade_palette())
>>> for label, color in enumerate(palette):
... color_seg[pred_seg == label, :] = color
>>> color_seg = color_seg[..., ::-1] # convert to BGR
>>> img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
>>> img = img.astype(np.uint8)
>>> plt.figure(figsize=(15, 10))
>>> plt.imshow(img)
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/>
</div>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Automatic speech recognition
[[open-in-colab]]
<Youtube id="TksaY_FDgnk"/>
èªåé³å£°èªè (ASR) ã¯é³å£°ä¿¡å·ãããã¹ãã«å€æããäžé£ã®é³å£°å
¥åãããã¹ãåºåã«ãããã³ã°ããŸãã Siri ã Alexa ãªã©ã®ä»®æ³ã¢ã·ã¹ã¿ã³ã㯠ASR ã¢ãã«ã䜿çšããŠãŠãŒã¶ãŒãæ¥åžžçã«æ¯æŽããŠãããã©ã€ããã£ãã·ã§ã³ãäŒè°äžã®ã¡ã¢åããªã©ãä»ã«ã䟿å©ãªãŠãŒã¶ãŒåãã¢ããªã±ãŒã·ã§ã³ãæ°å€ããããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ããŒã¿ã»ããã® [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) ã埮調æŽããŠãé³å£°ãããã¹ãã«æžãèµ·ãããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/automatic-speech-recognition) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install transformers datasets evaluate jiwer
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset
ãŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªãã [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ããŒã¿ã»ããã®å°ãããµãã»ãããããŒãããŸããããã«ãããå®å
šãªããŒã¿ã»ããã®ãã¬ãŒãã³ã°ã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
次ã«ãããŒã¿ã»ãããèŠãŠã¿ãŸãããã
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 16
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 4
})
})
```
ããŒã¿ã»ããã«ã¯ `lang_id` ã `english_transcription` ãªã©ã®å€ãã®æçšãªæ
å ±ãå«ãŸããŠããŸããããã®ã¬ã€ãã§ã¯ `audio` ãš `transcription` ã«çŠç¹ãåœãŠãŸãã[`~datasets.Dataset.remove_columns`] ã¡ãœããã䜿çšããŠä»ã®åãåé€ããŸãã
```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```
ããäžåºŠäŸãèŠãŠã¿ãŸãããã
```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414,
0.00024414, 0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 8000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
次㮠2 ã€ã®ãã£ãŒã«ãããããŸãã
- `audio`: é³å£°ãã¡ã€ã«ãããŒãããŠãªãµã³ããªã³ã°ããããã«åŒã³åºãå¿
èŠãããé³å£°ä¿¡å·ã® 1 次å
ã® `array`ã
- `transcription`: ã¿ãŒã²ããããã¹ãã
## Preprocess
次ã®ã¹ãããã§ã¯ãWav2Vec2 ããã»ããµãããŒãããŠãªãŒãã£ãªä¿¡å·ãåŠçããŸãã
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```
MInDS-14 ããŒã¿ã»ããã®ãµã³ããªã³ã° ã¬ãŒã㯠8000Hz ã§ã (ãã®æ
å ±ã¯ [ããŒã¿ã»ãã ã«ãŒã](https://huggingface.co/datasets/PolyAI/minds14) ã§ç¢ºèªã§ããŸã)ãã€ãŸããäºåãã¬ãŒãã³ã°ããã Wav2Vec2 ã¢ãã«ã䜿çšããã«ã¯ãããŒã¿ã»ããã 16000Hz ã«åãµã³ããªã³ã°ããå¿
èŠããããŸãã
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 16000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
äžã® `transcription` ã§ãããããã«ãããã¹ãã«ã¯å€§æåãšå°æåãæ··åšããŠããŸããWav2Vec2 ããŒã¯ãã€ã¶ãŒã¯å€§æåã®ã¿ã§ãã¬ãŒãã³ã°ããããããããã¹ããããŒã¯ãã€ã¶ãŒã®èªåœãšäžèŽããããšã確èªããå¿
èŠããããŸãã
```py
>>> def uppercase(example):
... return {"transcription": example["transcription"].upper()}
>>> minds = minds.map(uppercase)
```
次ã«ã次ã®ååŠçé¢æ°ãäœæããŸãã
1. `audio`åãåŒã³åºããŠããªãŒãã£ãª ãã¡ã€ã«ãããŒãããŠãªãµã³ããªã³ã°ããŸãã
2. ãªãŒãã£ãª ãã¡ã€ã«ãã `input_values` ãæœåºããããã»ããµã䜿çšã㊠`transcription` åãããŒã¯ã³åããŸãã
```py
>>> def prepare_dataset(batch):
... audio = batch["audio"]
... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
... batch["input_length"] = len(batch["input_values"][0])
... return batch
```
ããŒã¿ã»ããå
šäœã«ååŠçé¢æ°ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.map`] é¢æ°ã䜿çšããŸãã`num_proc` ãã©ã¡ãŒã¿ã§ããã»ã¹ã®æ°ãå¢ããããšã«ããã`map` ãé«éåã§ããŸãã[`~datasets.Dataset.remove_columns`] ã¡ãœããã䜿çšããŠãäžèŠãªåãåé€ããŸãã
```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```
ð€ Transformers ã«ã¯ ASR çšã®ããŒã¿ç
§ååšããªãããã[`DataCollatorWithPadding`] ã調æŽããŠãµã³ãã«ã®ããããäœæããå¿
èŠããããŸãããŸããããã¹ããšã©ãã«ã (ããŒã¿ã»ããå
šäœã§ã¯ãªã) ãããå
ã®æãé·ãèŠçŽ ã®é·ãã«åãããŠåçã«åã蟌ãŸããåäžãªé·ãã«ãªããŸãã`padding=True` ãèšå®ããã° `tokenizer` é¢æ°ã§ãããã¹ããåã蟌ãããšãã§ããŸãããåçãªåã蟌ã¿ã®æ¹ãå¹ççã§ãã

ä»ã®ããŒã¿ç
§ååšãšã¯ç°ãªãããã®ç¹å®ã®ããŒã¿ç
§ååšã¯ `input_values` ãš `labels` ã«ç°ãªãããã£ã³ã°æ¹æ³ãé©çšããå¿
èŠããããŸãã
```py
>>> import torch
>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union
>>> @dataclass
... class DataCollatorCTCWithPadding:
... processor: AutoProcessor
... padding: Union[bool, str] = "longest"
... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
... # split inputs and labels since they have to be of different lengths and need
... # different padding methods
... input_features = [{"input_values": feature["input_values"][0]} for feature in features]
... label_features = [{"input_ids": feature["labels"]} for feature in features]
... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
... # replace padding with -100 to ignore loss correctly
... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
... batch["labels"] = labels
... return batch
```
次ã«ã`DataCollatorCTCWithPadding` ãã€ã³ã¹ã¿ã³ã¹åããŸãã
```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸããð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšãããšãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[åèªãšã©ãŒç](https://huggingface.co/spaces/evaluate-metric/wer) (WER) ã¡ããªã¯ã¹ãèªã¿èŸŒã¿ãŸã (ã¡ããªã¯ã¹ã®ããŒããšèšç®æ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãð€ Evaluate ã® [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç
§ããŠãã ãã)ã
```py
>>> import evaluate
>>> wer = evaluate.load("wer")
```
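WER ã¯ãåç
§æãšã®åèªåäœã®ç·šéè·é¢ (眮æã»æ¿å
¥ã»åé€ã®æ°) ãåç
§åèªæ°ã§å²ã£ãå€ã§ããæåãã€ããããã®å°ããªäŸã以äžã«ç€ºããŸã (æååã¯ä»®ã®ãã®ã§ã)ã

```py
>>> # 2 åèªäž 1 åèªã眮æãããŠããã®ã§ WER 㯠0.5 ã«ãªããŸã
>>> wer.compute(predictions=["hello world"], references=["hello duck"])
0.5
```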
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ã㊠WER ãèšç®ããé¢æ°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(pred):
... pred_logits = pred.predictions
... pred_ids = np.argmax(pred_logits, axis=-1)
... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
... pred_str = processor.batch_decode(pred_ids)
... label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
...     # ã°ããŒãã«ã® `wer` ã¡ããªã¯ã¹ãé ããªãããã«ãå¥åã®å€æ°ã«çµæãæ ŒçŽããŸã
...     wer_score = wer.compute(predictions=pred_str, references=label_str)
...     return {"wer": wer_score}
```
ããã§`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#train-with-pytorch-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã[`AutoModelForCTC`] 㧠Wav2Vec2 ãããŒãããŸããCTC æ倱ã«é©çšããåæžæ¹æ³ã¯ `ctc_loss_reduction` ãã©ã¡ãŒã¿ã§æå®ããŸããå€ãã®å Žåãããã©ã«ãã®åèšã§ã¯ãªãå¹³åã䜿çšããæ¹ãé©åã§ãã
```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer
>>> model = AutoModelForCTC.from_pretrained(
... "facebook/wav2vec2-base",
... ctc_loss_reduction="mean",
... pad_token_id=processor.tokenizer.pad_token_id,
... )
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããå¯äžã®å¿
é ãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã§ãã`push_to_hub=True` ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] 㯠WER ãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç
§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_asr_mind_model",
... per_device_train_batch_size=8,
... gradient_accumulation_steps=2,
... learning_rate=1e-5,
... warmup_steps=500,
... max_steps=2000,
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
... logging_steps=25,
... load_best_model_at_end=True,
... metric_for_best_model="wer",
... greater_is_better=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=processor,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã[`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å
±æãã誰ããã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
èªåé³å£°èªèçšã«ã¢ãã«ã埮調æŽããæ¹æ³ã®ãã詳现ãªäŸã«ã€ããŠã¯ãè±èª ASR ã«é¢ãããã®ããã° [æçš¿](https://huggingface.co/blog/fine-tune-wav2vec2-english) ãåç
§ããŠãã ãããå€èšèª ASR ã«ã€ããŠã¯ããã® [æçš¿](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) ãåç
§ããŠãã ããã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ãå®è¡ãããé³å£°ãã¡ã€ã«ãããŒãããŸããå¿
èŠã«å¿ããŠããªãŒãã£ãª ãã¡ã€ã«ã®ãµã³ããªã³ã° ã¬ãŒããã¢ãã«ã®ãµã³ããªã³ã° ã¬ãŒããšäžèŽããããã«ãªãµã³ããªã³ã°ããããšãå¿ããªãã§ãã ããã
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠèªåé³å£°èªèçšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åãããªãŒãã£ãª ãã¡ã€ã«ãããã«æž¡ããŸãã
```py
>>> from transformers import pipeline
>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```
<Tip>
æåèµ·ããã®çµæã¯ãŸããŸãã§ããããã«è¯ãçµæãåŸãã«ã¯ãããå€ãã®äŸã§ã¢ãã«ã埮調æŽããŠã¿ãŠãã ããã
</Tip>
å¿
èŠã«å¿ããŠã`pipeline` ã®çµæãæåã§åçŸããããšãã§ããŸãã
<frameworkcontent>
<pt>
ããã»ããµãããŒãããŠãªãŒãã£ãª ãã¡ã€ã«ãšæåèµ·ãããååŠçãã`input` ã PyTorch ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
å
¥åãã¢ãã«ã«æž¡ããããžãããè¿ããŸãã
```py
>>> import torch
>>> from transformers import AutoModelForCTC
>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
æãé«ã確çã§äºæž¬ããã `input_ids` ãååŸããããã»ããµã䜿çšããŠäºæž¬ããã `input_ids` ããã³ãŒãããŠããã¹ãã«æ»ããŸãã
```py
>>> import torch
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
</pt>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image-to-Image Task Guide
[[open-in-colab]]
Image-to-Image ã¿ã¹ã¯ã¯ãã¢ããªã±ãŒã·ã§ã³ãç»åãåä¿¡ããå¥ã®ç»åãåºåããã¿ã¹ã¯ã§ããããã«ã¯ãç»å匷å (è¶
解å床ãäœå
é匷åããã£ã¬ã€ã³ãªã©)ãç»å修埩ãªã©ãå«ãããŸããŸãªãµãã¿ã¹ã¯ããããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
- è¶
解å床ã¿ã¹ã¯ã« image-to-image ãã€ãã©ã€ã³ã䜿çšããŸãã
- ãã€ãã©ã€ã³ã䜿çšããã«ãåãã¿ã¹ã¯ã«å¯Ÿã㊠image-to-image ã¢ãã«ãå®è¡ããŸãã
ãã®ã¬ã€ãããªãªãŒã¹ãããæç¹ã§ã¯ã`image-to-image` ãã€ãã©ã€ã³ã¯è¶
解å床ã¿ã¹ã¯ã®ã¿ããµããŒãããŠããããšã«æ³šæããŠãã ããã
å¿
èŠãªã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããããšããå§ããŸãããã
```bash
pip install transformers
```
[Swin2SR ã¢ãã«](https://huggingface.co/caidas/swin2SR-lightweight-x2-64) ã䜿çšããŠãã€ãã©ã€ã³ãåæåããŸãããã®åŸãç»åãæž¡ããŠãã€ãã©ã€ã³ãåŒã³åºãããšã§æšè«ãå®è¡ã§ããŸããçŸæç¹ã§ã¯ã[Swin2SR ã¢ãã«](https://huggingface.co/models?sort=trending&search=swin2sr) ã®ã¿ããã®ãã€ãã©ã€ã³ã§ãµããŒããããŠããŸãã
```python
import torch
from transformers import pipeline

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
pipe = pipeline(task="image-to-image", model="caidas/swin2SR-lightweight-x2-64", device=device)
```
ã§ã¯ãç»åãèªã¿èŸŒã¿ãŸãããã
```python
from PIL import Image
import requests
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg"
image = Image.open(requests.get(url, stream=True).raw)
print(image.size)
```
```bash
# (532, 432)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg" alt="Photo of a cat"/>
</div>
ããã§ããã€ãã©ã€ã³ã䜿çšããŠæšè«ãå®è¡ã§ããããã«ãªããŸãããç«ã®ç»åã®æ¡å€§ããŒãžã§ã³ãååŸããŸãã
```python
upscaled = pipe(image)
print(upscaled.size)
```
```bash
# (1072, 880)
```
ãã€ãã©ã€ã³ã䜿çšããã«èªåã§æšè«ãå®è¡ãããå Žåã¯ããã©ã³ã¹ãã©ãŒããŒã® `Swin2SRForImageSuperResolution` ã¯ã©ã¹ãš `Swin2SRImageProcessor` ã¯ã©ã¹ã䜿çšã§ããŸããããã«ã¯åãã¢ãã«ã®ãã§ãã¯ãã€ã³ãã䜿çšããŸããã¢ãã«ãšããã»ããµãåæåããŸãããã
```python
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-lightweight-x2-64").to(device)
processor = Swin2SRImageProcessor.from_pretrained("caidas/swin2SR-lightweight-x2-64")
```
`pipeline` ã¯ãèªåã§è¡ãå¿
èŠãããååŠçãšåŸåŠçã®ã¹ããããæœè±¡åããŠãããã®ã§ãããã§ã¯èªåã§ç»åãååŠçããŸããããç»åãããã»ããµã«æž¡ããŠããããã¯ã»ã«å€ã GPU ã«ç§»åããŸãã
```python
pixel_values = processor(image, return_tensors="pt").pixel_values
print(pixel_values.shape)
pixel_values = pixel_values.to(device)
```
ããã§ããã¯ã»ã«å€ãã¢ãã«ã«æž¡ãããšã§ç»åãæšæž¬ã§ããããã«ãªããŸããã
```python
import torch
with torch.no_grad():
outputs = model(pixel_values)
```
åºåã¯ã以äžã®ãã㪠`ImageSuperResolutionOutput` ã¿ã€ãã®ãªããžã§ã¯ãã§ã ð
```
(loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, ..., 0.7463, 0.7446, 0.7453],
[0.8287, 0.8278, 0.8283, ..., 0.7451, 0.7448, 0.7457],
[0.8280, 0.8273, 0.8269, ..., 0.7447, 0.7446, 0.7452],
...,
[0.5923, 0.5933, 0.5924, ..., 0.0697, 0.0695, 0.0706],
[0.5926, 0.5932, 0.5926, ..., 0.0673, 0.0687, 0.0705],
[0.5927, 0.5914, 0.5922, ..., 0.0664, 0.0694, 0.0718]]]],
device='cuda:0'), hidden_states=None, attentions=None)
```
`reconstruction` ãååŸãããããèŠèŠåããããã«åŸåŠçããå¿
èŠããããŸããã©ã®ãããªåœ¢ç¶ã«ãªã£ãŠããã®ãèŠãŠã¿ãŸãããã
```python
outputs.reconstruction.data.shape
# torch.Size([1, 3, 880, 1072])
```
åºåã `squeeze` ããŠè»ž 0 ãåé€ããå€ãã¯ãªããããŠãã numpy float ã«å€æããå¿
èŠããããŸãã次ã«ã軞ã [1072, 880] ã®åœ¢ç¶ã«ãªãããã«é
眮ããæåŸã«åºåããã¯ã»ã«å€ã®ç¯å² [0, 255] ã«æ»ããŸãã
```python
import numpy as np
# squeeze, take to CPU and clip the values
output = outputs.reconstruction.data.squeeze().cpu().clamp_(0, 1).numpy()
# rearrange the axes
output = np.moveaxis(output, source=0, destination=-1)
# bring values back to pixel values range
output = (output * 255.0).round().astype(np.uint8)
Image.fromarray(output)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat_upscaled.png" alt="Upscaled photo of a cat"/>
</div>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Object detection
[[open-in-colab]]
ãªããžã§ã¯ãæ€åºã¯ãç»åå
ã®ã€ã³ã¹ã¿ã³ã¹ (人éã建ç©ãè»ãªã©) ãæ€åºããã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¿ã¹ã¯ã§ããç©äœæ€åºã¢ãã«ã¯ç»åãå
¥åãšããŠåãåããæ€åºããåãªããžã§ã¯ãã®å¢çããã¯ã¹ã®åº§æšãšé¢é£ããã©ãã«ãåºåããŸããç»åã«ã¯è€æ°ã®ãªããžã§ã¯ããå«ãŸããŠãããããããããç¬èªã®å¢çããã¯ã¹ãšã©ãã«ãæã€ããšãã§ã (äŸ: è»ãšå»ºç©)ãåãªããžã§ã¯ãã¯ç»åã®ããŸããŸãªéšåã«ååšããå¯èœæ§ããããŸã (ããšãã°ãç»åã«ã¯è€æ°ã®è»ãå«ãŸããŠããå¯èœæ§ããããŸã)ã

ãã®ã¿ã¹ã¯ã¯ãæ©è¡è
ãéè·¯æšèãä¿¡å·æ©ãªã©ãæ€åºããããã«èªåé転ã§äžè¬çã«äœ¿çšãããŸããä»ã®ã¢ããªã±ãŒã·ã§ã³ã«ã¯ãç»åå
ã®ãªããžã§ã¯ãã®ã«ãŠã³ããç»åæ€çŽ¢ãªã©ããããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ãåŠç¿ããŸãã
1. ç³ã¿èŸŒã¿ããã¯ããŒã³ãšãšã³ã³ãŒããŒ/ãã³ãŒã㌠ãã©ã³ã¹ãã©ãŒããŒãçµã¿åãããã¢ãã«ã§ãã [DETR](https://huggingface.co/docs/transformers/model_doc/detr) ãã[CPPE-5](https://huggingface.co/datasets/cppe-5) ããŒã¿ã»ããã§åŸ®èª¿æŽããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/object-detection) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q datasets transformers evaluate timm albumentations
```
ð€ ããŒã¿ã»ããã䜿çšã㊠Hugging Face Hub ããããŒã¿ã»ãããããŒãããð€ ãã©ã³ã¹ãã©ãŒããŒã䜿çšããŠã¢ãã«ããã¬ãŒãã³ã°ããŸãã
ããŒã¿å¢åŒ·ã«ã¯ `albumentations` ã䜿çšããŸãã`timm` ã¯çŸåšãDETR ã¢ãã«ã®ç³ã¿èŸŒã¿ããã¯ããŒã³ãããŒãããããã«å¿
èŠã§ãã

ã¢ãã«ãã³ãã¥ããã£ãšå
±æããããšããå§ãããŸããHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããŠãããã«ã¢ããããŒãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load the CPPE-5 dataset
[CPPE-5 ããŒã¿ã»ãã](https://huggingface.co/datasets/cppe-5) ã«ã¯ãæ°åã³ãããŠã€ã«ã¹ææçã®ãã³ãããã¯ã«ãããå»ççšå人ä¿è·å
· (PPE) ãèå¥ãã泚éä»ãã®ç»åãå«ãŸããŠããŸãã
ããŒã¿ã»ãããããŒãããããšããå§ããŸãã
```py
>>> from datasets import load_dataset
>>> cppe5 = load_dataset("cppe-5")
>>> cppe5
DatasetDict({
train: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 1000
})
test: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 29
})
})
```
ãã®ããŒã¿ã»ããã«ã¯ã1000 æã®ç»åãå«ããã¬ãŒãã³ã° ã»ãããš 29 æã®ç»åãå«ããã¹ã ã»ããããã§ã«ä»å±ããŠããããšãããããŸãã
ããŒã¿ã«æ
£ããããã«ãäŸãã©ã®ãããªãã®ãã調ã¹ãŠã¿ãŸãããã
```py
>>> cppe5["train"][0]
{'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>,
'width': 943,
'height': 663,
'objects': {'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]],
'category': [4, 4, 0, 0]}}
```
ããŒã¿ã»ããå
ã®äŸã«ã¯æ¬¡ã®ãã£ãŒã«ãããããŸãã
- `image_id`: ãµã³ãã«ã®ç»åID
- `image`: ç»åãå«ã `PIL.Image.Image` ãªããžã§ã¯ã
- `width`: ç»åã®å¹
- `height`: ç»åã®é«ã
- `objects`: ç»åå
ã®ãªããžã§ã¯ãã®å¢çããã¯ã¹ã®ã¡ã¿ããŒã¿ãå«ãèŸæž:
- `id`: ã¢ãããŒã·ã§ã³ID
- `area`: å¢çããã¯ã¹ã®é å
- `bbox`: ãªããžã§ã¯ãã®å¢çããã¯ã¹ ([COCO 圢åŒ](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) )
- `category`: ãªããžã§ã¯ãã®ã«ããŽãªãŒãå¯èœãªå€ã«ã¯ã`Coverall (0)`ã`Face_Shield (1)`ã`Gloves (2)`ã`Goggles (3)`ãããã³ `Mask (4)` ãå«ãŸããŸãã
`bbox` ãã£ãŒã«ãã COCO 圢åŒã«åŸã£ãŠããããšã«æ°ä»ããããããŸããããã㯠DETR ã¢ãã«ãæåŸ
ãã圢åŒã§ãããã ãã`objects` å
ã®ãã£ãŒã«ãã®ã°ã«ãŒãåã¯ãDETR ãå¿
èŠãšãã泚é圢åŒãšã¯ç°ãªããŸãããã®ããŒã¿ããã¬ãŒãã³ã°ã«äœ¿çšããåã«ãããã€ãã®ååŠçå€æãé©çšããå¿
èŠããããŸãã
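COCO 圢åŒã®å¢çããã¯ã¹ã¯ `[x_min, y_min, width, height]` ã§è¡šãããŸããåŸã»ã©ã®åŸåŠçã§ç»å Žãã Pascal VOC 圢åŒ `[x_min, y_min, x_max, y_max]` ãšã®é¢ä¿ã¯ã次ã®ãããªå°ããªå€æé¢æ°ã§æŽçã§ããŸã (説æçšã®ä»®ã®ãã«ããŒã§ãæ¬æã®ã³ãŒãã§ã¯äœ¿çšããŸãã)ã

```py
>>> def coco_to_pascal_voc(bbox):
...     """COCO 圢åŒ [x_min, y_min, width, height] ã Pascal VOC 圢åŒ [x_min, y_min, x_max, y_max] ã«å€æããŸã (ä»®ã®ãã«ããŒ)ã"""
...     x_min, y_min, width, height = bbox
...     return [x_min, y_min, x_min + width, y_min + height]

>>> coco_to_pascal_voc([302.0, 109.0, 73.0, 52.0])
[302.0, 109.0, 375.0, 161.0]
```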
ããŒã¿ãããã«æ·±ãç解ããããã«ãããŒã¿ã»ããå
ã®äŸãèŠèŠåããŸãã
```py
>>> import numpy as np
>>> import os
>>> from PIL import Image, ImageDraw
>>> image = cppe5["train"][0]["image"]
>>> annotations = cppe5["train"][0]["objects"]
>>> draw = ImageDraw.Draw(image)
>>> categories = cppe5["train"].features["objects"].feature["category"].names
>>> id2label = {index: x for index, x in enumerate(categories, start=0)}
>>> label2id = {v: k for k, v in id2label.items()}
>>> for i in range(len(annotations["id"])):
... box = annotations["bbox"][i]
... class_idx = annotations["category"][i]
... x, y, w, h = tuple(box)
... draw.rectangle((x, y, x + w, y + h), outline="red", width=1)
... draw.text((x, y), id2label[class_idx], fill="white")
>>> image
```
<div class="flex justify-center">
<img src="https://i.imgur.com/TdaqPJO.png" alt="CPPE-5 Image Example"/>
</div>
é¢é£ä»ããããã©ãã«ä»ãã§å¢çããã¯ã¹ãèŠèŠåããã«ã¯ãããŒã¿ã»ããã®ã¡ã¿ããŒã¿ãå
·äœçã«ã¯ `category` ãã£ãŒã«ããããã©ãã«ãååŸããŸãããŸããã©ãã« ID ãã©ãã« ã¯ã©ã¹ã«ãããã³ã°ããèŸæž (`id2label`) ãšãã®é (`label2id`) ãäœæããŸãããããã¯ãåŸã§ã¢ãã«ãã»ããã¢ãããããšãã«äœ¿çšã§ããŸãããããã®ããããå«ããŠããã°ãHugging Face Hub ã§ã¢ãã«ãå
±æããå Žåã«ä»ã®äººãã¢ãã«ãåå©çšã§ããããã«ãªããŸãã
ããŒã¿ã«æ
£ããããã®æåŸã®ã¹ããããšããŠãæœåšçãªåé¡ããªããããŒã¿ã調æ»ããŸããç©äœæ€åºçšããŒã¿ã»ããã«é¢ããäžè¬çãªåé¡ã® 1 ã€ã¯ãç»åã®ç«¯ãè¶ããŠã䌞ã³ããå¢çããã¯ã¹ã§ãããã®ãããªãæŽèµ°ãå¢çããã¯ã¹ã¯ããã¬ãŒãã³ã°äžã«ãšã©ãŒãçºçããããããã®æ®µéã§å¯ŸåŠããå¿
èŠããããŸãããã®ããŒã¿ã»ããã«ã¯ããã®åé¡ã«é¢ããäŸãããã€ããããŸãããã®ã¬ã€ãã§ã¯å
容ãããããããããããã«ããããã®ç»åãããŒã¿ããåé€ããŸãã
```py
>>> remove_idx = [590, 821, 822, 875, 876, 878, 879]
>>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx]
>>> cppe5["train"] = cppe5["train"].select(keep)
```
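äžèšã§ã¯åé¡ã®ããã€ã³ããã¯ã¹ãæåã§æå®ããŸããããåèãŸã§ã«ãç»åã®ç¯å²å€ã«ã¯ã¿åºãããã¯ã¹ãæã€äŸã¯æ¬¡ã®ããã«æ©æ¢°çã«æ¢ãããšãã§ããŸã (æ€èšŒçšã®ä»®ã®ã¹ã±ããã§ã)ã

```py
>>> def find_runaway_box_indices(dataset):
...     """å¢çããã¯ã¹ãç»åã®ç¯å²å€ã«ã¯ã¿åºãäŸã®ã€ã³ããã¯ã¹ãè¿ããŸã (ä»®ã®ãã«ããŒ)ã"""
...     bad = []
...     for i, example in enumerate(dataset):
...         width, height = example["width"], example["height"]
...         for x, y, w, h in example["objects"]["bbox"]:
...             if x < 0 or y < 0 or x + w > width or y + h > height:
...                 bad.append(i)
...                 break
...     return bad
```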
## Preprocess the data
ã¢ãã«ã埮調æŽããã«ã¯ãäºåãã¬ãŒãã³ã°æã®ã¢ãããŒããšæ£ç¢ºã«äžèŽããããã«ã䜿çšããããŒã¿ãååŠçããå¿
èŠããããŸãã[`AutoImageProcessor`] ã¯ãç»åããŒã¿ãåŠçããŠãDETR ã¢ãã«ã®ãã¬ãŒãã³ã°ã«äœ¿ãã `pixel_values`ã`pixel_mask`ãããã³ `labels` ãäœæããŸããç»åããã»ããµã«ã¯ãå¿é
ããå¿
èŠã®ãªãããã€ãã®å±æ§ããããŸãã
- `image_mean = [0.485, 0.456, 0.406 ]`
- `image_std = [0.229, 0.224, 0.225]`
ãããã¯ãã¢ãã«ã®äºåãã¬ãŒãã³ã°äžã«ç»åãæ£èŠåããããã«äœ¿çšãããå¹³åãšæšæºåå·®ã§ãããããã®å€ã¯ãäºåãã¬ãŒãã³ã°æžã¿ç»åã¢ãã«ã®æšè«ãŸãã¯åŸ®èª¿æŽãè¡ããšãã«åçŸããããšãéåžžã«éèŠã§ãã

埮調æŽãããã¢ãã«ãšåããã§ãã¯ãã€ã³ããããç»åããã»ããµãã€ã³ã¹ã¿ã³ã¹åããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "facebook/detr-resnet-50"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```
ç»åã`image_processor`ã«æž¡ãåã«ã2 ã€ã®ååŠçå€æãããŒã¿ã»ããã«é©çšããŸãã
- ç»åã®æ¡åŒµ
- DETR ã®æåŸ
ã«åãããæ³šéã®åãã©ãŒããã
ãŸããã¢ãã«ããã¬ãŒãã³ã° ããŒã¿ã«éå°é©åããªãããã«ãä»»æã®ããŒã¿æ¡åŒµã©ã€ãã©ãªã䜿çšããŠç»åæ¡åŒµãé©çšã§ããŸããããã§ã¯ [Albumentations](https://albumentations.ai/docs/) ã䜿çšããŸãããã®ã©ã€ãã©ãªã¯ãå€æãç»åã«é©çšããããšãã«ãããã«å¿ããŠå¢çããã¯ã¹ãæŽæ°ãããããšãä¿èšŒããŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªã®ããã¥ã¡ã³ãã«ã¯è©³çŽ°ãª [ç©äœæ€åºçšã«ç»åãæ¡åŒµããæ¹æ³ã«é¢ããã¬ã€ã](https://huggingface.co/docs/datasets/object_detection) ããããããã§ã¯äŸãšããŠãŸã£ããåãããŒã¿ã»ããã䜿çšãããŠããŸããããã§ãåãã¢ãããŒããé©çšããåç»åã®ãµã€ãºã (480, 480) ã«å€æŽããæ°Žå¹³æ¹åã«å転ããŠæãããããŸãã
```py
>>> import albumentations
>>> import numpy as np
>>> import torch
>>> transform = albumentations.Compose(
... [
... albumentations.Resize(480, 480),
... albumentations.HorizontalFlip(p=1.0),
... albumentations.RandomBrightnessContrast(p=1.0),
... ],
... bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]),
... )
```
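å®çŸ©ããå€æã¯ã1 ã€ã®äŸã«é©çšããŠåäœã確èªã§ããŸã (åäœç¢ºèªçšã®ä»®ã®ã¹ã±ããã§ãåºåã®èŸæžã«ã¯å€æåŸã® `image`ã`bboxes`ã`category` ãå«ãŸããŸã)ã

```py
>>> example = cppe5["train"][0]
>>> out = transform(
...     image=np.array(example["image"].convert("RGB"))[:, :, ::-1],
...     bboxes=example["objects"]["bbox"],
...     category=example["objects"]["category"],
... )
>>> out["image"].shape  # ãªãµã€ãºåŸã®ç»å㯠(480, 480, 3) ã«ãªããŸã
(480, 480, 3)
```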
`image_processor` ã¯ã泚éã次ã®åœ¢åŒã§ããããšãæåŸ
ããŸã: `{'image_id': int, 'annotations': List[Dict]}`ãããã§ãåèŸæžã¯ COCO ãªããžã§ã¯ãã®æ³šéã§ãã1 ã€ã®äŸãšããŠã泚éãåãã©ãŒãããããé¢æ°ãè¿œå ããŠã¿ãŸãããã
```py
>>> def formatted_anns(image_id, category, area, bbox):
... annotations = []
... for i in range(0, len(category)):
... new_ann = {
... "image_id": image_id,
... "category_id": category[i],
... "isCrowd": 0,
... "area": area[i],
... "bbox": list(bbox[i]),
... }
... annotations.append(new_ann)
... return annotations
```
ããã§ãç»åãšæ³šéã®å€æãçµã¿åãããŠãµã³ãã«ã®ãããã§äœ¿çšã§ããããã«ãªããŸããã
```py
>>> # transforming a batch
>>> def transform_aug_ann(examples):
... image_ids = examples["image_id"]
... images, bboxes, area, categories = [], [], [], []
... for image, objects in zip(examples["image"], examples["objects"]):
... image = np.array(image.convert("RGB"))[:, :, ::-1]
... out = transform(image=image, bboxes=objects["bbox"], category=objects["category"])
... area.append(objects["area"])
... images.append(out["image"])
... bboxes.append(out["bboxes"])
... categories.append(out["category"])
... targets = [
... {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)}
... for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)
... ]
... return image_processor(images=images, annotations=targets, return_tensors="pt")
```
ð€ Datasets ã® [`~datasets.Dataset.with_transform`] ã¡ãœããã䜿çšããŠããã®ååŠçé¢æ°ãããŒã¿ã»ããå
šäœã«é©çšããŸãããã®ã¡ãœããã¯ãããŒã¿ã»ããã®èŠçŽ ãèªã¿èŸŒããšãã«ããã®å Žã§å€æãé©çšããŸãã
ãã®æç¹ã§ãå€æåŸã®ããŒã¿ã»ããã®äŸãã©ã®ããã«ãªããã確èªã§ããŸãã`pixel_values` ã®ãã³ãœã«ã`pixel_mask` ã®ãã³ãœã«ãããã³ `labels` ã衚瀺ãããã¯ãã§ãã
```py
>>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann)
>>> cppe5["train"][15]
{'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809],
[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809],
[ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638],
...,
[-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980],
[-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809],
[-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]],
[[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431],
[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431],
[ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256],
...,
[-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606],
[-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431],
[-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]],
[[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476],
[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476],
[ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302],
...,
[-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604],
[-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430],
[-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]),
'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
...,
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1]]),
'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}}
```
åã
ã®ç»åã®æ¡åŒµãšãããã«å¯Ÿå¿ãã泚éã®æºåã¯æ£åžžã«å®äºããŸããããã ããååŠçã¯ãŸã å®äºããŠããŸãããæåŸã®ã¹ãããã§ã¯ãç»åããããåŠçããããã®ã«ã¹ã¿ã  `collate_fn` ãäœæããŸããç»å (çŸåšã¯ `pixel_values`) ããããå
ã®æ倧ã®ç»åã«åãããŠããã£ã³ã°ããã©ã®ãã¯ã»ã«ãå®éã®ç»å (1) ã§ãã©ã®ãã¯ã»ã«ãããã£ã³ã° (0) ã§ãããã瀺ã察å¿ãã `pixel_mask` ãäœæããŸãã
```py
>>> def collate_fn(batch):
... pixel_values = [item["pixel_values"] for item in batch]
... encoding = image_processor.pad(pixel_values, return_tensors="pt")
... labels = [item["labels"] for item in batch]
... batch = {}
... batch["pixel_values"] = encoding["pixel_values"]
... batch["pixel_mask"] = encoding["pixel_mask"]
... batch["labels"] = labels
... return batch
```
## Training the DETR model
åã®ã»ã¯ã·ã§ã³ã§å€§å€ãªäœæ¥ã®ã»ãšãã©ãå®äºããã®ã§ãã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããããã®ããŒã¿ã»ããå
ã®ç»åã¯ããµã€ãºãå€æŽããåŸã§ãäŸç¶ãšããŠéåžžã«å€§ããã§ããã€ãŸãããã®ã¢ãã«ã埮調æŽããã«ã¯ãå°ãªããšã 1 ã€ã® GPU ãå¿
èŠã«ãªããŸãã
ãã¬ãŒãã³ã°ã«ã¯æ¬¡ã®æé ãå«ãŸããŸãã
1. ååŠçãšåããã§ãã¯ãã€ã³ãã䜿çšããŠã[`AutoModelForObjectDetection`] ã§ã¢ãã«ãèªã¿èŸŒã¿ãŸãã
2. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸãã
3. ãã¬ãŒãã³ã°åŒæ°ãã¢ãã«ãããŒã¿ã»ãããç»åããã»ããµãããŒã¿ç
§ååšãšãšãã« [`Trainer`] ã«æž¡ããŸãã
4. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
ååŠçã«äœ¿çšããã®ãšåããã§ãã¯ãã€ã³ãããã¢ãã«ãããŒããããšãã¯ã以åã«ããŒã¿ã»ããã®ã¡ã¿ããŒã¿ããäœæãã `label2id` ããã³ `id2label` ããããå¿ãæž¡ããŠãã ãããããã«ã`ignore_mismatched_sizes=True` ãæå®ããŠãæ¢åã®åé¡ããããæ°ããåé¡ãããã«çœ®ãæããŸãã
```py
>>> from transformers import AutoModelForObjectDetection
>>> model = AutoModelForObjectDetection.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... ignore_mismatched_sizes=True,
... )
```
[`TrainingArguments`] ã§ã`output_dir` ã䜿çšããŠã¢ãã«ã®ä¿åå Žæãæå®ããå¿
èŠã«å¿ããŠãã€ããŒãã©ã¡ãŒã¿ãŒãæ§æããŸããæªäœ¿çšã®åãåé€ããªãããšãéèŠã§ããæªäœ¿çšã®åãåé€ãããšç»ååãåé€ãããŠããŸããç»ååããªããš `pixel_values` ãäœæã§ããªãããã`remove_unused_columns` 㯠`False` ã«èšå®ããŸããããã«ããã·ã¥ããŠã¢ãã«ãå
±æãããå Žåã¯ã`push_to_hub` ã `True` ã«èšå®ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ã
```py
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(
... output_dir="detr-resnet-50_finetuned_cppe5",
... per_device_train_batch_size=8,
... num_train_epochs=10,
... fp16=True,
... save_steps=200,
... logging_steps=50,
... learning_rate=1e-5,
... weight_decay=1e-4,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
æåŸã«ããã¹ãŠããŸãšããŠã[`~transformers.Trainer.train`] ãåŒã³åºããŸãã
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=collate_fn,
... train_dataset=cppe5["train"],
... tokenizer=image_processor,
... )
>>> trainer.train()
```
`training_args` ã§ `push_to_hub` ã `True` ã«èšå®ããå Žåããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ã㯠Hugging Face Hub ã«ããã·ã¥ãããŸãããã¬ãŒãã³ã°ãå®äºãããã[`~transformers.Trainer.push_to_hub`] ã¡ãœãããåŒã³åºããŠãæçµã¢ãã«ãããã«ããã·ã¥ããŸãã
```py
>>> trainer.push_to_hub()
```
## Evaluate
ç©äœæ€åºã¢ãã«ã¯éåžžãäžé£ã® <a href="https://cocodataset.org/#detection-eval">COCO ã¹ã¿ã€ã«ã®ææš</a>ã§è©äŸ¡ãããŸããæ¢åã®ã¡ããªã¯ã¹å®è£
ã®ããããã䜿çšã§ããŸãããããã§ã¯ `torchvision` ã®å®è£
ã䜿çšããŠãããã«ããã·ã¥ããæçµã¢ãã«ãè©äŸ¡ããŸãã

`torchvision` ã®ãšããªã¥ãšãŒã¿ãŒã䜿çšããã«ã¯ãã°ã©ãŠã³ã ãã¥ã«ãŒã¹ã® COCO ããŒã¿ã»ãããæºåããå¿
èŠããããŸããCOCO ããŒã¿ã»ãããæ§ç¯ãã API ã§ã¯ããŒã¿ãç¹å®ã®åœ¢åŒã§ä¿åãããŠããå¿
èŠããããããæåã«ç»åãšæ³šéããã£ã¹ã¯ã«ä¿åããŸãããã¬ãŒãã³ã°çšã«ããŒã¿ãæºåãããšããšåãããã«ã`cppe5["test"]` ã®æ³šéããã©ãŒãããããå¿
èŠããããŸãããã ããç»åã¯ãã®ãŸãŸã«ããŠãããŸãã
è©äŸ¡ã¹ãããã«ã¯å°ãäœæ¥ãå¿
èŠã§ããã倧ãã 3 ã€ã®ã¹ãããã«åããããšãã§ããŸãããŸãã`cppe5["test"]` ã»ãããæºåããŸããã€ãŸãã泚éããã©ãŒãããããããŒã¿ããã£ã¹ã¯ã«ä¿åããŸãã
```py
>>> import json
>>> # format annotations the same as for training, no need for data augmentation
>>> def val_formatted_anns(image_id, objects):
... annotations = []
... for i in range(0, len(objects["id"])):
... new_ann = {
... "id": objects["id"][i],
... "category_id": objects["category"][i],
... "iscrowd": 0,
... "image_id": image_id,
... "area": objects["area"][i],
... "bbox": objects["bbox"][i],
... }
... annotations.append(new_ann)
... return annotations
>>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects
>>> def save_cppe5_annotation_file_images(cppe5):
... output_json = {}
... path_output_cppe5 = f"{os.getcwd()}/cppe5/"
... if not os.path.exists(path_output_cppe5):
... os.makedirs(path_output_cppe5)
... path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json")
... categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label]
... output_json["images"] = []
... output_json["annotations"] = []
... for example in cppe5:
... ann = val_formatted_anns(example["image_id"], example["objects"])
... output_json["images"].append(
... {
... "id": example["image_id"],
... "width": example["image"].width,
... "height": example["image"].height,
... "file_name": f"{example['image_id']}.png",
... }
... )
... output_json["annotations"].extend(ann)
... output_json["categories"] = categories_json
... with open(path_anno, "w") as file:
... json.dump(output_json, file, ensure_ascii=False, indent=4)
... for im, img_id in zip(cppe5["image"], cppe5["image_id"]):
... path_img = os.path.join(path_output_cppe5, f"{img_id}.png")
... im.save(path_img)
... return path_output_cppe5, path_anno
```
次ã«ã`cocoevaluator`ã§å©çšã§ãã`CocoDetection`ã¯ã©ã¹ã®ã€ã³ã¹ã¿ã³ã¹ãçšæããŸãã
```py
>>> import torchvision
>>> class CocoDetection(torchvision.datasets.CocoDetection):
... def __init__(self, img_folder, image_processor, ann_file):
... super().__init__(img_folder, ann_file)
... self.image_processor = image_processor
... def __getitem__(self, idx):
... # read in PIL image and target in COCO format
... img, target = super(CocoDetection, self).__getitem__(idx)
... # preprocess image and target: converting target to DETR format,
... # resizing + normalization of both image and target)
... image_id = self.ids[idx]
... target = {"image_id": image_id, "annotations": target}
... encoding = self.image_processor(images=img, annotations=target, return_tensors="pt")
... pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension
... target = encoding["labels"][0] # remove batch dimension
... return {"pixel_values": pixel_values, "labels": target}
>>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"])
>>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno)
```
æåŸã«ãã¡ããªã¯ã¹ãããŒãããŠè©äŸ¡ãå®è¡ããŸãã
```py
>>> import evaluate
>>> from tqdm import tqdm
>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco)
>>> val_dataloader = torch.utils.data.DataLoader(
... test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn
... )
>>> with torch.no_grad():
... for idx, batch in enumerate(tqdm(val_dataloader)):
... pixel_values = batch["pixel_values"]
... pixel_mask = batch["pixel_mask"]
... labels = [
... {k: v for k, v in t.items()} for t in batch["labels"]
... ] # these are in DETR format, resized + normalized
... # forward pass
... outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask)
... orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0)
... results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to Pascal VOC format (xmin, ymin, xmax, ymax)
... module.add(prediction=results, reference=labels)
... del batch
>>> results = module.compute()
>>> print(results)
Accumulating evaluation results...
DONE (t=0.08s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590
```
ãããã®çµæã¯ã[`~transformers.TrainingArguments`] ã®ãã€ããŒãã©ã¡ãŒã¿ã調æŽããããšã§ããã«æ¹åã§ããŸãããã²è©ŠããŠã¿ãŠãã ããïŒ
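ããšãã°ããšããã¯æ°ãåŠç¿çãåŠç¿çã¹ã±ãžã¥ãŒã©ãå€æŽããã ãã§ã AP ãæ¹åããå ŽåããããŸãã以äžã¯èª¿æŽäŸã® 1 ã€ã§ã (å€ã¯ä»®ã®ãã®ã§ãæé©ãªèšå®ã¯å®éšã§æ¢ãå¿
èŠããããŸã)ã

```py
>>> training_args = TrainingArguments(
...     output_dir="detr-resnet-50_finetuned_cppe5",
...     per_device_train_batch_size=8,
...     num_train_epochs=30,  # ä»®: ãšããã¯æ°ãå¢ãã
...     learning_rate=5e-5,  # ä»®: åŠç¿çãäžãã
...     lr_scheduler_type="cosine",  # ä»®: ã³ãµã€ã³æžè¡°ã¹ã±ãžã¥ãŒã©ã詊ã
...     weight_decay=1e-4,
...     fp16=True,
...     save_total_limit=2,
...     remove_unused_columns=False,
... )
```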
## Inference
DETR ã¢ãã«ã埮調æŽããŠè©äŸ¡ããHugging Face Hub ã«ã¢ããããŒãããã®ã§ããããæšè«ã«äœ¿çšã§ããŸãã
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ãããã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åãã
ã¢ãã«ã䜿çšããŠãªããžã§ã¯ããæ€åºããããã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> import requests
>>> url = "https://i.imgur.com/2lnWoly.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5")
>>> obj_detector(image)
```
å¿
èŠã«å¿ããŠããã€ãã©ã€ã³ã®çµæãæåã§åçŸããããšãã§ããŸãã
```py
>>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5")
>>> with torch.no_grad():
... inputs = image_processor(images=image, return_tensors="pt")
... outputs = model(**inputs)
... target_sizes = torch.tensor([image.size[::-1]])
... results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08]
Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9]
```
çµæãããããããŠã¿ãŸããã:
```py
>>> draw = ImageDraw.Draw(image)
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... x, y, x2, y2 = tuple(box)
... draw.rectangle((x, y, x2, y2), outline="red", width=1)
... draw.text((x, y), model.config.id2label[label.item()], fill="white")
>>> image
```
<div class="flex justify-center">
<img src="https://i.imgur.com/4QZnf9A.png" alt="Object detection result on a new image"/>
</div>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Audio classification
[[open-in-colab]]
<Youtube id="KWwzcmG98Ds"/>
é³å£°åé¡ã§ã¯ãããã¹ããšåæ§ã«ãå
¥åããŒã¿ããåºåãããã¯ã©ã¹ ã©ãã«ãå²ãåœãŠãŸããå¯äžã®éãã¯ãããã¹ãå
¥åã®ä»£ããã«çã®ãªãŒãã£ãªæ³¢åœ¢ãããããšã§ããé³å£°åé¡ã®å®éçãªå¿çšäŸã«ã¯ã話è
ã®æå³ã®èå¥ãèšèªåé¡ãããã«ã¯é³ã«ããåç©ã®çš®é¡ã®èå¥ãªã©ããããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ããŒã¿ã»ãã㧠[Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) ã埮調æŽããŠè©±è
ã®æå³ãåé¡ããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/audio-classification) ã確èªããããšããå§ãããŸãã
</Tip>
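å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã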
```bash
pip install transformers datasets evaluate
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå
±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset
ãŸããð€ ããŒã¿ã»ãã ã©ã€ãã©ãªãã MInDS-14 ããŒã¿ã»ãããããŒãããŸãã
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` ãããå°ããªãã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸããããã«ãããå®å
šãªããŒã¿ã»ããã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
次ã«ãããŒã¿ã»ãããèŠãŠã¿ãŸãããã
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 450
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 113
})
})
```
ããŒã¿ã»ããã«ã¯ `lang_id` ã `english_transcription` ãªã©ã®å€ãã®æçšãªæ
å ±ãå«ãŸããŠããŸããããã®ã¬ã€ãã§ã¯ `audio` ãš `intent_class` ã«çŠç¹ãåœãŠãŸãã[`~datasets.Dataset.remove_columns`] ã¡ãœããã䜿çšããŠä»ã®åãåé€ããŸãã
```py
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
```
ããã§äŸãèŠãŠã¿ãŸãããã
```py
>>> minds["train"][0]
{'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828,
-0.00024414, -0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 8000},
'intent_class': 2}
```
次㮠2 ã€ã®ãã£ãŒã«ãããããŸãã
- `audio`: é³å£°ãã¡ã€ã«ãããŒãããŠãªãµã³ããªã³ã°ããããã«åŒã³åºãå¿
èŠãããé³å£°ä¿¡å·ã® 1 次å
ã® `array`ã
- `intent_class`: ã¹ããŒã«ãŒã®ã€ã³ãã³ãã®ã¯ã©ã¹ ID ãè¡šããŸãã
ã¢ãã«ãã©ãã« ID ããã©ãã«åãååŸã§ããããã«ãã©ãã«åãæŽæ°ã«ãããã³ãã®éã«ãããã³ã°ããèŸæžãäœæããŸãã
```py
>>> labels = minds["train"].features["intent_class"].names
>>> label2id, id2label = dict(), dict()
>>> for i, label in enumerate(labels):
... label2id[label] = str(i)
... id2label[str(i)] = label
```
ããã§ãã©ãã« ID ãã©ãã«åã«å€æã§ããããã«ãªããŸããã
```py
>>> id2label[str(2)]
'app_error'
```
## Preprocess
次ã®ã¹ãããã§ã¯ãWav2Vec2 ç¹åŸŽæœåºããã°ã©ã ãããŒãããŠãªãŒãã£ãªä¿¡å·ãåŠçããŸãã
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
MInDS-14 ããŒã¿ã»ããã®ãµã³ããªã³ã° ã¬ãŒã㯠8000Hz ã§ã (ãã®æ
å ±ã¯ [ããŒã¿ã»ãã ã«ãŒã](https://huggingface.co/datasets/PolyAI/minds14) ã§ç¢ºèªã§ããŸã)ãã€ãŸããäºåãã¬ãŒãã³ã°ããã Wav2Vec2 ã¢ãã«ã䜿çšããã«ã¯ãããŒã¿ã»ããã 16000Hz ã«åãµã³ããªã³ã°ããå¿
èŠããããŸãã
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ...,
-2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
'sampling_rate': 16000},
'intent_class': 2}
```
次ã«ã次ã®ååŠçé¢æ°ãäœæããŸãã
1. `audio` åãåŒã³åºããŠããŒãããå¿
èŠã«å¿ããŠãªãŒãã£ãª ãã¡ã€ã«ããªãµã³ããªã³ã°ããŸãã
2. ãªãŒãã£ãª ãã¡ã€ã«ã®ãµã³ããªã³ã° ã¬ãŒãããã¢ãã«ã®äºåãã¬ãŒãã³ã°ã«äœ¿çšããããªãŒãã£ãª ããŒã¿ã®ãµã³ããªã³ã° ã¬ãŒããšäžèŽãããã確èªããŸãããã®æ
å ±ã¯ãWav2Vec2 ã® [ã¢ãã« ã«ãŒã](https://huggingface.co/facebook/wav2vec2-base) ã§ç¢ºèªã§ããŸãã
3. é·ãå
¥åãåãæšãŠãã«ãããåŠçã§ããããã«ãå
¥åã®æ倧é·ãèšå®ããŸãã
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
... )
... return inputs
```
ããŒã¿ã»ããå
šäœã«ååŠçé¢æ°ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.map`] é¢æ°ã䜿çšããŸãã`batched=True` ãèšå®ããŠããŒã¿ã»ããã®è€æ°ã®èŠçŽ ãäžåºŠã«åŠçããããšã§ã`map` ãé«éåã§ããŸããäžèŠãªåãåé€ãã`intent_class` ã®ååã `label` ã«å€æŽããŸããããã¯ã¢ãã«ãæåŸ
ããååã§ããããã§ãã
```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸããð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšãããšãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ã¡ããªã¯ã¹ãèªã¿èŸŒã¿ãŸã (ã¡ããªã¯ã¹ã®ããŒããšèšç®æ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ãð€ Evaluate ã® [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç
§ããŠãã ãã)ã
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
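åäœã確èªãããå Žåã¯ã粟床ã¡ããªã¯ã¹ã次ã®ããã«çŽæ¥åŒã³åºãããšãã§ããŸã (ã©ãã«å€ã¯ä»®ã®ãã®ã§ã)ã

```py
>>> # 3 件äž 2 件ãæ£è§£ãªã®ã§ç²ŸåºŠã¯ 2/3 ã«ãªããŸã
>>> accuracy.compute(predictions=[0, 1, 1], references=[0, 1, 0])
{'accuracy': 0.6666666666666666}
```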
次ã«ãäºæž¬ãšã©ãã«ã [`~evaluate.EvaluationModule.compute`] ã«æž¡ããŠç²ŸåºŠãèšç®ããé¢æ°ãäœæããŸãã
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions = np.argmax(eval_pred.predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```
ããã§`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ã[ãã¡ã](../training#train-with-pytorch-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForAudioClassification`] ã䜿çšããŠãäºæãããã©ãã«ã®æ°ãšã©ãã« ãããã³ã°ã䜿çšã㊠Wav2Vec2 ãèªã¿èŸŒã¿ãŸãã
```py
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
>>> num_labels = len(id2label)
>>> model = AutoModelForAudioClassification.from_pretrained(
... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
... )
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããå¯äžã®å¿
é ãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã§ãã`push_to_hub=True` ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿
èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] ã¯ç²ŸåºŠãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç
§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_mind_model",
... eval_strategy="epoch",
... save_strategy="epoch",
... learning_rate=3e-5,
... per_device_train_batch_size=32,
... gradient_accumulation_steps=4,
... per_device_eval_batch_size=32,
... num_train_epochs=10,
... warmup_ratio=0.1,
... logging_steps=10,
... load_best_model_at_end=True,
... metric_for_best_model="accuracy",
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=feature_extractor,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã[`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å
±æãã誰ããã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
é³å£°åé¡çšã®ã¢ãã«ã埮調æŽããæ¹æ³ã®è©³çŽ°ãªäŸã«ã€ããŠã¯ã察å¿ãã [PyTorch ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb) ãåç
§ããŠãã ããã
</Tip>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ãå®è¡ãããé³å£°ãã¡ã€ã«ãããŒãããŸããå¿
èŠã«å¿ããŠããªãŒãã£ãª ãã¡ã€ã«ã®ãµã³ããªã³ã° ã¬ãŒããã¢ãã«ã®ãµã³ããªã³ã° ã¬ãŒããšäžèŽããããã«ãªãµã³ããªã³ã°ããããšãå¿ããªãã§ãã ããã
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠé³å£°åé¡çšã®`pipeline`ãã€ã³ã¹ã¿ã³ã¹åããããã«é³å£°ãã¡ã€ã«ãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
>>> classifier(audio_file)
[
{'score': 0.09766869246959686, 'label': 'cash_deposit'},
{'score': 0.07998877018690109, 'label': 'app_error'},
{'score': 0.0781070664525032, 'label': 'joint_account'},
{'score': 0.07667109370231628, 'label': 'pay_bill'},
{'score': 0.0755252093076706, 'label': 'balance'}
]
```
å¿
èŠã«å¿ããŠã`pipeline` ã®çµæãæåã§åçŸããããšãã§ããŸãã
<frameworkcontent>
<pt>
ç¹åŸŽæœåºåšãããŒãããŠãªãŒãã£ãª ãã¡ã€ã«ãååŠçãã`input`ã PyTorch ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
å
¥åãã¢ãã«ã«æž¡ããããžãããè¿ããŸãã
```py
>>> import torch
>>> from transformers import AutoModelForAudioClassification
>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
æãé«ã確çã§ã¯ã©ã¹ãååŸããã¢ãã«ã® `id2label` ãããã³ã°ã䜿çšããŠãããã©ãã«ã«å€æããŸãã
```py
>>> import torch
>>> predicted_class_ids = torch.argmax(logits).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'cash_deposit'
```
</pt>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Zero-shot image classification
[[open-in-colab]]
ãŒãã·ã§ããç»ååé¡ã¯ãç¹å®ã®ã«ããŽãªã®ã©ãã«ä»ãã®äŸãå«ãããŒã¿ã§æ瀺çã«ãã¬ãŒãã³ã°ãããŠããªãã¢ãã«ã䜿çšããŠãç»åãããŸããŸãªã«ããŽãªã«åé¡ããã¿ã¹ã¯ã§ãã

åŸæ¥ãç»ååé¡ã«ã¯ãã©ãã«ä»ãç»åã®ç¹å®ã®ã»ããã§ã¢ãã«ããã¬ãŒãã³ã°ããå¿
èŠãããããã®ã¢ãã«ã¯ç¹å®ã®ç»åã®ç¹åŸŽãã©ãã«ã«ããããã³ã°ããããšãåŠç¿ããŸãããã®ãããªã¢ãã«ãæ°ããã©ãã« ã»ããã®åé¡ã¿ã¹ã¯ã«äœ¿çšããå¿
èŠãããå Žåã¯ãã¢ãã«ãã忝èŒæ£ãããããã®åŸ®èª¿æŽãå¿
èŠã§ãã

察ç
§çã«ããŒãã·ã§ãããŸãã¯ãªãŒãã³èªåœç»ååé¡ã¢ãã«ã¯ãéåžžãç»åãšé¢é£ãã説æã®å€§èŠæš¡ãªããŒã¿ã»ããã§ãã¬ãŒãã³ã°ããããã«ãã¢ãŒãã« ã¢ãã«ã§ãããããã®ã¢ãã«ã¯ãæŽåããèŠèŠèšèªè¡šçŸãåŠç¿ãããã®è¡šçŸã¯ãŒãã·ã§ããç»ååé¡ãå«ãå€ãã®äžæµã¿ã¹ã¯ã«äœ¿çšã§ããŸãã

ããã¯ãç»ååé¡ã«å¯Ÿããããæè»ãªã¢ãããŒãã§ããè¿œå ã®ãã¬ãŒãã³ã° ããŒã¿ãªãã§ã¢ãã«ãæ°ãããŸã èŠãããšã®ãªãã«ããŽãªã«äžè¬åã§ããããŠãŒã¶ãŒã¯ã¿ãŒã²ãã ãªããžã§ã¯ãã®èªç±åœ¢åŒã®ããã¹ã説æã§ç»åãã¯ãšãªã§ããããã«ãªããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ãåŠã³ãŸãã
* ãŒãã·ã§ããç»ååé¡ãã€ãã©ã€ã³ãäœæãã
* æåã§ãŒãã·ã§ããç»ååé¡æšè«ãå®è¡ããŸã
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q transformers
```
## Zero-shot image classification pipeline
ãŒãã·ã§ããç»ååé¡ããµããŒãããã¢ãã«ã§æšè«ãè©Šãæãç°¡åãªæ¹æ³ã¯ã察å¿ãã [`pipeline`] ã䜿çšããããšã§ãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads) ãããã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åããŸãã
```python
>>> from transformers import pipeline
>>> checkpoint = "openai/clip-vit-large-patch14"
>>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification")
```
次ã«ãåé¡ãããç»åãéžæããŸãã
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/>
</div>
ç»åãšåè£ãªããžã§ã¯ãã®ã©ãã«ããã€ãã©ã€ã³ã«æž¡ããŸããããã§ã¯ç»åãçŽæ¥æž¡ããŸããããä»ã®é©åãªãªãã·ã§ã³ãšããŠãç»åãžã®ããŒã«ã« ãã¹ãç»å URL ãæž¡ãããšãã§ããŸããåè£ã©ãã«ã¯ããã®äŸã®ããã«åçŽãªåèªã«ããããšãããã説æçãªèšèã«ããããšãã§ããŸãã
```py
>>> predictions = detector(image, candidate_labels=["fox", "bear", "seagull", "owl"])
>>> predictions
[{'score': 0.9996670484542847, 'label': 'owl'},
{'score': 0.000199399160919711, 'label': 'seagull'},
{'score': 7.392891711788252e-05, 'label': 'fox'},
{'score': 5.96074532950297e-05, 'label': 'bear'}]
```
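åè£ã©ãã«ã¯ãCLIP ã§ãã䜿ãããããã³ãã ãã³ãã¬ãŒãã®ããã«ããã説æçãªæã«ããããšãã§ããŸã (以äžã¯ä»®ã®ã©ãã«äŸã§ãã¹ã³ã¢ã¯ã¢ãã«ãšã©ãã«ã®æžãæ¹ã«ãã£ãŠå€ãããŸã)ã

```py
>>> predictions = detector(
...     image,
...     candidate_labels=["a photo of an owl", "a photo of a seagull", "a photo of a fox", "a photo of a bear"],
... )
```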
## Zero-shot image classification by hand
ãŒãã·ã§ããç»ååé¡ãã€ãã©ã€ã³ã®äœ¿çšæ¹æ³ãç解ãããšããã§ã次ã¯ãŒãã·ã§ããç»ååé¡ãæåã§å®è¡ããæ¹æ³ãèŠãŠã¿ãŸãããã
ãŸãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads) ããã¢ãã«ãšé¢é£ããã»ããµãããŒãããŸãã
ããã§ã¯ãåãšåããã§ãã¯ãã€ã³ãã䜿çšããŸãã
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
æ°åãå€ããŠãå
ã»ã©ãšã¯å¥ã®ç»åã䜿ã£ãŠã¿ãŸãããã
```py
>>> from PIL import Image
>>> import requests
>>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/>
</div>
ããã»ããµã䜿çšããŠã¢ãã«ã®å
¥åãæºåããŸããããã»ããµã¯ããµã€ãºå€æŽãšæ£èŠåã«ãã£ãŠã¢ãã«çšã«ç»åãæºåããç»åããã»ããµãšãããã¹ãå
¥åãåŠçããããŒã¯ãã€ã¶ãŒãçµã¿åããããã®ã§ãã
```py
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```
å
¥åãã¢ãã«ã«æž¡ããçµæãåŸåŠçããŸãã
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()
>>> result = [
... {"score": score, "label": candidate_label}
...     for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])
... ]
>>> result
[{'score': 0.998572, 'label': 'car'},
{'score': 0.0010570387, 'label': 'bike'},
{'score': 0.0003393686, 'label': 'tree'},
{'score': 3.1572064e-05, 'label': 'cat'}]
```
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Document Question Answering
[[open-in-colab]]
ææžã«ãã質åå¿ç (ææžã«ããèŠèŠç質åå¿çãšãåŒã°ããŸã) ã¯ãããã¥ã¡ã³ãç»åã«é¢ããŠåºããã質åã«åçãæäŸããã¿ã¹ã¯ã§ãããã®ã¿ã¹ã¯ããµããŒãããã¢ãã«ãžã®å
¥åã¯éåžžãç»åãšè³ªåã®çµã¿åããã§ãããåºåã¯èªç¶èšèªã§è¡šçŸãããåçã§ãããããã®ã¢ãã«ã¯ãããã¹ããåèªã®äœçœ® (å¢çããã¯ã¹)ãç»åèªäœãªã©ãè€æ°ã®ã¢ããªãã£ãå©çšããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
- [DocVQA ããŒã¿ã»ãã](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut) 㧠[LayoutLMv2](../model_doc/layoutlmv2) ã埮調æŽããŸãã
- 埮調æŽãããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/image-to-text) ã確èªããããšããå§ãããŸãã
</Tip>
LayoutLMv2 ã¯ãããŒã¯ã³ã®æçµé ãç¶æ
ã®äžã«è³ªåå¿çããããè¿œå ããåçã®éå§ããŒã¯ã³ãšçµäºããŒã¯ã³ã®äœçœ®ãäºæž¬ããããšã§ãããã¥ã¡ã³ãã®è³ªåå¿çã¿ã¹ã¯ã解決ããŸããã€ãŸãããã®åé¡ã¯æœåºç質åå¿ç (ã³ã³ããã¹ããèæ
®ããŠã質åã«çããæ
å ±ãã©ã®éšåããæœåºãããã) ãšããŠæ±ãããŸããã³ã³ããã¹ã㯠OCR ãšã³ãžã³ã®åºåããååŸãããŸããããã§ã¯ Google ã® Tesseract ã䜿çšããŸãã
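éå§/çµäºäœçœ®ã®äºæž¬ãã©ã®ããã«åçã¹ãã³ã«ãªããã®çŽæã¯ã次ã®æå°ã¹ã±ããã§ç¢ºèªã§ããŸã (ããžããã®å€ã¯ä»®ã®ãã®ã§ã)ã

```py
import torch

# ä»®ã®ããžãã: åèŠçŽ ãã³ã³ããã¹ãå
ã®åããŒã¯ã³ã«å¯Ÿå¿ããŸã
start_logits = torch.tensor([0.1, 0.2, 3.5, 0.3, 0.1])  # åçéå§äœçœ®ã®ã¹ã³ã¢
end_logits = torch.tensor([0.1, 0.1, 0.2, 2.9, 0.4])  # åççµäºäœçœ®ã®ã¹ã³ã¢

start_index = int(start_logits.argmax())  # ãã®äŸã§ã¯ 2
end_index = int(end_logits.argmax())  # ãã®äŸã§ã¯ 3
# ããŒã¯ã³ start_index..end_index ã®ç¯å²ããã³ãŒããããã®ãåçæååã«ãªããŸã
```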
å§ããåã«ãå¿
èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ãããLayoutLMv2 㯠detectron2ãtorchvisionãtesseract ã«äŸåããŠããŸãã
```bash
pip install -q transformers datasets
```
```bash
pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install torchvision
```
```bash
sudo apt install tesseract-ocr
pip install -q pytesseract
```
ãã¹ãŠã®äŸåé¢ä¿ãã€ã³ã¹ããŒã«ããããã©ã³ã¿ã€ã ãåèµ·åããŸãã
ã¢ãã«ãã³ãã¥ããã£ãšå
±æããããšããå§ãããŸããHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããŠãð€ ããã«ã¢ããããŒãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå
¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
ããã€ãã®ã°ããŒãã«å€æ°ãå®çŸ©ããŸãããã
```py
>>> model_checkpoint = "microsoft/layoutlmv2-base-uncased"
>>> batch_size = 4
```
## Load the data
ãã®ã¬ã€ãã§ã¯ãð€ Hub ã«ããååŠçæžã¿ DocVQA ã®å°ããªãµã³ãã«ã䜿çšããŸããå®å
šãª DocVQA ããŒã¿ã»ããã䜿çšãããå Žåã¯ã[DocVQA ããŒã ããŒãž](https://rrc.cvc.uab.es/?ch=17) ã§ç»é²ããŠããŠã³ããŒãã§ããŸãããã®å Žåããã®ã¬ã€ããé²ããã«ã¯ã[ð€ ããŒã¿ã»ããã«ãã¡ã€ã«ãããŒãããæ¹æ³](https://huggingface.co/docs/datasets/loading#local-and-remote-files) ã確èªããŠãã ããã
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("nielsr/docvqa_1200_examples")
>>> dataset
DatasetDict({
train: Dataset({
features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
num_rows: 1000
})
test: Dataset({
features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
num_rows: 200
})
})
```
ã芧ã®ãšãããããŒã¿ã»ããã¯ãã§ã«ãã¬ãŒãã³ã° ã»ãããšãã¹ã ã»ããã«åå²ãããŠããŸããã©ã³ãã ãªäŸãèŠãŠãæ©èœã確èªããŠã¿ãŸãããã
```py
>>> dataset["train"].features
```
åãã®ãã£ãŒã«ããè¡šãå容ã¯æ¬¡ã®ãšããã§ãã
* `id`: ãµã³ãã«ã®ID
* `image`: ããã¥ã¡ã³ãç»åãå«ã PIL.Image.Image ãªããžã§ã¯ã
* `query`: 質åæåå - ããã€ãã®èšèªã§ã®èªç¶èšèªã«ãã質å
* `answers`: ãã¥ãŒãã³ ã¢ãããŒã¿ãŒã«ãã£ãŠæäŸãããæ£è§£ã®ãªã¹ã
* `words` ãš `bounding_boxes`: OCR ã®çµæãããã§ã¯äœ¿çšããŸããã
* `answer`: å¥ã®ã¢ãã«ãšäžèŽããçããããã§ã¯äœ¿çšããŸããã
è±èªã®è³ªåã ããæ®ããå¥ã®ã¢ãã«ã«ããäºæž¬ãå«ãŸããŠãããšæããã`answer`æ©èœãåé€ããŸãããã
ãŸããã¢ãããŒã¿ãŒã«ãã£ãŠæäŸãããã»ããããæåã®åçãååŸããŸãããããã¯ãã©ã³ãã ã«ãµã³ããªã³ã°ããããšãã§ããŸãã
```py
>>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"])
>>> updated_dataset = updated_dataset.map(
... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"]
... )
```
ãã®ã¬ã€ãã§äœ¿çšãã LayoutLMv2 ãã§ãã¯ãã€ã³ãã¯ã`max_position_embeddings = 512` ã§ãã¬ãŒãã³ã°ãããŠããããšã«æ³šæããŠãã ãã (ãã®æå ±ã¯ã[ãã§ãã¯ãã€ã³ãã® `config.json` ãã¡ã€ã«](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18) ã§ç¢ºèªã§ããŸã)ã
äŸãçç¥ããããšãã§ããŸãããçãã倧ããªææžã®æåŸã«ãããçµå±çç¥ãããŠããŸããšããç¶æ³ãé¿ããããã«ãããã§ã¯ãåã蟌ã¿ã 512 ãè¶ããå¯èœæ§ãããããã€ãã®äŸãåé€ããŸãã
ããŒã¿ã»ããåã®ã»ãšãã©ã®ããã¥ã¡ã³ããé·ãå Žåã¯ãã¹ã©ã€ãã£ã³ã° ãŠã£ã³ããŠæŠç¥ãå®è£ã§ããŸãã詳现ã«ã€ããŠã¯ã[ãã®ããŒãããã¯](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ã確èªããŠãã ããã
```py
>>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512)
```
ãã®æç¹ã§ããã®ããŒã¿ã»ãããã OCR æ©èœãåé€ããŸããããããã¯ãç°ãªãã¢ãã«ã埮調æŽããããã® OCR ã®çµæã§ãããããã¯ããã®ã¬ã€ãã§äœ¿çšããã¢ãã«ã®å¥åèŠä»¶ãšäžèŽããªãããã䜿çšãããå Žåã¯ããã«åŠçãå¿èŠã«ãªããŸãã代ããã«ãåã®ããŒã¿ã«å¯Ÿã㊠[`LayoutLMv2Processor`] ã䜿çšããŠãOCR ãšããŒã¯ã³åã®äž¡æ¹ãè¡ãããšãã§ããŸãããã®ããã«ããŠãã¢ãã«ã®äºæ³ãããå¥åãšäžèŽããå¥åãååŸããŸããç»åãæåã§å å·¥ãããå Žåã¯ãã¢ãã«ãã©ã®ãããªå¥å圢åŒãæ³å®ããŠããããç¥ãã«ã¯ã[`LayoutLMv2` ã¢ãã«ã®ããã¥ã¡ã³ã](../model_doc/layoutlmv2) ã確èªããŠãã ããã
```py
>>> updated_dataset = updated_dataset.remove_columns("words")
>>> updated_dataset = updated_dataset.remove_columns("bounding_boxes")
```
æåŸã«ãç»åãµã³ãã«ã確èªããªããšããŒã¿æ¢çŽ¢ã¯å®äºããŸããã
```py
>>> updated_dataset["train"][11]["image"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"/>
</div>
## Preprocess the data
ææžã®è³ªåã«çããã¿ã¹ã¯ã¯ãã«ãã¢ãŒãã« ã¿ã¹ã¯ã§ãããããåã¢ããªãã£ããã®å¥åãã¢ãã«ã®æåŸã«åŸã£ãŠååŠçãããããšã確èªããå¿èŠããããŸãããŸãã[`LayoutLMv2Processor`] ãããŒãããŸããããã¯ãç»åããŒã¿ãåŠçã§ããç»åããã»ããµãšãããã¹ã ããŒã¿ããšã³ã³ãŒãã§ããããŒã¯ãã€ã¶ãŒãåéšã§çµã¿åãããŠããŸãã
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
```
### Preprocessing document images
ãŸããããã»ããµããã® `image_processor` ãå©çšããŠãã¢ãã«ã®ããã¥ã¡ã³ãç»åãæºåããŸãããã
ããã©ã«ãã§ã¯ãç»åããã»ããµã¯ç»åã®ãµã€ãºã 224x224 ã«å€æŽããã«ã©ãŒ ãã£ãã«ã®é åºãæ£ããããšã確èªããŸãã
tesseract ã䜿çšã㊠OCR ãé©çšããåèªãšæ£èŠåãããå¢çããã¯ã¹ãååŸããŸãããã®ãã¥ãŒããªã¢ã«ã§ã¯ããããã®ããã©ã«ãã¯ãã¹ãŠããŸãã«å¿
èŠãªãã®ã§ãã
ããã©ã«ãã®ç»ååŠçãç»åã®ãããã«é©çšããOCR ã®çµæãè¿ãé¢æ°ãäœæããŸãã
```py
>>> image_processor = processor.image_processor
>>> def get_ocr_words_and_boxes(examples):
... images = [image.convert("RGB") for image in examples["image"]]
... encoded_inputs = image_processor(images)
... examples["image"] = encoded_inputs.pixel_values
... examples["words"] = encoded_inputs.words
... examples["boxes"] = encoded_inputs.boxes
... return examples
```
ãã®ååŠçãããŒã¿ã»ããåšäœã«é«éã«é©çšããã«ã¯ã[`~datasets.Dataset.map`] ã䜿çšããŸãã
```py
>>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2)
```
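Before moving on to the text, it can help to peek at what the OCR step produced for one example. This optional check is not part of the original guide, and the exact words depend on the document image.

```py
>>> # Optional sanity check: inspect the first few OCR words and boxes of one example
>>> dataset_with_ocr["train"][0]["words"][:5]
>>> dataset_with_ocr["train"][0]["boxes"][:5]
```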
### Preprocessing text data
ç»åã« OCR ãé©çšããããããŒã¿ã»ããã®ããã¹ãéšåããšã³ã³ãŒãããŠã¢ãã«çšã«æºåããå¿èŠããããŸããããã«ã¯ãåã®ã¹ãããã§ååŸããåèªãšããã¯ã¹ããããŒã¯ã³ã¬ãã«ã® `input_ids`ã`attention_mask`ã`token_type_ids`ã`bbox` ã«å€æããããšãå«ãŸããŸããããã¹ããååŠçããã«ã¯ãããã»ããµã® `tokenizer` ãå¿èŠã«ãªããŸãã
```py
>>> tokenizer = processor.tokenizer
```
åè¿°ã®ååŠçã«å ããŠãã¢ãã«ã®ã©ãã«ãè¿œå ããå¿èŠããããŸããð€ Transformers ã® `xxxForQuestionAnswering` ã¢ãã«ã®å Žåãã©ãã«ã¯ `start_positions` ãš `end_positions` ã§æ§æãããã©ã®ããŒã¯ã³ãåçã®éå§äœçœ®ã«ãããã©ã®ããŒã¯ã³ãåçã®æåŸã«ãããã瀺ããŸãããããå§ããŸãããããã倧ããªãªã¹ã (åèªãªã¹ã) åã®ãµããªã¹ã (åèªã«åå²ãããåç) ãæ€çŽ¢ã§ãããã«ããŒé¢æ°ãå®çŸ©ããŸãã
ãã®é¢æ°ã¯ã`words_list` ãš `answer_list` ãšãã 2 ã€ã®ãªã¹ããå¥åãšããŠåãåããŸãã次ã«ã`words_list` ãå埩åŠçããŠã`words_list` åã®çŸåšã®åèª (`words_list[i]`) ã `answer_list` ã®æåã®åèª (`answer_list[0]`) ãšçãããã©ãããããã³çŸåšã®åèªããå§ãŸã `answer_list` ãšåãé·ãã® `words_list` ã®ãµããªã¹ãã `answer_list` ãšçãããã©ããããã§ãã¯ããŸãããã®æ¡ä»¶ã true ã®å ŽåãäžèŽãèŠã€ãã£ãããšãæå³ããé¢æ°ã¯äžèŽãšãã®éå§ã€ã³ããã¯ã¹ (`idx`)ãããã³çµäºã€ã³ããã¯ã¹ (`idx + len(answer_list) - 1`) ãèšé²ããŸããè€æ°ã®äžèŽãèŠã€ãã£ãå Žåãé¢æ°ã¯æåã®ãã®ã®ã¿ãè¿ããŸããäžèŽãããã®ãèŠã€ãããªãå Žåãé¢æ°ã¯ (`None`ã0ã0) ãè¿ããŸãã
```py
>>> def subfinder(words_list, answer_list):
... matches = []
... start_indices = []
... end_indices = []
... for idx, i in enumerate(range(len(words_list))):
... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list:
... matches.append(answer_list)
... start_indices.append(idx)
... end_indices.append(idx + len(answer_list) - 1)
... if matches:
... return matches[0], start_indices[0], end_indices[0]
... else:
... return None, 0, 0
```
ãã®é¢æ°ãçãã®äœçœ®ãèŠã€ããæ¹æ³ã説æããããã«ãäŸã§äœ¿çšããŠã¿ãŸãããã
```py
>>> example = dataset_with_ocr["train"][1]
>>> words = [word.lower() for word in example["words"]]
>>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split())
>>> print("Question: ", example["question"])
>>> print("Words:", words)
>>> print("Answer: ", example["answer"])
>>> print("start_index", word_idx_start)
>>> print("end_index", word_idx_end)
Question: Who is in cc in this letter?
Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498']
Answer: T.F. Riehl
start_index 17
end_index 18
```
ãã ãããµã³ãã«ããšã³ã³ãŒãããããšã次ã®ããã«ãªããŸãã
```py
>>> encoding = tokenizer(example["question"], example["words"], example["boxes"])
>>> tokenizer.decode(encoding["input_ids"])
[CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ...
```
ãšã³ã³ãŒããããå¥ååã§çãã®äœçœ®ãèŠã€ããå¿èŠããããŸãã
* `token_type_ids` ã¯ãã©ã®ããŒã¯ã³ã質åã®äžéšã§ãããã©ã®ããŒã¯ã³ãææžã®åèªã®äžéšã§ãããã瀺ããŸãã
* `tokenizer.cls_token_id` ã¯ãå¥åã®åé ã«ããç¹å¥ãªããŒã¯ã³ãèŠã€ããã®ã«åœ¹ç«ã¡ãŸãã
* `word_ids` ã¯ãåã® `words` ã§èŠã€ãã£ãåçããšã³ã³ãŒããããå¥ååã®åãåçãšç§åãããšã³ã³ãŒããããå¥ååã®åçã®éå§/çµäºäœçœ®ãå€æããã®ã«åœ¹ç«ã¡ãŸãã
ããã念é ã«çœ®ããŠãããŒã¿ã»ããåã®ãµã³ãã«ã®ãããããšã³ã³ãŒãããé¢æ°ãäœæããŸãããã
```py
>>> def encode_dataset(examples, max_length=512):
... questions = examples["question"]
... words = examples["words"]
... boxes = examples["boxes"]
... answers = examples["answer"]
... # encode the batch of examples and initialize the start_positions and end_positions
... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True)
... start_positions = []
... end_positions = []
... # loop through the examples in the batch
... for i in range(len(questions)):
... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id)
... # find the position of the answer in example's words
... words_example = [word.lower() for word in words[i]]
... answer = answers[i]
... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split())
... if match:
... # if match is found, use `token_type_ids` to find where words start in the encoding
... token_type_ids = encoding["token_type_ids"][i]
... token_start_index = 0
... while token_type_ids[token_start_index] != 1:
... token_start_index += 1
... token_end_index = len(encoding["input_ids"][i]) - 1
... while token_type_ids[token_end_index] != 1:
... token_end_index -= 1
... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1]
... start_position = cls_index
... end_position = cls_index
... # loop over word_ids and increase `token_start_index` until it matches the answer position in words
... # once it matches, save the `token_start_index` as the `start_position` of the answer in the encoding
... for id in word_ids:
... if id == word_idx_start:
... start_position = token_start_index
... else:
... token_start_index += 1
... # similarly loop over `word_ids` starting from the end to find the `end_position` of the answer
... for id in word_ids[::-1]:
... if id == word_idx_end:
... end_position = token_end_index
... else:
... token_end_index -= 1
... start_positions.append(start_position)
... end_positions.append(end_position)
... else:
... start_positions.append(cls_index)
... end_positions.append(cls_index)
... encoding["image"] = examples["image"]
... encoding["start_positions"] = start_positions
... encoding["end_positions"] = end_positions
... return encoding
```
ãã®ååŠçé¢æ°ãå®æããã®ã§ãããŒã¿ã»ããåšäœããšã³ã³ãŒãã§ããŸãã
```py
>>> encoded_train_dataset = dataset_with_ocr["train"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names
... )
>>> encoded_test_dataset = dataset_with_ocr["test"].map(
... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names
... )
```
ãšã³ã³ãŒããããããŒã¿ã»ããã®ç¹åŸŽãã©ã®ãããªãã®ãã確èªããŠã¿ãŸãããã
```py
>>> encoded_train_dataset.features
{'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None),
'start_positions': Value(dtype='int64', id=None),
'end_positions': Value(dtype='int64', id=None)}
```
## Evaluation
ææžã®è³ªååçã®è©äŸ¡ã«ã¯ã倧éã®åŸåŠçãå¿èŠã§ãããã®ããããã®ã¬ã€ãã§ã¯è©äŸ¡ã¹ããããçç¥ããŠããŸãã[`Trainer`] ã¯ãã¬ãŒãã³ã°äžã«è©äŸ¡æ倱ãèšç®ãããããã¢ãã«ã®ããã©ãŒãã³ã¹ã«ã€ããŠãŸã£ããããããªãããã§ã¯ãããŸãããæœåºç質åå¿çã¯éåžžãF1/å®åšäžèŽã䜿çšããŠè©äŸ¡ãããŸããèªåã§å®è£ãããå Žåã¯ãã€ã³ã¹ãã¬ãŒã·ã§ã³ãåŸãããã«ããã°ãã§ã€ã¹ã³ãŒã¹ã® [質åå¿çã®ç« ](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) ã確èªããŠãã ããã
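If you still want a rough score without the full post-processing, a SQuAD-style normalized exact match can be sketched as below. Note that `predicted_answers` and `gold_answers` are hypothetical placeholder lists (decoded predictions and reference answers); they are not produced anywhere in this guide.

```py
>>> import re
>>> import string

>>> def normalize(text):
...     # Lowercase, drop punctuation and English articles, collapse whitespace (SQuAD-style)
...     text = text.lower()
...     text = "".join(ch for ch in text if ch not in set(string.punctuation))
...     text = re.sub(r"\b(a|an|the)\b", " ", text)
...     return " ".join(text.split())

>>> def exact_match(predicted_answers, gold_answers):
...     # Fraction of predictions that equal their reference after normalization
...     matches = [normalize(p) == normalize(g) for p, g in zip(predicted_answers, gold_answers)]
...     return sum(matches) / len(matches)
```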
## Train
ããã§ãšãïŒãã®ã¬ã€ãã®æãé£ããéšåãç¡äºã«ããã²ãŒãã§ããã®ã§ãç¬èªã®ã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããã
ãã¬ãŒãã³ã°ã«ã¯æ¬¡ã®æé ãå«ãŸããŸãã
* ååŠçãšåããã§ãã¯ãã€ã³ãã䜿çšããŠã[`AutoModelForDocumentQuestionAnswering`] ã§ã¢ãã«ãèªã¿èŸŒã¿ãŸãã
* [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸãã
* ãµã³ãã«ããããåŠçããé¢æ°ãå®çŸ©ããŸããããã§ã¯ [`DefaultDataCollator`] ãé©åã«æ©èœããŸãã
* ã¢ãã«ãããŒã¿ã»ãããããŒã¿ç§ååšãšãšãã«ãã¬ãŒãã³ã°åŒæ°ã [`Trainer`] ã«æž¡ããŸãã
* [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)
```
[`TrainingArguments`] ã§ã¯ã`output_dir` ã䜿çšããŠã¢ãã«ã®ä¿åå Žæãæå®ããå¿èŠã«å¿ããŠãã€ããŒãã©ã¡ãŒã¿ãŒãæ§æããŸããã¢ãã«ãã³ãã¥ããã£ãšå±æãããå Žåã¯ã`push_to_hub` ã `True` ã«èšå®ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿èŠããããŸã)ããã®å Žåã`output_dir` ã¯ã¢ãã«ã®ãã§ãã¯ãã€ã³ããããã·ã¥ããããªããžããªã®ååã«ããªããŸãã
```py
>>> from transformers import TrainingArguments
>>> # REPLACE THIS WITH YOUR REPO ID
>>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa"
>>> training_args = TrainingArguments(
... output_dir=repo_id,
... per_device_train_batch_size=4,
... num_train_epochs=20,
... save_steps=200,
... logging_steps=50,
... eval_strategy="steps",
... learning_rate=5e-5,
... save_total_limit=2,
... remove_unused_columns=False,
... push_to_hub=True,
... )
```
ãµã³ãã«ããŸãšããŠãããåŠçããããã®åçŽãªããŒã¿ç§ååšãå®çŸ©ããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
æåŸã«ããã¹ãŠããŸãšããŠã[`~Trainer.train`] ãåŒã³åºããŸãã
```py
>>> from transformers import Trainer
>>> trainer = Trainer(
... model=model,
... args=training_args,
... data_collator=data_collator,
... train_dataset=encoded_train_dataset,
... eval_dataset=encoded_test_dataset,
... tokenizer=processor,
... )
>>> trainer.train()
```
æçµã¢ãã«ã ð€ Hub ã«è¿œå ããã«ã¯ãã¢ãã« ã«ãŒããäœæãã`push_to_hub` ãåŒã³åºããŸãã
```py
>>> trainer.create_model_card()
>>> trainer.push_to_hub()
```
## Inference
LayoutLMv2 ã¢ãã«ã埮調æŽããð€ ããã«ã¢ããããŒãããã®ã§ããããæšè«ã«äœ¿çšã§ããŸããæšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`Pipeline`] ã§äœ¿çšããããšã§ãã
äŸãæããŠã¿ãŸããã:
```py
>>> example = dataset["test"][2]
>>> question = example["query"]["en"]
>>> image = example["image"]
>>> print(question)
>>> print(example["answers"])
'Who is âpresidingâ TRRF GENERAL SESSION (PART 1)?'
['TRRF Vice President', 'lee a. waller']
```
次ã«ãã¢ãã«ã䜿çšããŠææžè³ªåå¿ççšã®ãã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åããç»åãšè³ªåã®çµã¿åããããã€ãã©ã€ã³ã«æž¡ããŸãã
```py
>>> from transformers import pipeline
>>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> qa_pipeline(image, question)
[{'score': 0.9949808120727539,
'answer': 'Lee A. Waller',
'start': 55,
'end': 57}]
```
å¿èŠã«å¿ããŠããã€ãã©ã€ã³ã®çµæãæåã§è€è£œããããšãã§ããŸãã
1. ç»åãšè³ªåãååŸããã¢ãã«ã®ããã»ããµã䜿çšããŠã¢ãã«çšã«æºåããŸãã
2. ååŠçã®çµæãã¢ãã«ã«æž¡ããŸãã
3. ã¢ãã«ã¯ `start_logits` ãš `end_logits` ãè¿ããŸãããããã¯ãã©ã®ããŒã¯ã³ãåçã®åé ã«ãããã©ã®ããŒã¯ã³ãåçã®æåŸã«ãããã瀺ããŸããã©ã¡ãã (batch_size, sequence_length) ã®åœ¢ç¶ãæã¡ãŸãã
4. `start_logits` ãš `end_logits` ã®äž¡æ¹ã®æåŸã®æ¬¡åã§ argmax ãååŸããäºæž¬ããã `start_idx` ãš `end_idx` ãååŸããŸãã
5. ããŒã¯ãã€ã¶ãŒã䜿çšããŠåçããã³ãŒãããŸãã
```py
>>> import torch
>>> from transformers import AutoProcessor
>>> from transformers import AutoModelForDocumentQuestionAnswering
>>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa")
>>> with torch.no_grad():
... encoding = processor(image.convert("RGB"), question, return_tensors="pt")
... outputs = model(**encoding)
... start_logits = outputs.start_logits
... end_logits = outputs.end_logits
... predicted_start_idx = start_logits.argmax(-1).item()
... predicted_end_idx = end_logits.argmax(-1).item()
>>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1])
'lee a. waller'
```
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Semantic segmentation
[[open-in-colab]]
<Youtube id="dKE8SIt9C-w"/>
ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã§ã¯ãç»åã®åã
ã®ãã¯ã»ã«ã«ã©ãã«ãŸãã¯ã¯ã©ã¹ãå²ãåœãŠãŸããã»ã°ã¡ã³ããŒã·ã§ã³ã«ã¯ããã€ãã®ã¿ã€ãããããŸãããã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã®å Žåãåããªããžã§ã¯ãã®äžæã®ã€ã³ã¹ã¿ã³ã¹éã®åºå¥ã¯è¡ãããŸãããäž¡æ¹ã®ãªããžã§ã¯ãã«åãã©ãã«ãä»ããããŸã (ããšãã°ã`car-1`ãš`car-2`ã®ä»£ããã«`car`)ãã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã®äžè¬çãªçŸå®äžçã®ã¢ããªã±ãŒã·ã§ã³ã«ã¯ãæ©è¡è
ãéèŠãªäº€éæ
å ±ãèå¥ããããã®èªåé転è»ã®ãã¬ãŒãã³ã°ãå»çç»åå
ã®çŽ°èãšç°åžžã®èå¥ãè¡æç»åããã®ç°å¢å€åã®ç£èŠãªã©ãå«ãŸããŸãã
ãã®ã¬ã€ãã§ã¯ã次ã®æ¹æ³ã説æããŸãã
1. [SceneParse150](https://huggingface.co/datasets/scene_parse_150) ããŒã¿ã»ããã® [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) ã埮調æŽããŸãã
2. 埮調æŽããã¢ãã«ãæšè«ã«äœ¿çšããŸãã
<Tip>
ãã®ã¿ã¹ã¯ãšäºææ§ã®ãããã¹ãŠã®ã¢ãŒããã¯ãã£ãšãã§ãã¯ãã€ã³ãã確èªããã«ã¯ã[ã¿ã¹ã¯ããŒãž](https://huggingface.co/tasks/image-segmentation) ã確èªããããšããå§ãããŸãã
</Tip>
å§ããåã«ãå¿èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q datasets transformers evaluate
```
ã¢ãã«ãã¢ããããŒãããŠã³ãã¥ããã£ãšå±æã§ããããã«ãHugging Face ã¢ã«ãŠã³ãã«ãã°ã€ã³ããããšããå§ãããŸããããã³ããã衚瀺ãããããããŒã¯ã³ãå¥åããŠãã°ã€ã³ããŸãã
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SceneParse150 dataset
ãŸããSceneParse150 ããŒã¿ã»ããã®å°ãããµãã»ããã ð€ ããŒã¿ã»ãã ã©ã€ãã©ãªããèªã¿èŸŒã¿ãŸããããã«ãããå®åšãªããŒã¿ã»ããã®ãã¬ãŒãã³ã°ã«ããã«æéãè²»ããåã«ãå®éšããŠãã¹ãŠãæ©èœããããšã確èªããæ©äŒãåŸãããŸãã
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```
[`~datasets.Dataset.train_test_split`] ã¡ãœããã䜿çšããŠãããŒã¿ã»ããã® `train` åå²ããã¬ã€ã³ ã»ãããšãã¹ã ã»ããã«åå²ããŸãã
```py
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```
次ã«ãäŸãèŠãŠã¿ãŸãããã
```py
>>> train_ds[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>,
'scene_category': 368}
```
- `image`: ã·ãŒã³ã® PIL ã€ã¡ãŒãžã
- `annotation`: ã»ã°ã¡ã³ããŒã·ã§ã³ ãããã® PIL ã€ã¡ãŒãžãã¢ãã«ã®ã¿ãŒã²ããã§ããããŸãã
- `scene_category`: "kitchen"ã"office"ãªã©ã®ç»åã·ãŒã³ã説æããã«ããŽãª IDããã®ã¬ã€ãã§ã¯ã`image`ãš`annotation`ã®ã¿ãå¿èŠã«ãªããŸããã©ã¡ãã PIL ã€ã¡ãŒãžã§ãã
ãŸããã©ãã« ID ãã©ãã« ã¯ã©ã¹ã«ãããããèŸæžãäœæããããšãã§ããŸããããã¯ãåŸã§ã¢ãã«ãèšå®ãããšãã«åœ¹ç«ã¡ãŸãããããããããã³ã°ãããŠã³ããŒããã`id2label` ããã³ `label2id` ãã£ã¯ã·ã§ããªãäœæããŸãã
```py
>>> import json
>>> from huggingface_hub import cached_download, hf_hub_url
>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```
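As a quick sanity check, the mapping should cover the 150 ADE20K classes. Assuming the standard `ade20k-id2label.json` file, class `0` is `'wall'`:

```py
>>> num_labels, id2label[0]
(150, 'wall')
```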
## Preprocess
次ã®ã¹ãããã§ã¯ãSegFormer ç»åããã»ããµãããŒãããŠãã¢ãã«ã®ç»åãšæ³šéãæºåããŸãããã®ããŒã¿ã»ããã®ãããªäžéšã®ããŒã¿ã»ããã¯ãããã¯ã°ã©ãŠã³ã ã¯ã©ã¹ãšããŠãŒãã€ã³ããã¯ã¹ã䜿çšããŸãããã ããå®éã«ã¯èæ¯ã¯ã©ã¹ã¯ 150 åã®ã¯ã©ã¹ã«å«ãŸããŠããªãããã`reduce_labels=True`ãèšå®ããŠãã¹ãŠã®ã©ãã«ãã 1 ã€ãåŒãå¿èŠããããŸãããŒãã€ã³ããã¯ã¹ã¯ `255` ã«çœ®ãæãããããããSegFormer ã®æ倱é¢æ°ã«ãã£ãŠç¡èŠãããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> checkpoint = "nvidia/mit-b0"
>>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
```
<frameworkcontent>
<pt>
ã¢ãã«ãéåŠç¿ã«å¯ŸããŠããåç¢ã«ããããã«ãç»åããŒã¿ã»ããã«ããã€ãã®ããŒã¿æ¡åŒµãé©çšããã®ãäžè¬çã§ãããã®ã¬ã€ãã§ã¯ã[torchvision](https://pytorch.org/vision/stable/index.html) ã® [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html) ã䜿çšããŠç»åã®è²ã®ããããã£ãã©ã³ãã ã«å€æŽããŸãããä»»æã®ç»åã©ã€ãã©ãªã䜿çšããããšãã§ããŸãã
```py
>>> from torchvision.transforms import ColorJitter
>>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
```
次ã«ãã¢ãã«ã®ç»åãšæ³šéãæºåããããã® 2 ã€ã®ååŠçé¢æ°ãäœæããŸãããããã®é¢æ°ã¯ãç»åã`pixel_values`ã«å€æãã泚éã`labels`ã«å€æããŸãããã¬ãŒãã³ã° ã»ããã®å Žåãç»åãç»åããã»ããµã«æäŸããåã« `jitter` ãé©çšãããŸãããã¹ã ã»ããã®å Žåããã¹ãäžã«ããŒã¿æ¡åŒµãé©çšãããªããããç»åããã»ããµã¯`images`ãåãåã£ãŠæ£èŠåãã`ã©ãã«`ã®ã¿ãåãåããŸãã
```py
>>> def train_transforms(example_batch):
... images = [jitter(x) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [x for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
ããŒã¿ã»ããåšäœã«`jitter`ãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.set_transform`] é¢æ°ã䜿çšããŸããå€æã¯ãªã³ã¶ãã©ã€ã§é©çšããããããé«éã§æ¶è²»ãããã£ã¹ã¯å®¹éãå°ãªããªããŸãã
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ã¢ãã«ãéåŠç¿ã«å¯ŸããŠããåç¢ã«ããããã«ãç»åããŒã¿ã»ããã«ããã€ãã®ããŒã¿æ¡åŒµãé©çšããã®ãäžè¬çã§ãã
ãã®ã¬ã€ãã§ã¯ã[`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image) ã䜿çšããŠç»åã®è²ã®ããããã£ãã©ã³ãã ã«å€æŽããŸãããä»»æã®ç»åã©ã€ãã©ãªã䜿çšããããšãã§ããŸãã
2 ã€ã®å¥ãã®å€æé¢æ°ãå®çŸ©ããŸãã
- ç»åæ¡åŒµãå«ããã¬ãŒãã³ã° ããŒã¿å€æ
- ð€ Transformers ã®ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¢ãã«ã¯ãã£ãã«åªåã®ã¬ã€ã¢ãŠããæ³å®ããŠãããããç»åã転眮ããã ãã®æ€èšŒããŒã¿å€æ
```py
>>> import tensorflow as tf
>>> def aug_transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.image.random_brightness(image, 0.25)
... image = tf.image.random_contrast(image, 0.5, 2.0)
... image = tf.image.random_saturation(image, 0.75, 1.25)
... image = tf.image.random_hue(image, 0.1)
... image = tf.transpose(image, (2, 0, 1))
... return image
>>> def transforms(image):
... image = tf.keras.utils.img_to_array(image)
... image = tf.transpose(image, (2, 0, 1))
... return image
```
次ã«ãã¢ãã«ã®ç»åãšæ³šéã®ããããæºåãã 2 ã€ã®ååŠçé¢æ°ãäœæããŸãããããã®é¢æ°ã¯ãç»åå€æãé©çšãã以åã«ããŒãããã `image_processor` ã䜿çšããŠç»åã `pixel_values` ã«ã泚éã `labels` ã«å€æããŸãã`ImageProcessor` ã¯ãç»åã®ãµã€ãºå€æŽãšæ£èŠåãåŠçããŸãã
```py
>>> def train_transforms(example_batch):
... images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
>>> def val_transforms(example_batch):
... images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
... labels = [x for x in example_batch["annotation"]]
... inputs = image_processor(images, labels)
... return inputs
```
ããŒã¿ã»ããåšäœã«ååŠçå€æãé©çšããã«ã¯ãð€ Datasets [`~datasets.Dataset.set_transform`] é¢æ°ã䜿çšããŸããå€æã¯ãªã³ã¶ãã©ã€ã§é©çšããããããé«éã§æ¶è²»ãããã£ã¹ã¯å®¹éãå°ãªããªããŸãã
```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```
</tf>
</frameworkcontent>
## Evaluate
ãã¬ãŒãã³ã°äžã«ã¡ããªã¯ã¹ãå«ãããšãå€ãã®å Žåãã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ããã®ã«åœ¹ç«ã¡ãŸãã ð€ [Evaluate](https://huggingface.co/docs/evaluate/index) ã©ã€ãã©ãªã䜿çšããŠãè©äŸ¡ã¡ãœããããã°ããããŒãã§ããŸãããã®ã¿ã¹ã¯ã§ã¯ã[Mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) ã¡ããªãã¯ãããŒãããŸã (ð€ Evaluate [ã¯ã€ã㯠ãã¢ãŒ](https://huggingface.co/docs/evaluate/a_quick_tour) ãåç§ããŠãã¡ããªã¯ã¹ãããŒãããŠèšç®ããæ¹æ³ã®è©³çŽ°ã確èªããŠãã ãã)ã
```py
>>> import evaluate
>>> metric = evaluate.load("mean_iou")
```
次ã«ãã¡ããªã¯ã¹ã [`~evaluate.EvaluationModule.compute`] ããé¢æ°ãäœæããŸããäºæž¬ã次ã®ããã«å€æããå¿èŠããããŸããæåã«ããžãããäœæãã次ã«ã[`~evaluate.EvaluationModule.compute`] ãåŒã³åºãåã«ã©ãã«ã®ãµã€ãºã«äžèŽããããã«å圢æããŸãã
<frameworkcontent>
<pt>
```py
>>> import numpy as np
>>> import torch
>>> from torch import nn
>>> def compute_metrics(eval_pred):
... with torch.no_grad():
... logits, labels = eval_pred
... logits_tensor = torch.from_numpy(logits)
... logits_tensor = nn.functional.interpolate(
... logits_tensor,
... size=labels.shape[-2:],
... mode="bilinear",
... align_corners=False,
... ).argmax(dim=1)
... pred_labels = logits_tensor.detach().cpu().numpy()
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=255,
... reduce_labels=False,
... )
... for key, value in metrics.items():
... if type(value) is np.ndarray:
... metrics[key] = value.tolist()
... return metrics
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
```py
>>> def compute_metrics(eval_pred):
... logits, labels = eval_pred
... logits = tf.transpose(logits, perm=[0, 2, 3, 1])
... logits_resized = tf.image.resize(
... logits,
... size=tf.shape(labels)[1:],
... method="bilinear",
... )
... pred_labels = tf.argmax(logits_resized, axis=-1)
... metrics = metric.compute(
... predictions=pred_labels,
... references=labels,
... num_labels=num_labels,
... ignore_index=-1,
... reduce_labels=image_processor.do_reduce_labels,
... )
... per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
... per_category_iou = metrics.pop("per_category_iou").tolist()
... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
... return {"val_" + k: v for k, v in metrics.items()}
```
</tf>
</frameworkcontent>
ããã§`compute_metrics`é¢æ°ã®æºåãæŽããŸããããã¬ãŒãã³ã°ãã»ããã¢ãããããšãã«ãã®é¢æ°ã«æ»ããŸãã
## Train
<frameworkcontent>
<pt>
<Tip>
[`Trainer`] ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ£ããŠããªãå Žåã¯ã[ãã](../training#finetune-with-trainer) ã®åºæ¬çãªãã¥ãŒããªã¢ã«ãã芧ãã ããã
</Tip>
ããã§ã¢ãã«ã®ãã¬ãŒãã³ã°ãéå§ããæºåãæŽããŸããã [`AutoModelForSemanticSegmentation`] ã䜿çšã㊠SegFormer ãããŒãããã©ãã« ID ãšã©ãã« ã¯ã©ã¹éã®ãããã³ã°ãã¢ãã«ã«æž¡ããŸãã
```py
>>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
>>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
```
ãã®æç¹ã§æ®ã£ãŠããæé ã¯æ¬¡ã® 3 ã€ã ãã§ãã
1. [`TrainingArguments`] ã§ãã¬ãŒãã³ã° ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ããŸããæªäœ¿çšã®åãåé€ããªãããšãéèŠã§ããæªäœ¿çšã®åãåé€ãããš `image` åãåé€ãããŠããŸãã`image` åããªããšã`pixel_values` ãäœæã§ããŸããããã®åäœãé²ãã«ã¯ã`remove_unused_columns=False`ãèšå®ããŠãã ãããä»ã«å¿èŠãªãã©ã¡ãŒã¿ã¯ãã¢ãã«ã®ä¿åå Žæãæå®ãã `output_dir` ã ãã§ãã`push_to_hub=True`ãèšå®ããŠããã®ã¢ãã«ãããã«ããã·ã¥ããŸã (ã¢ãã«ãã¢ããããŒãããã«ã¯ãHugging Face ã«ãµã€ã³ã€ã³ããå¿èŠããããŸã)ãåãšããã¯ã®çµäºæã«ã[`Trainer`] 㯠IoU ã¡ããªãã¯ãè©äŸ¡ãããã¬ãŒãã³ã° ãã§ãã¯ãã€ã³ããä¿åããŸãã
2. ãã¬ãŒãã³ã°åŒæ°ããã¢ãã«ãããŒã¿ã»ãããããŒã¯ãã€ã¶ãŒãããŒã¿ç§ååšãããã³ `compute_metrics` é¢æ°ãšãšãã« [`Trainer`] ã«æž¡ããŸãã
3. [`~Trainer.train`] ãåŒã³åºããŠã¢ãã«ã埮調æŽããŸãã
```py
>>> training_args = TrainingArguments(
... output_dir="segformer-b0-scene-parse-150",
... learning_rate=6e-5,
... num_train_epochs=50,
... per_device_train_batch_size=2,
... per_device_eval_batch_size=2,
... save_total_limit=3,
... eval_strategy="steps",
... save_strategy="steps",
... save_steps=20,
... eval_steps=20,
... logging_steps=1,
... eval_accumulation_steps=5,
... remove_unused_columns=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=train_ds,
... eval_dataset=test_ds,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
ãã¬ãŒãã³ã°ãå®äºãããã [`~transformers.Trainer.push_to_hub`] ã¡ãœããã䜿çšããŠã¢ãã«ãããã«å±æãã誰ããã¢ãã«ã䜿çšã§ããããã«ããŸãã
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
<Tip>
Keras ã䜿çšããã¢ãã«ã®åŸ®èª¿æŽã«æ
£ããŠããªãå Žåã¯ããŸã [åºæ¬ãã¥ãŒããªã¢ã«](./training#train-a-tensorflow-model-with-keras) ã確èªããŠãã ããã
</Tip>
TensorFlow ã§ã¢ãã«ã埮調æŽããã«ã¯ã次ã®æé ã«åŸããŸãã
1. ãã¬ãŒãã³ã°ã®ãã€ããŒãã©ã¡ãŒã¿ãå®çŸ©ãããªããã£ãã€ã¶ãŒãšåŠç¿çã¹ã±ãžã¥ãŒã«ãèšå®ããŸãã
2. äºåãã¬ãŒãã³ã°ãããã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããŸãã
3. ð€ ããŒã¿ã»ããã `tf.data.Dataset` ã«å€æããŸãã
4. ã¢ãã«ãã³ã³ãã€ã«ããŸãã
5. ã³ãŒã«ããã¯ãè¿œå ããŠã¡ããªã¯ã¹ãèšç®ããã¢ãã«ã ð€ Hub ã«ã¢ããããŒãããŸã
6. `fit()` ã¡ãœããã䜿çšããŠãã¬ãŒãã³ã°ãå®è¡ããŸãã
ãŸãããã€ããŒãã©ã¡ãŒã¿ãŒããªããã£ãã€ã¶ãŒãåŠç¿çã¹ã±ãžã¥ãŒã«ãå®çŸ©ããŸãã
```py
>>> from transformers import create_optimizer
>>> batch_size = 2
>>> num_epochs = 50
>>> num_train_steps = len(train_ds) * num_epochs
>>> learning_rate = 6e-5
>>> weight_decay_rate = 0.01
>>> optimizer, lr_schedule = create_optimizer(
... init_lr=learning_rate,
... num_train_steps=num_train_steps,
... weight_decay_rate=weight_decay_rate,
... num_warmup_steps=0,
... )
```
次ã«ãã©ãã« ãããã³ã°ãšãšãã« [`TFAutoModelForSemanticSegmentation`] ã䜿çšã㊠SegFormer ãããŒããããªããã£ãã€ã¶ã䜿çšããŠã³ã³ãã€ã«ããŸããTransformers ã¢ãã«ã«ã¯ãã¹ãŠããã©ã«ãã®ã¿ã¹ã¯é¢é£ã®æ倱é¢æ°ããããããå¿èŠãªå Žåãé€ããæ倱é¢æ°ãæå®ããå¿èŠã¯ãªãããšã«æ³šæããŠãã ããã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
... checkpoint,
... id2label=id2label,
... label2id=label2id,
... )
>>> model.compile(optimizer=optimizer) # No loss argument!
```
[`~datasets.Dataset.to_tf_dataset`] ãš [`DefaultDataCollator`] ã䜿çšããŠãããŒã¿ã»ããã `tf.data.Dataset` 圢åŒã«å€æããŸãã
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
>>> tf_train_dataset = train_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_eval_dataset = test_ds.to_tf_dataset(
... columns=["pixel_values", "label"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
äºæž¬ãã粟床ãèšç®ããã¢ãã«ã ð€ ããã«ããã·ã¥ããã«ã¯ã[Keras callbacks](../main_classes/keras_callbacks) ã䜿çšããŸãã
`compute_metrics` é¢æ°ã [`KerasMetricCallback`] ã«æž¡ããŸãã
ãã㊠[`PushToHubCallback`] ã䜿çšããŠã¢ãã«ãã¢ããããŒãããŸãã
```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
>>> metric_callback = KerasMetricCallback(
... metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
... )
>>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
>>> callbacks = [metric_callback, push_to_hub_callback]
```
ã€ãã«ãã¢ãã«ããã¬ãŒãã³ã°ããæºåãæŽããŸããããã¬ãŒãã³ã°ããã³æ€èšŒã®ããŒã¿ã»ããããšããã¯æ°ãã³ãŒã«ããã¯ãæå®ã㊠`fit()` ãåŒã³åºããã¢ãã«ã埮調æŽããŸãã
```py
>>> model.fit(
... tf_train_dataset,
... validation_data=tf_eval_dataset,
... callbacks=callbacks,
... epochs=num_epochs,
... )
```
ããã§ãšãïŒã¢ãã«ã埮調æŽããð€ Hub ã§å±æããŸãããããã§æšè«ã«äœ¿çšã§ããããã«ãªããŸããã
</tf>
</frameworkcontent>
## Inference
ã¢ãã«ã埮調æŽããã®ã§ããããæšè«ã«äœ¿çšã§ããããã«ãªããŸããã
æšè«ã®ããã«ç»åãããŒãããŸãã
```py
>>> image = ds["test"][0]["image"]
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"/>
</div>
<frameworkcontent>
<pt>
æšè«çšã«åŸ®èª¿æŽãããã¢ãã«ãè©Šãæãç°¡åãªæ¹æ³ã¯ãããã [`pipeline`] ã§äœ¿çšããããšã§ããã¢ãã«ã䜿çšããŠç»åã»ã°ã¡ã³ããŒã·ã§ã³çšã® `pipeline` ãã€ã³ã¹ã¿ã³ã¹åãããã®ãã€ãã©ã€ã³ã«ç»åãæž¡ããŸãã
```py
>>> from transformers import pipeline
>>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
>>> segmenter(image)
[{'score': None,
'label': 'wall',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>},
{'score': None,
'label': 'sky',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>},
{'score': None,
'label': 'floor',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>},
{'score': None,
'label': 'ceiling',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>},
{'score': None,
'label': 'bed ',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>},
{'score': None,
'label': 'windowpane',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>},
{'score': None,
'label': 'cabinet',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>},
{'score': None,
'label': 'chair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>},
{'score': None,
'label': 'armchair',
'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}]
```
å¿èŠã«å¿ããŠã`pipeline`ã®çµæãæåã§è€è£œããããšãã§ããŸããç»åãç»åããã»ããµã§åŠçãã`pixel_values` ã GPU ã«é眮ããŸãã
```py
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
>>> encoding = image_processor(image, return_tensors="pt")
>>> pixel_values = encoding.pixel_values.to(device)
```
å¥åãã¢ãã«ã«æž¡ããšã`logits` ãè¿ãããŸãã
```py
>>> outputs = model(pixel_values=pixel_values)
>>> logits = outputs.logits.cpu()
```
次ã«ãããžãããåã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããŸãã
```py
>>> upsampled_logits = nn.functional.interpolate(
... logits,
... size=image.size[::-1],
... mode="bilinear",
... align_corners=False,
... )
>>> pred_seg = upsampled_logits.argmax(dim=1)[0]
```
</pt>
</frameworkcontent>
<frameworkcontent>
<tf>
ç»åããã»ããµãããŒãããŠç»åãååŠçããå¥åã TensorFlow ãã³ãœã«ãšããŠè¿ããŸãã
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
>>> inputs = image_processor(image, return_tensors="tf")
```
å¥åãã¢ãã«ã«æž¡ããšã`logits` ãè¿ãããŸãã
```py
>>> from transformers import TFAutoModelForSemanticSegmentation
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
>>> logits = model(**inputs).logits
```
次ã«ãããžãããåã®ç»åãµã€ãºã«åã¹ã±ãŒã«ããã¯ã©ã¹æ¬¡åã« argmax ãé©çšããŸãã
```py
>>> logits = tf.transpose(logits, [0, 2, 3, 1])
>>> upsampled_logits = tf.image.resize(
... logits,
... # We reverse the shape of `image` because `image.size` returns width and height.
... image.size[::-1],
... )
>>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
```
</tf>
</frameworkcontent>
çµæãèŠèŠåããã«ã¯ã[ããŒã¿ã»ãã ã«ã©ãŒ ãã¬ãã](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) ãããããããããããã `ade_palette()` ãšããŠããŒãããŸããã¯ã©ã¹ã RGB å€ã«å€æããŸãã次ã«ãç»åãšäºæž¬ãããã»ã°ã¡ã³ããŒã·ã§ã³ ããããçµã¿åãããŠããããã§ããŸãã
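If you would rather not copy the full palette, the following stand-in is one option (an assumption, not part of the original guide): it assigns one deterministic pseudo-random RGB color per class, so the colors will not match the official ADE20K palette.

```py
>>> import numpy as np

>>> def ade_palette():
...     # Hypothetical fallback palette: one seeded pseudo-random RGB color per class
...     rng = np.random.default_rng(seed=0)
...     return rng.integers(0, 256, size=(num_labels, 3), dtype=np.uint8)
```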
```py
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> palette = np.array(ade_palette())
>>> for label, color in enumerate(palette):
... color_seg[pred_seg == label, :] = color
>>> color_seg = color_seg[..., ::-1] # convert to BGR
>>> img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map
>>> img = img.astype(np.uint8)
>>> plt.figure(figsize=(15, 10))
>>> plt.imshow(img)
>>> plt.show()
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/>
</div>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Zero-shot object detection
[[open-in-colab]]
åŸæ¥ã[ãªããžã§ã¯ãæ€åº](object_detection) ã«äœ¿çšãããã¢ãã«ã«ã¯ããã¬ãŒãã³ã°çšã®ã©ãã«ä»ãç»åããŒã¿ã»ãããå¿èŠã§ããããã¬ãŒãã³ã° ããŒã¿ã«å«ãŸããã¯ã©ã¹ã®ã»ããã®æ€åºã«éå®ãããŸããã
ãŒãã·ã§ãããªããžã§ã¯ãæ€åºã¯ãå¥ã®ã¢ãããŒãã䜿çšãã [OWL-ViT](../model_doc/owlvit) ã¢ãã«ã«ãã£ãŠãµããŒããããŠããŸããOWL-ViT ã¯ãªãŒãã³èªåœãªããžã§ã¯ãæ€åºåšã§ããããã¯ãã©ãã«ä»ãããŒã¿ã»ããã§ã¢ãã«ã埮調æŽããããšãªãããªãŒããã¹ãã¯ãšãªã«åºã¥ããŠç»ååã®ãªããžã§ã¯ããæ€åºã§ããããšãæå³ããŸãã
OWL-ViTã¯ããã«ãã¢ãŒãã«è¡šçŸãå©çšããŠãªãŒãã³èªåœã®æ€åºãå®è¡ããŸãã[CLIP](../model_doc/clip) ãšã軜éã®ãªããžã§ã¯ãåé¡ããã³ããŒã«ãªãŒãŒã·ã§ã³ ããããçµã¿åãããŸãããªãŒãã³èªåœã®æ€åºã¯ãCLIP ã®ããã¹ã ãšã³ã³ãŒããŒã§ããªãŒããã¹ã ã¯ãšãªãåã蟌ã¿ããããããªããžã§ã¯ãåé¡ããã³ããŒã«ãªãŒãŒã·ã§ã³ ãããžã®å¥åãšããŠäœ¿çšããããšã«ãã£ãŠå®çŸãããŸãããããã®ãããã¯ãç»åãšããã«å¯Ÿå¿ããããã¹ãã®èª¬æãé¢é£ä»ããViT ã¯ç»åããããå¥åãšããŠåŠçããŸããOWL-ViT ã®äœå®¶ãã¡ã¯ããŸã CLIP ããŒããããã¬ãŒãã³ã°ãã次ã«äºéšãããã³ã°æ倱ã䜿çšããŠãæšæºã®ç©äœæ€åºããŒã¿ã»ããã§ OWL-ViT ããšã³ãããŒãšã³ãã§åŸ®èª¿æŽããŸããã
ãã®ã¢ãããŒãã䜿çšãããšãã¢ãã«ã¯ã©ãã«ä»ãããŒã¿ã»ããã§äºåã«ãã¬ãŒãã³ã°ããªããŠããããã¹ãã®èª¬æã«åºã¥ããŠãªããžã§ã¯ããæ€åºã§ããŸãã
ãã®ã¬ã€ãã§ã¯ãOWL-ViT ã®æ¬¡ã®äœ¿çšæ¹æ³ãåŠç¿ããŸãã
- ããã¹ãããã³ããã«åºã¥ããŠãªããžã§ã¯ããæ€åºããæ¹æ³
- ããã ãªããžã§ã¯ãæ€åºã®æ¹æ³
- ç»åèªå°ç©äœæ€åºã®æ¹æ³
å§ããåã«ãå¿èŠãªã©ã€ãã©ãªããã¹ãŠã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```bash
pip install -q transformers
```
## Zero-shot object detection pipeline
OWL-ViT ã«ããæšè«ãè©Šãæãç°¡åãªæ¹æ³ã¯ã[`pipeline`] ã§äœ¿çšããããšã§ãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?other=owlvit) ããããŒãã·ã§ãã ãªããžã§ã¯ãæ€åºçšã®ãã€ãã©ã€ã³ãã€ã³ã¹ã¿ã³ã¹åããŸã:
```python
>>> from transformers import pipeline
>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
次ã«ãç©äœãæ€åºãããç»åãéžæããŸããããã§ã¯ã[NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images ããŒã¿ã»ããã®äžéšã§ãããå®å®é£è¡å£«ã¢ã€ãªãŒã³ã»ã³ãªã³ãºã®ç»åã䜿çšããŸãã
```py
>>> import skimage
>>> import numpy as np
>>> from PIL import Image
>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/>
</div>
æ€çŽ¢ããç»åãšåè£ãªããžã§ã¯ãã®ã©ãã«ããã€ãã©ã€ã³ã«æž¡ããŸãã
ããã§ã¯ç»åãçŽæ¥æž¡ããŸããä»ã®é©åãªãªãã·ã§ã³ã«ã¯ãç»åãžã®ããŒã«ã« ãã¹ãŸãã¯ç»å URL ãå«ãŸããŸãããŸããç»åãã¯ãšãªãããã¹ãŠã®ã¢ã€ãã ã®ããã¹ã説æãæž¡ããŸãã
```py
>>> predictions = detector(
... image,
... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408,
'label': 'human face',
'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
{'score': 0.28099656105041504,
'label': 'nasa badge',
'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
{'score': 0.2110239565372467,
'label': 'rocket',
'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
{'score': 0.13790413737297058,
'label': 'star-spangled banner',
'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
{'score': 0.11950037628412247,
'label': 'nasa badge',
'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
{'score': 0.10649408400058746,
'label': 'rocket',
'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
äºæž¬ãèŠèŠåããŠã¿ãŸãããã
```py
>>> from PIL import ImageDraw
>>> draw = ImageDraw.Draw(image)
>>> for prediction in predictions:
... box = prediction["box"]
... label = prediction["label"]
... score = prediction["score"]
... xmin, ymin, xmax, ymax = box.values()
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/>
</div>
## Text-prompted zero-shot object detection by hand
ãŒãã·ã§ããç©äœæ€åºãã€ãã©ã€ã³ã®äœ¿çšæ¹æ³ã確èªããã®ã§ãåãçµæãæåã§åçŸããŠã¿ãŸãããã
ãŸãã[Hugging Face Hub ã®ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?other=owlvit) ããã¢ãã«ãšé¢é£ããã»ããµãããŒãããŸãã
ããã§ã¯ãåãšåããã§ãã¯ãã€ã³ãã䜿çšããŸãã
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
æ°åãå€ããŠãå¥ã®ç»åã䜿çšããŠã¿ãŸãããã
```py
>>> import requests
>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/>
</div>
ããã»ããµã䜿çšããŠã¢ãã«ã®å¥åãæºåããŸããããã»ããµã¯ããµã€ãºå€æŽãšæ£èŠåã«ãã£ãŠã¢ãã«çšã®ç»åãæºåããç»åããã»ããµãšãããã¹ãå¥åãåŠçãã [`CLIPTokenizer`] ãçµã¿åãããŠããŸãã
```py
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
å¥åãã¢ãã«ã«æž¡ããåŸåŠçããŠçµæãèŠèŠåããŸããç»åããã»ããµã¯ã¢ãã«ã«ç»åããã£ãŒãããåã«ç»åã®ãµã€ãºãå€æŽããŠããããã[`~OwlViTImageProcessor.post_process_object_detection`] ã¡ãœããã䜿çšããŠãäºæž¬ãããå¢çããã¯ã¹ãåã®ç»åãåºæºãšããæ£ãã座æšãæã€ããã«ããå¿èŠããããŸãã
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = torch.tensor([im.size[::-1]])
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(im)
>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
## Batch processing
è€æ°ã®ç»åã»ãããšããã¹ã ã¯ãšãªãæž¡ããŠãè€æ°ã®ç»ååã®ç°ãªã (ãŸãã¯åã) ãªããžã§ã¯ããæ€çŽ¢ã§ããŸããå®å®é£è¡å£«ã®ç»åãšããŒãã®ç»åãçµã¿åãããŠã¿ãŸãããããããåŠçã®å Žåãããã¹ã ã¯ãšãªããã¹ãããããªã¹ããšããŠããã»ããµã«æž¡ããç»åã PIL ã€ã¡ãŒãžãPyTorch ãã³ãœã«ããŸã㯠NumPy éåã®ãªã¹ããšããŠæž¡ãå¿èŠããããŸãã
```py
>>> images = [image, im]
>>> text_queries = [
... ["human face", "rocket", "nasa badge", "star-spangled banner"],
... ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```
以åã¯åŸåŠçã®ããã«åäžã®ç»åã®ãµã€ãºããã³ãœã«ãšããŠæž¡ããŸããããã¿ãã«ããŸãã¯è€æ°ã®ç»åã®å Žåã¯ã¿ãã«ã®ãªã¹ããæž¡ãããšãã§ããŸãã2 ã€ã®äŸã®äºæž¬ãäœæãã2 çªç®ã®äŸ (`image_idx = 1`) ãèŠèŠåããŸãããã
```py
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = [x.size[::-1] for x in images]
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])
>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")
>>> images[image_idx]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
## Image-guided object detection
ããã¹ãã¯ãšãªã«ãããŒãã·ã§ãããªããžã§ã¯ãæ€åºã«å ããŠãOWL-ViTã¯ç»åã¬ã€ãã«ãããªããžã§ã¯ãæ€åºãæäŸããŸããããã¯ãç»åã¯ãšãªã䜿çšããŠãã¿ãŒã²ããç»ååã®é¡äŒŒãããªããžã§ã¯ããæ€çŽ¢ã§ãããšããããšã§ããããã¹ã ã¯ãšãªãšã¯ç°ãªãã䜿çšã§ãããµã³ãã«ç»å㯠1 ã€ã ãã§ãã察象ç»åãšããŠãœãã¡ã«2å¹ã®ç«ãããç»åããã¯ãšãªãšããŠ1å¹ã®ç«ã®ç»åã䜿çšããŸã:
```py
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)
>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```
ç»åãç°¡åã«èŠãŠã¿ãŸãããã
```py
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/>
</div>
ååŠçã¹ãããã§ã¯ãããã¹ã ã¯ãšãªã®ä»£ããã« `query_images` ã䜿çšããå¿èŠããããŸãã
```py
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
äºæž¬ã®å Žåãå¥åãã¢ãã«ã«æž¡ã代ããã«ã[`~OwlViTForObjectDetection.image_guided_detection`] ã«æž¡ããŸããã©ãã«ããªãããšãé€ããŠã以åãšåæ§ã«äºæž¬ãèŠèŠåããŸãã
```py
>>> with torch.no_grad():
... outputs = model.image_guided_detection(**inputs)
... target_sizes = torch.tensor([image_target.size[::-1]])
... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(image_target)
>>> scores = results["scores"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score in zip(boxes, scores):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)
>>> image_target
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/>
</div>
OWL-ViTã«ããæšè«ãã€ã³ã¿ã©ã¯ãã£ãã«è©Šãããå Žåã¯ããã®ãã¢ããã§ãã¯ããŠãã ããã
<iframe
src="https://adirik-owl-vit.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# äžè¬çãªãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ããã¡ã€ã« `utils.py` ã«ãã Transformers ã®äžè¬çãªãŠãŒãã£ãªãã£é¢æ°ããã¹ãŠãªã¹ããããŠããŸãã
ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªã§äžè¬çãªã³ãŒããåŠç¿ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
## åæåãšååä»ãã¿ãã«
[[autodoc]] utils.ExplicitEnum
[[autodoc]] utils.PaddingStrategy
[[autodoc]] utils.TensorType
## ç¹å¥ãªãã³ã¬ãŒã¿ãŒ
[[autodoc]] utils.add_start_docstrings
[[autodoc]] utils.add_start_docstrings_to_model_forward
[[autodoc]] utils.add_end_docstrings
[[autodoc]] utils.add_code_sample_docstrings
[[autodoc]] utils.replace_return_docstrings
## ç¹æ®ãªããããã£
[[autodoc]] utils.cached_property
## ãã®ä»ã®ãŠãŒãã£ãªãã£
[[autodoc]] utils._LazyModule
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ãã€ãã©ã€ã³çšã®ãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ãã©ã€ãã©ãªããã€ãã©ã€ã³ã«æäŸãããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŸãã
ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ã¢ãã«ã®ã³ãŒããç 究ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
## Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
## Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
## Utilities
[[autodoc]] pipelines.PipelineException
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Utilities for Tokenizers
ãã®ããŒãžã«ã¯ãããŒã¯ãã€ã¶ãŒã«ãã£ãŠäœ¿çšããããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ° (äž»ã«ã¯ã©ã¹) ããªã¹ããããŸããäž»ãªãã®ã¯ã[`PreTrainedTokenizer`] ãš [`PreTrainedTokenizerFast`] ã®éã®å±éã¡ãœãããå®è£ãã [`~tokenization_utils_base.PreTrainedTokenizerBase`] ãšãããã¯ã¹ã€ã³ [`~tokenization_utils_base.SpecialTokensMixin`] ã§ãããããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ããŒã¯ãã€ã¶ãŒã®ã³ãŒããåŠç¿ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
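As a short illustration, calling a tokenizer instance invokes `__call__` from [`~tokenization_utils_base.PreTrainedTokenizerBase`]. This is a minimal sketch; the `bert-base-uncased` checkpoint is used here only as a familiar example.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# __call__ (defined on PreTrainedTokenizerBase) handles tokenization,
# special tokens, padding/truncation options, and tensor conversion
encoding = tokenizer("Hello world", return_tensors="pt")
print(encoding.input_ids)
```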
## PreTrainedTokenizerBase
[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
- __call__
- all
## SpecialTokensMixin
[[autodoc]] tokenization_utils_base.SpecialTokensMixin
## Enums and namedtuples
[[autodoc]] tokenization_utils_base.TruncationStrategy
[[autodoc]] tokenization_utils_base.CharSpan
[[autodoc]] tokenization_utils_base.TokenSpan
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# çºé»çšãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ã[`~generation.GenerationMixin.generate`] ã§äœ¿çšããããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŠããŸãã
## åºåãçæãã
[`~generation.GenerationMixin.generate`] ã®åºåã¯ã[`~utils.ModelOutput`] ã®ãµãã¯ã©ã¹ã®ã€ã³ã¹ã¿ã³ã¹ã§ãããã®åºåã¯ã[`~generation.GenerationMixin.generate`] ã«ãã£ãŠè¿ããããã¹ãŠã®æå ±ãå«ãããŒã¿æ§é ã§ãããã¿ãã«ãŸãã¯èŸæžãšããŠã䜿çšã§ããŸãã
以äžã«äŸã瀺ããŸãã
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```
`generation_output` ãªããžã§ã¯ãã¯ã[`~generation.GenerateDecoderOnlyOutput`] ã§ãã以äžã®ãã®ã¯ã©ã¹ã®ããã¥ã¡ã³ãããããããã«ãããã¯æ¬¡ã®å±æ§ãããããšãæå³ããŸãã
- `sequences`: çæãããããŒã¯ã³ã®ã·ãŒã±ã³ã¹
- `scores` (ãªãã·ã§ã³): åçæã¹ãããã®èšèªã¢ããªã³ã° ãããã®äºæž¬ã¹ã³ã¢
- `hidden_states` (ãªãã·ã§ã³): çæã¹ãããããšã®ã¢ãã«ã®é ããç¶æ
- `attentions` (ãªãã·ã§ã³): çæã¹ãããããšã®ã¢ãã«ã®ã¢ãã³ã·ã§ã³ã®éã¿
ããã§ã¯ã`output_scores=True`ãæž¡ããã®ã§ `scores` ããããŸããã`hidden_states` ãš `attentions` ã¯ã`output_hidden_states=True` ãŸã㯠`output_attentions=True` ãæž¡ããªãã£ããããããŸããã
éåžžãšåãããã«åå±æ§ã«ã¢ã¯ã»ã¹ã§ããŸãããã®å±æ§ãã¢ãã«ããè¿ãããªãã£ãå Žåã¯ã`None` ãååŸããŸããããã§ã¯ãããšãã°`generation_output.scores`ã¯ãèšèªã¢ããªã³ã° ãããã®çæããããã¹ãŠã®äºæž¬ã¹ã³ã¢ã§ããã`generation_output.attentions`ã¯`None`ã§ãã
`generation_output` ãªããžã§ã¯ããã¿ãã«ãšããŠäœ¿çšããå Žåã`None` å€ãæããªãå±æ§ã®ã¿ãä¿æãããŸããããšãã°ãããã«ã¯ `sequences` ãš `scores` ãšãã 2 ã€ã®èŠçŽ ããããŸãã
```python
generation_output[:2]
```
ã€ãŸããã¿ãã« `(generation_output.sequences, generation_output.scores)` ãè¿ããŸãã
`generation_output` ãªããžã§ã¯ããèŸæžãšããŠäœ¿çšããå Žåã`None` ãæããªãå±æ§ã®ã¿ãä¿æãããŸãã
ããã§ã¯ãããšãã°ã`sequences`ãš`scores`ãšãã 2 ã€ã®ããŒããããŸãã
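For example, key-style access returns the same object as attribute access (a brief illustration reusing `generation_output` from above):

```python
generation_output["sequences"]  # equivalent to generation_output.sequences
```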
ããã§ã¯ãã¹ãŠã®åºåã¿ã€ããææžåããŸãã
### PyTorch
[[autodoc]] generation.GenerateDecoderOnlyOutput
[[autodoc]] generation.GenerateEncoderDecoderOutput
[[autodoc]] generation.GenerateBeamDecoderOnlyOutput
[[autodoc]] generation.GenerateBeamEncoderDecoderOutput
### TensorFlow
[[autodoc]] generation.TFGreedySearchEncoderDecoderOutput
[[autodoc]] generation.TFGreedySearchDecoderOnlyOutput
[[autodoc]] generation.TFSampleEncoderDecoderOutput
[[autodoc]] generation.TFSampleDecoderOnlyOutput
[[autodoc]] generation.TFBeamSearchEncoderDecoderOutput
[[autodoc]] generation.TFBeamSearchDecoderOnlyOutput
[[autodoc]] generation.TFBeamSampleEncoderDecoderOutput
[[autodoc]] generation.TFBeamSampleDecoderOnlyOutput
[[autodoc]] generation.TFContrastiveSearchEncoderDecoderOutput
[[autodoc]] generation.TFContrastiveSearchDecoderOnlyOutput
### FLAX
[[autodoc]] generation.FlaxSampleOutput
[[autodoc]] generation.FlaxGreedySearchOutput
[[autodoc]] generation.FlaxBeamSearchOutput
## LogitsProcessor
[`LogitsProcessor`] ã䜿çšããŠãçææã«èšèªã¢ãã« ãããã®äºæž¬ã¹ã³ã¢ãå€æŽã§ããŸãã
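以äžã¯ã`MinLengthLogitsProcessor` ãš `LogitsProcessorList` ã [`~generation.GenerationMixin.generate`] ã«æž¡ãç°¡åãªã¹ã±ããã§ã (å€ã¯ä»®ã®ãã®ã§ã)ã

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")

# keep the EOS score at -inf until at least 20 tokens have been generated
logits_processor = LogitsProcessorList(
    [MinLengthLogitsProcessor(20, eos_token_id=model.config.eos_token_id)]
)
outputs = model.generate(**inputs, logits_processor=logits_processor, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```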
### PyTorch
[[autodoc]] AlternatingCodebooksLogitsProcessor
- __call__
[[autodoc]] ClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] EncoderNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] EncoderRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] EpsilonLogitsWarper
- __call__
[[autodoc]] EtaLogitsWarper
- __call__
[[autodoc]] ExponentialDecayLengthPenalty
- __call__
[[autodoc]] ForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] ForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] ForceTokensLogitsProcessor
- __call__
[[autodoc]] HammingDiversityLogitsProcessor
- __call__
[[autodoc]] InfNanRemoveLogitsProcessor
- __call__
[[autodoc]] LogitNormalization
- __call__
[[autodoc]] LogitsProcessor
- __call__
[[autodoc]] LogitsProcessorList
- __call__
[[autodoc]] LogitsWarper
- __call__
[[autodoc]] MinLengthLogitsProcessor
- __call__
[[autodoc]] MinNewTokensLengthLogitsProcessor
- __call__
[[autodoc]] NoBadWordsLogitsProcessor
- __call__
[[autodoc]] NoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] PrefixConstrainedLogitsProcessor
- __call__
[[autodoc]] RepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] SequenceBiasLogitsProcessor
- __call__
[[autodoc]] SuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] SuppressTokensLogitsProcessor
- __call__
[[autodoc]] TemperatureLogitsWarper
- __call__
[[autodoc]] TopKLogitsWarper
- __call__
[[autodoc]] TopPLogitsWarper
- __call__
[[autodoc]] TypicalLogitsWarper
- __call__
[[autodoc]] UnbatchedClassifierFreeGuidanceLogitsProcessor
- __call__
[[autodoc]] WhisperTimeStampLogitsProcessor
- __call__
### TensorFlow
[[autodoc]] TFForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] TFForceTokensLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessor
- __call__
[[autodoc]] TFLogitsProcessorList
- __call__
[[autodoc]] TFLogitsWarper
- __call__
[[autodoc]] TFMinLengthLogitsProcessor
- __call__
[[autodoc]] TFNoBadWordsLogitsProcessor
- __call__
[[autodoc]] TFNoRepeatNGramLogitsProcessor
- __call__
[[autodoc]] TFRepetitionPenaltyLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] TFSuppressTokensLogitsProcessor
- __call__
[[autodoc]] TFTemperatureLogitsWarper
- __call__
[[autodoc]] TFTopKLogitsWarper
- __call__
[[autodoc]] TFTopPLogitsWarper
- __call__
### FLAX
[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
- __call__
[[autodoc]] FlaxForceTokensLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessor
- __call__
[[autodoc]] FlaxLogitsProcessorList
- __call__
[[autodoc]] FlaxLogitsWarper
- __call__
[[autodoc]] FlaxMinLengthLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensAtBeginLogitsProcessor
- __call__
[[autodoc]] FlaxSuppressTokensLogitsProcessor
- __call__
[[autodoc]] FlaxTemperatureLogitsWarper
- __call__
[[autodoc]] FlaxTopKLogitsWarper
- __call__
[[autodoc]] FlaxTopPLogitsWarper
- __call__
[[autodoc]] FlaxWhisperTimeStampLogitsProcessor
- __call__
## StoppingCriteria
[`StoppingCriteria`] ã䜿çšããŠã(EOS ããŒã¯ã³ä»¥å€ã§) çæãåæ¢ããã¿ã€ãã³ã°ãå€æŽã§ããŸãããã㯠PyTorch å®è£ã§ã®ã¿å©çšå¯èœã§ããããšã«æ³šæããŠãã ããã
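以äžã¯ã`MaxLengthCriteria` ã䜿çšããŠçæãåæ¢ããç°¡åãªã¹ã±ããã§ã (å€ã¯ä»®ã®ãã®ã§ã)ã

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    MaxLengthCriteria,
    StoppingCriteriaList,
)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Today is a beautiful day, and", return_tensors="pt")

# stop once the total length (prompt + generated tokens) reaches 20 tokens
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
outputs = model.generate(**inputs, stopping_criteria=stopping_criteria)
print(tokenizer.decode(outputs[0]))
```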
[[autodoc]] StoppingCriteria
- __call__
[[autodoc]] StoppingCriteriaList
- __call__
[[autodoc]] MaxLengthCriteria
- __call__
[[autodoc]] MaxTimeCriteria
- __call__
## Constraints
[`Constraint`] ã䜿çšãããšãçææã«åºåã«ç¹å®ã®ããŒã¯ã³ãŸãã¯ã·ãŒã±ã³ã¹ãå«ãŸããããã«åŒ·å¶ã§ããŸãããã㯠PyTorch å®è£ã§ã®ã¿å©çšå¯èœã§ããããšã«æ³šæããŠãã ããã
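以äžã¯ã`PhrasalConstraint` ã䜿çšããŠãåºåã«ç¹å®ã®ã·ãŒã±ã³ã¹ãå«ããç°¡åãªã¹ã±ããã§ã (å€ã¯ä»®ã®ãã®ã§ã)ã

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")

# force the phrase "Sie sind" to appear somewhere in the generated output
constraint = PhrasalConstraint(tokenizer("Sie sind", add_special_tokens=False).input_ids)
outputs = model.generate(**inputs, constraints=[constraint], num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```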
[[autodoc]] Constraint
[[autodoc]] PhrasalConstraint
[[autodoc]] DisjunctiveConstraint
[[autodoc]] ConstraintListState
## BeamSearch
[[autodoc]] BeamScorer
- process
- finalize
[[autodoc]] BeamSearchScorer
- process
- finalize
[[autodoc]] ConstrainedBeamSearchScorer
- process
- finalize
## Streamers
[[autodoc]] TextStreamer
[[autodoc]] TextIteratorStreamer
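以äžã¯ã`TextStreamer` ã䜿çšããç°¡åãªã¹ã±ããã§ãã

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("An increasing sequence: one,", return_tensors="pt")

# decode and print tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```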
# ãã¬ãŒããŒçšãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ã[`Trainer`] ã§äœ¿çšããããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŠããŸãã
ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ãã¬ãŒããŒã®ã³ãŒããåŠç¿ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
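ããšãã°ã`set_seed` ãš `enable_full_determinism` ã¯æ¬¡ã®ããã«äœ¿çšã§ããŸã (ç°¡åãªã¹ã±ãã)ã

```python
from transformers import enable_full_determinism, set_seed

set_seed(42)                 # seeds the python, numpy and torch RNGs at once
enable_full_determinism(42)  # additionally forces deterministic (slower) CUDA kernels
```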
## Utilities
[[autodoc]] EvalPrediction
[[autodoc]] IntervalStrategy
[[autodoc]] enable_full_determinism
[[autodoc]] set_seed
[[autodoc]] torch_distributed_zero_first
## Callbacks internals
[[autodoc]] trainer_callback.CallbackHandler
## Distributed Evaluation
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
## Trainer Argument Parser
[[autodoc]] HfArgumentParser
## Debug Utilities
[[autodoc]] debug_utils.DebugUnderflowOverflow
# æç³»åãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ãæç³»åããŒã¹ã®ã¢ãã«ã«äœ¿çšã§ãããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ãšã¯ã©ã¹ããªã¹ããããŸãã
ãããã®ã»ãšãã©ã¯ãæç³»åã¢ãã«ã®ã³ãŒããç 究ããŠããå ŽåããŸãã¯åæ£åºåã¯ã©ã¹ã®ã³ã¬ã¯ã·ã§ã³ã«è¿œå ãããå Žåã«ã®ã¿åœ¹ç«ã¡ãŸãã
## Distributional Output
[[autodoc]] time_series_utils.NormalOutput
[[autodoc]] time_series_utils.StudentTOutput
[[autodoc]] time_series_utils.NegativeBinomialOutput
# `FeatureExtractor` çšã®ãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ã*çæéããŒãªãšå€æ* ã *ãã° ã¡ã« ã¹ãã¯ããã°ã©ã * ãªã©ã®äžè¬çãªã¢ã«ãŽãªãºã ã䜿çšããŠçã®ãªãŒãã£ãªããç¹å¥ãªç¹åŸŽãèšç®ããããã«ããªãŒãã£ãª [`FeatureExtractor`] ã§äœ¿çšã§ãããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŠããŸãã
ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ãªãŒãã£ãª ããã»ããµã®ã³ãŒããåŠç¿ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
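以äžã¯ããããã®é¢æ°ã§ãã° ã¡ã« ã¹ãã¯ããã°ã©ã ãèšç®ããç°¡åãªã¹ã±ããã§ã (åŒæ°ã®å€ã¯ä»®ã®ãã®ã§ã)ã

```python
import numpy as np
from transformers.audio_utils import mel_filter_bank, spectrogram, window_function

# one second of dummy audio at 16 kHz
sampling_rate = 16000
waveform = np.random.randn(sampling_rate).astype(np.float32)

mel_filters = mel_filter_bank(
    num_frequency_bins=201,  # n_fft // 2 + 1 with n_fft=400
    num_mel_filters=80,
    min_frequency=0.0,
    max_frequency=8000.0,
    sampling_rate=sampling_rate,
)
log_mel = spectrogram(
    waveform,
    window=window_function(400, "hann"),
    frame_length=400,
    hop_length=160,
    power=2.0,
    mel_filters=mel_filters,
    log_mel="log10",
)
print(log_mel.shape)  # (num_mel_filters, num_frames)
```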
## ãªãŒãã£ãªå€æ
[[autodoc]] audio_utils.hertz_to_mel
[[autodoc]] audio_utils.mel_to_hertz
[[autodoc]] audio_utils.mel_filter_bank
[[autodoc]] audio_utils.optimal_fft_length
[[autodoc]] audio_utils.window_function
[[autodoc]] audio_utils.spectrogram
[[autodoc]] audio_utils.power_to_db
[[autodoc]] audio_utils.amplitude_to_db
# ã«ã¹ã¿ã ã¬ã€ã€ãŒãšãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ãã©ã€ãã©ãªã§äœ¿çšããããã¹ãŠã®ã«ã¹ã¿ã ã¬ã€ã€ãŒãšãã¢ããªã³ã°ã«æäŸããããŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŸãã
ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ã¢ãã«ã®ã³ãŒããç 究ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
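以äžã¯ã`pytorch_utils.Conv1D` ã䜿çšããç°¡åãªã¹ã±ããã§ã (å€ã¯ä»®ã®ãã®ã§ã)ã

```python
import torch
from transformers.pytorch_utils import Conv1D

# Conv1D(nf, nx) is the transposed-linear layer used by GPT-2:
# it maps the last dimension from nx input features to nf output features
layer = Conv1D(nf=2048, nx=768)
hidden_states = torch.randn(1, 10, 768)
print(layer(hidden_states).shape)  # torch.Size([1, 10, 2048])
```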
## Pytorch custom modules
[[autodoc]] pytorch_utils.Conv1D
[[autodoc]] modeling_utils.PoolerStartLogits
- forward
[[autodoc]] modeling_utils.PoolerEndLogits
- forward
[[autodoc]] modeling_utils.PoolerAnswerClass
- forward
[[autodoc]] modeling_utils.SquadHeadOutput
[[autodoc]] modeling_utils.SQuADHead
- forward
[[autodoc]] modeling_utils.SequenceSummary
- forward
## PyTorch Helper Functions
[[autodoc]] pytorch_utils.apply_chunking_to_forward
[[autodoc]] pytorch_utils.find_pruneable_heads_and_indices
[[autodoc]] pytorch_utils.prune_layer
[[autodoc]] pytorch_utils.prune_conv1d_layer
[[autodoc]] pytorch_utils.prune_linear_layer
## TensorFlow custom layers
[[autodoc]] modeling_tf_utils.TFConv1D
[[autodoc]] modeling_tf_utils.TFSequenceSummary
## TensorFlow loss functions
[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss
[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss
[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss
[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss
[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss
## TensorFlow Helper Functions
[[autodoc]] modeling_tf_utils.get_initializer
[[autodoc]] modeling_tf_utils.keras_serializable
[[autodoc]] modeling_tf_utils.shape_list
# ç»åããã»ããµçšãŠãŒãã£ãªãã£
ãã®ããŒãžã«ã¯ãç»åããã»ããµã§äœ¿çšããããã¹ãŠã®ãŠãŒãã£ãªãã£é¢æ°ããªã¹ããããŠããŸãããã®å€ãã¯ãç»åãåŠçããããã«äœ¿çšãããé¢æ°åã®å€æã§ãã

ãããã®ã»ãšãã©ã¯ãã©ã€ãã©ãªåã®ç»åããã»ããµã®ã³ãŒããåŠç¿ããå Žåã«ã®ã¿åœ¹ã«ç«ã¡ãŸãã
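以äžã¯ããããã®å€æã䜿çšããç°¡åãªã¹ã±ããã§ã (å€ã¯ä»®ã®ãã®ã§ã)ã

```python
import numpy as np
from transformers.image_transforms import normalize, rescale, resize

# a dummy HWC uint8 image
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

image = resize(image, (224, 224))      # resize to 224x224
image = rescale(image, scale=1 / 255)  # map pixel values to [0, 1]
image = normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
```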
## Image Transformations
[[autodoc]] image_transforms.center_crop
[[autodoc]] image_transforms.center_to_corners_format
[[autodoc]] image_transforms.corners_to_center_format
[[autodoc]] image_transforms.id_to_rgb
[[autodoc]] image_transforms.normalize
[[autodoc]] image_transforms.pad
[[autodoc]] image_transforms.rgb_to_id
[[autodoc]] image_transforms.rescale
[[autodoc]] image_transforms.resize
[[autodoc]] image_transforms.to_pil_image
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
# DeepSpeed Integration
[DeepSpeed](https://github.com/microsoft/DeepSpeed) ã¯ã[ZeRO è«æ](https://arxiv.org/abs/1910.02054) ã§èª¬æãããŠãããã¹ãŠãå®è£ããŸããçŸåšã次ã®ãã®ãå®åšã«ãµããŒãããŠããŸãã

1. ãªããã£ãã€ã¶ãŒã®ç¶æåå² (ZeRO ã¹ããŒãž 1)
2. åŸéåå² (ZeRO ã¹ããŒãž 2)
3. ãã©ã¡ãŒã¿ãŒã®åå² (ZeRO ã¹ããŒãž 3)
4. ã«ã¹ã¿ã æ··å粟床ãã¬ãŒãã³ã°åŠç
5. äžé£ã®é«é CUDA æ¡åŒµããŒã¹ã®ãªããã£ãã€ã¶ãŒ
6. CPU ããã³ NVMe ãžã® ZeRO ãªãããŒã

ZeRO-Offload ã«ã¯ç¬èªã®å°çšããŒããŒããããŸã: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)ãNVMe ãµããŒãã«ã€ããŠã¯ãè«æ [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857) ã§èª¬æãããŠããŸãã

DeepSpeed ZeRO-2 ã¯ããã®æ©èœãæšè«ã«ã¯åœ¹ã«ç«ããªããããäž»ã«ãã¬ãŒãã³ã°ã®ã¿ã«äœ¿çšãããŸãã

DeepSpeed ZeRO-3 ã¯ãåäžã® GPU ã§ã¯äžå¯èœãªå·šå€§ãªã¢ãã«ãè€æ°ã® GPU ã«ããŒãã§ãããããæšè«ã«ã䜿çšã§ããŸãã
ð€ Transformers ã¯ã2 ã€ã®ãªãã·ã§ã³ãä»ã㊠[DeepSpeed](https://github.com/microsoft/DeepSpeed) ãçµ±åããŸãã

1. [`Trainer`] ã«ããã³ã¢ DeepSpeed æ©èœã®çµ±åãããã¯äœã§ããã£ãŠãããã¿ã€ãã®çµ±åã§ã - ã«ã¹ã¿ã æ§æãã¡ã€ã«ãæå®ãããããã³ãã¬ãŒãã䜿çšããã ãã§ãä»ã«äœãããå¿èŠã¯ãããŸããããã®ããã¥ã¡ã³ãã§ã¯äž»ã«ãã®æ©èœã«çŠç¹ãåœãŠãŸãã
2. [`Trainer`] ã䜿çšãããDeepSpeed ãçµ±åããç¬èªã®ãã¬ãŒããŒã䜿çšãããå Žå - `from_pretrained` ã `from_config` ãªã©ã®ã³ã¢æ©èœã«ã¯ãZeRO ã¹ããŒãž 3 以éã® `zero.Init` ãªã©ãDeepSpeed ã®éèŠãªæ©èœã®çµ±åãå«ãŸããŠããŸãããã®æ©èœã掻çšããã«ã¯ã[éãã¬ãŒã㌠DeepSpeed çµ±å](#nontrainer-deepspeed-integration) ã®ããã¥ã¡ã³ããåç§ããŠãã ããã
çµ±åãããŠãããã®:

ãã¬ãŒãã³ã°ïŒ

1. DeepSpeed ZeRO ãã¬ãŒãã³ã°ã¯ãZeRO-Infinity (CPU ããã³ NVMe ãªãããŒã) ã䜿çšããŠå®åšãª ZeRO ã¹ããŒãž 1ã2ãããã³ 3 ããµããŒãããŸãã

æšè«ïŒ

1. DeepSpeed ZeRO Inference ã¯ãZeRO-Infinity ã«ãã ZeRO ã¹ããŒãž 3 ããµããŒãããŸãããã¬ãŒãã³ã°ãšåã ZeRO ãããã³ã«ã䜿çšããŸããããªããã£ãã€ã¶ãš lr ã¹ã±ãžã¥ãŒã©ã¯äœ¿çšãããã¹ããŒãž 3 ã®ã¿ãé¢é£ããŸãã詳现ã«ã€ããŠã¯ã[ãŒãæšè«](#zero-inference) ãåç§ããŠãã ããã

ãã®ã»ãã« DeepSpeed Inference ããããŸããããã¯ãZeRO ã®ä»£ããã« Tensor Parallelism ã䜿çšãããŸã£ããç°ãªããã¯ãããžãŒã§ã (è¿æ¥å¬é)ã
<a id='deepspeed-trainer-integration'></a>
## Trainer Deepspeed Integration
<a id='deepspeed-installation'></a>
### Installation
pypi çµç±ã§ã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããŸãã
```bash
pip install deepspeed
```
ãŸã㯠`transformers` ã® `extras` çµç±:
```bash
pip install transformers[deepspeed]
```
ãŸãã¯ã[DeepSpeed ã® GitHub ããŒãž](https://github.com/microsoft/deepspeed#installation) ãš [é«åºŠãªã€ã³ã¹ããŒã«](https://www.deepspeed.ai/tutorials/advanced-install/) ã§è©³çŽ°ã確èªããŠãã ããã

ããã§ããã«ãã«èŠåŽããå Žåã¯ããŸã [CUDA æ¡åŒµæ©èœã®ã€ã³ã¹ããŒã« ããŒã](trainer#cuda-extension-installation-notes) ãå¿ãèªãã§ãã ããã

æ¡åŒµæ©èœãäºåã«ãã«ããããå®è¡æã®ãã«ãã«äŸåããŠããŠãäžèšã®è§£æ±ºçããã¹ãŠè©ŠããŠã圹ã«ç«ããªãã£ãå Žåã次ã«è©Šãã¹ãã¯ãã¢ãžã¥ãŒã«ãã€ã³ã¹ããŒã«ããåã«äºåã«ãã«ãããããšã§ãã

DeepSpeed ã®ããŒã«ã« ãã«ããäœæããã«ã¯:
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \
--global-option="build_ext" --global-option="-j8" --no-cache -v \
--disable-pip-version-check 2>&1 | tee build.log
```
NVMe ãªãããŒãã䜿çšããå Žåã¯ãäžèšã®æé ã« `DS_BUILD_AIO=1` ãå«ããå¿èŠããããŸã (ãŸãã*libaio-dev* ãã·ã¹ãã åšäœã«ã€ã³ã¹ããŒã«ããŸã)ã

`TORCH_CUDA_ARCH_LIST` ãç·šéããŠã䜿çšãã GPU ã«ãŒãã®ã¢ãŒããã¯ãã£ã®ã³ãŒããæ¿å¥ããŸãããã¹ãŠã®ã«ãŒããåãã§ãããšä»®å®ãããšã次ã®æ¹æ³ã§ã¢ãŒããååŸã§ããŸãã
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
```
ãããã£ãŠã`8, 6` ãååŸããå Žåã¯ã`TORCH_CUDA_ARCH_LIST="8.6"` ã䜿çšããŸããè€æ°ã®ç°ãªãã«ãŒãããæã¡ã®å Žåã¯ã`TORCH_CUDA_ARCH_LIST="6.1;8.6"` ã®ããã«ãã¹ãŠããªã¹ãããããšãã§ããŸãã

è€æ°ã®ãã·ã³ã§åããã«ãã䜿çšããå¿èŠãããå Žåã¯ããã€ã㪠ãã€ãŒã«ãäœæããŸãã
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \
python setup.py build_ext -j8 bdist_wheel
```
`dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` ã®ãããªãã®ãçæãããã®ã§ã`pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` ãšããŠããŒã«ã«ãŸãã¯ä»ã®ãã·ã³ã«ã€ã³ã¹ããŒã«ã§ããŸãã

ç¹°ãè¿ããŸããã`TORCH_CUDA_ARCH_LIST` ãã¿ãŒã²ãã ã¢ãŒããã¯ãã£ã«åãããŠèª¿æŽããããšãå¿ããªãã§ãã ããã

NVIDIA GPU ã®å®åšãªãªã¹ããšãããã«å¯Ÿå¿ãã **ã³ã³ãã¥ãŒãã£ã³ã°æ©èœ** (ãã®ã³ã³ããã¹ãã®ã¢ãŒããšåããã®) ã¯ã[ãã¡ã](https://developer.nvidia.com/cuda-gpus) ã§èŠã€ããããšãã§ããŸãã
以äžã䜿çšããŠãpytorch ãæ§ç¯ãããã¢ãŒãã確èªã§ããŸãã
```bash
python -c "import torch; print(torch.cuda.get_arch_list())"
```
ããã§ã¯ãã€ã³ã¹ããŒã«ãããŠãã GPU ã® 1 ã€ã®ã¢ãŒããèŠã€ããæ¹æ³ã説æããŸããããšãã°ãGPU 0 ã®å Žå:
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; \
print(torch.cuda.get_device_properties(torch.device('cuda')))"
```
åºåã次ã®å Žå:
```bash
_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)
```
ããããã°ããã®ã«ãŒãã®ã¢ãŒãã `8.6` ã§ããããšãããããŸãã

`TORCH_CUDA_ARCH_LIST` ãå®åšã«çç¥ããããšãã§ããŸããããããã°ããã«ã ããã°ã©ã ãèªåçã«ã¯ãšãªãå®è¡ããŸãããã«ããè¡ããã GPU ã®ã¢ãŒããã¯ãã£ã¯ãã¿ãŒã²ãã ãã·ã³ã® GPU ãšäžèŽããå ŽåããããŸããäžèŽããªãå ŽåããããŸãã®ã§ãç®çã®ã¢ãŒããæ瀺çã«æå®ããããšããå§ãããŸãã

ææ¡ãããããšããã¹ãŠè©ŠããŠããŸã ãã«ãã®åé¡ãçºçããå Žåã¯ã[DeepSpeed](https://github.com/microsoft/DeepSpeed/issues) ã® GitHub Issue ã«é²ãã§ãã ããã
<a id='deepspeed-multi-gpu'></a>
### Deployment with multiple GPUs
DeepSpeed çµ±åããããã€ããã«ã¯ã[`Trainer`] ã³ãã³ã ã©ã€ã³åŒæ°ã調æŽããŠæ°ããåŒæ° `--deepspeed ds_config.json` ãå«ããŸããããã§ã`ds_config.json` 㯠[ãã¡ã](https://www.deepspeed.ai/docs/config-json/) ã«èšèŒãããŠãã DeepSpeed æ§æãã¡ã€ã«ã§ãããã¡ã€ã«åã¯ããªã次第ã§ãã

DeepSpeed ã® `add_config_arguments` ãŠãŒãã£ãªãã£ã䜿çšããŠãå¿èŠãªã³ãã³ã ã©ã€ã³åŒæ°ãã³ãŒãã«è¿œå ããããšããå§ãããŸãã詳现ã«ã€ããŠã¯ã[DeepSpeed ã®åŒæ°è§£æ](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) ããã¥ã¡ã³ããåç§ããŠãã ããã

ããã§ã¯å¥œã¿ã®ã©ã³ãã£ãŒã䜿çšã§ããŸããpytorch ã©ã³ãã£ãŒãåŒãç¶ã䜿çšã§ããŸãã
```bash
torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ãŸãã¯ã`deepspeed`ã«ãã£ãŠæäŸãããã©ã³ãã£ãŒã䜿çšããŸãã
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
ã芧ã®ãšãããåŒæ°ã¯åãã§ã¯ãããŸããããã»ãšãã©ã®ããŒãºã§ã¯ã©ã¡ãã§ãæ©èœããŸããããŸããŸãªããŒããš GPU ãæ§æããæ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ã[ãã¡ã](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ãåç§ããŠãã ããã

`deepspeed` ã©ã³ãã£ãŒã§å©çšå¯èœãªãã¹ãŠã® GPU ã䜿çšãããå Žåã¯ã`--num_gpus` ãã©ã°ãçç¥ããã ãã§ãã

以äžã¯ãå©çšå¯èœãªãã¹ãŠã® GPU ããããã€ãã DeepSpeed 㧠`run_translation.py` ãå®è¡ããäŸã§ãã
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
DeepSpeed ã®ããã¥ã¡ã³ãã«ã¯ã`--deepspeed --deepspeed_config ds_config.json` ã®ããã« DeepSpeed é¢é£ã®åŒæ°ã 2 ã€è¡šç€ºãããå¯èœæ§ãé«ãããšã«æ³šæããŠãã ãããããããåŠçãã¹ãåŒæ°ããã§ã«éåžžã«å€ããããç°¡åã«ããããã«ãã® 2 ã€ã 1 ã€ã®åŒæ°ã«çµåããŸããã

å®éã®äœ¿çšäŸã«ã€ããŠã¯ããã® [æçš¿](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400) ãåç§ããŠãã ããã
<a id='deepspeed-one-gpu'></a>
### Deployment with one GPU
1 ã€ã® GPU 㧠DeepSpeed ããããã€ããã«ã¯ã[`Trainer`] ã³ãã³ã ã©ã€ã³åŒæ°ã次ã®ããã«èª¿æŽããŸãã
```bash
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
ããã¯è€æ°ã® GPU ã®å Žåãšã»ãŒåãã§ãããããã§ã¯ `--num_gpus=1` ã«ãããDeepSpeed ã« 1 ã€ã® GPU ã ãã䜿çšããããã«æ瀺çã«æ瀺ããŸããããã©ã«ãã§ã¯ãDeepSpeed ã¯æå®ãããããŒãäžã§èªèã§ãããã¹ãŠã® GPU ããããã€ããŸããèµ·åãã GPU ã 1 ã€ã ãã®å Žåããã®åŒæ°ã¯å¿èŠãããŸããã次㮠[ããã¥ã¡ã³ã](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ã§ã¯ãã©ã³ãã£ãŒ ãªãã·ã§ã³ã«ã€ããŠèª¬æããŠããŸãã
1 ã€ã® GPU ã ã㧠DeepSpeed ã䜿çšãããçç±ã¯äœã§ãããã?

1. äžéšã®èšç®ãšã¡ã¢ãªããã¹ãã® CPU ãš RAM ã«å§ä»»ã§ãã ZeRO ãªãããŒãæ©èœãåããŠãããããã¢ãã«ã®ããŒãºã«åãããŠããå€ãã® GPU ãªãœãŒã¹ãæ®ãããšãã§ããŸããããã«ããããã倧ããªããã ãµã€ãºãã普éã¯åããªãéåžžã«å€§ããªã¢ãã«ã®ããŒããå¯èœã«ãªããŸãã
2. ã¹ããŒã㪠GPU ã¡ã¢ãªç®¡çã·ã¹ãã ãæäŸããã¡ã¢ãªã®æçåãæå°éã«æããŸããããã«ãããã倧ããªã¢ãã«ãšããŒã¿ ããããåããããã«ãªããŸãã

次ã«æ§æã«ã€ããŠè©³ãã説æããŸãããåäžã® GPU ã§å€§å¹ãªæ¹åãå®çŸããããã®éµã¯æ¬¡ã®ãšããã§ãïŒDeepSpeed ã䜿çšããã«ã¯ãæ§æãã¡ã€ã«ã«å°ãªããšã次ã®æ§æãå¿èŠã§ãã
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true
}
}
```
ããã«ããããªããã£ãã€ã¶ãŒã®ãªãããŒãããã®ä»ã®éèŠãªæ©èœãæå¹ã«ãªããŸãããããã¡ ãµã€ãºã¯ããããè©ŠããŠã¿ããšããã§ãããã詳现ã«ã€ããŠã¯ã以äžã®ãã£ã¹ã«ãã·ã§ã³ãåç§ããŠãã ããã

ãã®ã¿ã€ãã®ãããã€ã¡ã³ãã®å®éã®äœ¿çšäŸã«ã€ããŠã¯ããã® [æçš¿](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685) ãåç§ããŠãã ããã

ãã®ããã¥ã¡ã³ãã§è©³ãã説æãããŠããããã«ãCPU ããã³ NVMe ãªãããŒããåãã ZeRO-3 ãè©Šãããšãã§ããŸãã

ããŒãïŒ

- GPU 0 ãšã¯ç°ãªãç¹å®ã® GPU ã§å®è¡ããå¿èŠãããå Žåã`CUDA_VISIBLE_DEVICES` ã䜿çšããŠå©çšå¯èœãª GPU ã®è¡šç€ºç¯å²ãå¶éããããšã¯ã§ããŸããã代ããã«ã次ã®æ§æã䜿çšããå¿èŠããããŸãã
```bash
deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ...
```
ãã®äŸã§ã¯ãDeepSpeed ã« GPU 1 (2 çªç®ã® GPU) ã䜿çšããããã«æ瀺ããŸãã
<a id='deepspeed-multi-node'></a>
### è€æ°ã®ããŒãã䜿çšãããããã€ã¡ã³ã
ãã®ã»ã¯ã·ã§ã³ã®æ
å ±ã¯ DeepSpeed çµ±åã«åºæã®ãã®ã§ã¯ãªãããããããã«ãããŒã ããã°ã©ã ã«é©çšã§ããŸãããã ããDeepSpeed ã¯ãSLURM ç°å¢ã§ãªãéããä»ã®ã©ã³ãã£ãŒããã䜿ããã `deepspeed` ã©ã³ãã£ãŒãæäŸããŸãã

ãã®ã»ã¯ã·ã§ã³ã§ã¯ããããã 8 GPU ãåãã 2 ã€ã®ããŒãããããšä»®å®ããŸãããŸããæåã®ããŒãã«ã¯ `ssh hostname1`ã2 çªç®ã®ããŒãã«ã¯ `ssh hostname2` ã§æ¥ç¶ã§ããäž¡æ¹ãšããã¹ã¯ãŒããªãã§ããŒã«ã«ã® ssh çµç±ã§çžäºã«æ¥ç¶ã§ããå¿èŠããããŸãããã¡ããããããã®ãã¹ã (ããŒã) åããäœæ¥ããŠããå®éã®ãã¹ãåã«å€æŽããå¿èŠããããŸãã
#### The torch.distributed.run launcher
ããšãã°ã`torch.distributed.run` ã䜿çšããã«ã¯ã次ã®ããã«ããŸãã
```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```
åããŒãã« SSH ã§æ¥ç¶ããããããã®ããŒãã§åãã³ãã³ããå®è¡ããå¿èŠããããŸããæ¥ãå¿èŠã¯ãããŸãããã©ã³ãã£ãŒã¯äž¡æ¹ã®ããŒããåæãããŸã§åŸæ©ããŸãã

詳现ã«ã€ããŠã¯ã[torchrun](https://pytorch.org/docs/stable/elastic/run.html) ãåç§ããŠãã ãããã¡ãªã¿ã«ãããã¯æ°ããŒãžã§ã³åã® pytorch ã® `torch.distributed.launch` ã眮ãæããã©ã³ãã£ãŒã§ããããŸãã
#### ãã£ãŒãã¹ããŒã ã©ã³ãã£ãŒ
代ããã« `deepspeed` ã©ã³ãã£ãŒã䜿çšããã«ã¯ããŸã `hostfile` ãã¡ã€ã«ãäœæããå¿èŠããããŸãã
```
hostname1 slots=8
hostname2 slots=8
```
ãããŠã次ã®ããã«èµ·åã§ããŸãã
```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```
`torch.distributed.run` ã©ã³ãã£ãŒãšã¯ç°ãªãã`deepspeed` ã¯äž¡æ¹ã®ããŒãã§ãã®ã³ãã³ããèªåçã«èµ·åããŸãã

詳现ã«ã€ããŠã¯ã[ãªãœãŒã¹æ§æ (ãã«ãããŒã)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) ãåç§ããŠãã ããã
#### Launching in a SLURM environment
SLURM ç°å¢ã§ã¯ã次ã®ã¢ãããŒãã䜿çšã§ããŸãã以äžã¯ãç¹å®ã® SLURM ç°å¢ã«é©åãããããã«å¿èŠãª slurm ã¹ã¯ãªãã `launch.slurm` ã§ãã
```bash
#SBATCH --job-name=test-nodes # name
#SBATCH --nodes=2 # nodes
#SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10 # number of cores per tasks
#SBATCH --gres=gpu:8 # number of gpus
#SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out # output file name
export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901
srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
--nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
--master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```
ããšã¯å®è¡ãã¹ã±ãžã¥ãŒã«ããã ãã§ãã
```bash
sbatch launch.slurm
```
#### Use of Non-shared filesystem
ããã©ã«ãã§ã¯ãDeepSpeed ã¯ãã«ãããŒãç°å¢ãå±æã¹ãã¬ãŒãžã䜿çšããããšãæ³å®ããŠããŸãããããåœãŠã¯ãŸãããåããŒããããŒã«ã« ãã¡ã€ã«ã·ã¹ãã ããåç§ã§ããªãå Žåã¯ãèšå®ãã¡ã€ã«ã調æŽã㊠[`checkpoint` ã»ã¯ã·ã§ã³](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) (ãã§ãã¯ãã€ã³ã ãªãã·ã§ã³) ã次ã®èšå®ã§å«ããå¿èŠããããŸãã
```json
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
ãããã¯ã[`Trainer`] ã® `--save_on_each_node` åŒæ°ã䜿çšããããšãã§ããäžèšã®èšå®ã¯èªåçã«è¿œå ãããŸãã
<a id='deepspeed-notebook'></a>
### Deployment in Notebooks
ããŒãããã¯ã®ã»ã«ãã¹ã¯ãªãããšããŠå®è¡ããå Žåã®åé¡ã¯ãäŸåã§ããéåžžã® `deepspeed` ã©ã³ãã£ãŒããªãããšã§ããç¹å®ã®èšå®ã§ã¯ãããããšãã¥ã¬ãŒãããå¿èŠããããŸãã

GPU ã 1 ã€ã ã䜿çšããŠããå ŽåãDeepSpeed ã䜿çšããããã«ããŒãããã¯åã®ãã¬ãŒãã³ã° ã³ãŒãã調æŽããæ¹æ³ã¯æ¬¡ã®ãšããã§ãã
```python
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
泚: `...` ã¯ãé¢æ°ã«æž¡ãéåžžã®åŒæ°ãè¡šããŸãã

è€æ°ã® GPU ã䜿çšããå ŽåãDeepSpeed ãåäœããã«ã¯ãã«ãããã»ã¹ç°å¢ã䜿çšããå¿èŠããããŸããã€ãŸãããã®ç®çã§ã¯ã©ã³ãã£ãŒã䜿çšããå¿èŠããããããã¯ãã®ã»ã¯ã·ã§ã³ã®åé ã§ç€ºããåæ£ç°å¢ã®ãšãã¥ã¬ãŒã·ã§ã³ã§ã¯å®çŸã§ããŸããã

çŸåšã®ãã£ã¬ã¯ããªã®ããŒãããã¯ã«ãã®å Žã§æ§æãã¡ã€ã«ãäœæãããå Žåã¯ãå°çšã®ã»ã«ã§æ¬¡ã®å容ãå®è¡ããŸãã
```python no-style
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
```
ãã¬ãŒãã³ã° ã¹ã¯ãªãããããŒãããã¯ã®ã»ã«ã§ã¯ãªãéåžžã®ãã¡ã€ã«ã«ããå Žåã¯ãã·ã§ã«ããéåžžã©ãã `deepspeed` ãèµ·åã§ããŸããããšãã°ã`run_translation.py` ã䜿çšããã«ã¯ã次ã®ããã«èµ·åããŸãã
```python no-style
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```
ãŸãã¯ã`%%bash` ããžãã¯ã䜿çšãããšãã·ã§ã« ããã°ã©ã ãå®è¡ããããã®è€æ°è¡ã®ã³ãŒããèšè¿°ããããšãã§ããŸãã
```python no-style
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```
ãã®ãããªå Žåããã®ã»ã¯ã·ã§ã³ã®æåã«ç€ºããã³ãŒãã¯å¿èŠãããŸããã

泚: `%%bash` ããžãã¯ã¯åªããŠããŸãããçŸæç¹ã§ã¯åºåããããã¡ãªã³ã°ããããããããã»ã¹ãå®äºãããŸã§ãã°ã¯è¡šç€ºãããŸããã
<a id='deepspeed-config'></a>
### Configuration
èšå®ãã¡ã€ã«ã§äœ¿çšã§ãã DeepSpeed èšå®ãªãã·ã§ã³ã®å®åšãªã¬ã€ãã«ã€ããŠã¯ã[次ã®ããã¥ã¡ã³ã](https://www.deepspeed.ai/docs/config-json/) ãåç§ããŠãã ããã

ããŸããŸãªå®éã®ããŒãºã«å¯Ÿå¿ããæ°åã® DeepSpeed æ§æäŸã¯ã[DeepSpeedExamples ãªããžããª](https://github.com/microsoft/DeepSpeedExamples) ã§èŠã€ããããšãã§ããŸãã
```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
```
äžèšã®ã³ãŒããç¶ããŠãLamb ãªããã£ãã€ã¶ãŒãæ§æããããšããŠãããšããŸãããã®å Žåã次ã®ããã«ã㊠`.json` ãã¡ã€ã«ã®äŸãæ€çŽ¢ã§ããŸãã
```bash
grep -i Lamb $(find . -name '*json')
```
ããã«ããã€ãã®äŸã [ã¡ã€ã³ ãªããžããª](https://github.com/microsoft/DeepSpeed) ã«ããããŸãã
DeepSpeed ã䜿çšããå Žåã¯ãåžžã« DeepSpeed æ§æãã¡ã€ã«ãæå®ããå¿èŠããããŸãããäžéšã®æ§æãã©ã¡ãŒã¿ã¯ã³ãã³ãã©ã€ã³çµç±ã§èšå®ããå¿èŠããããŸãã埮åŠãªéãã«ã€ããŠã¯ããã®ã¬ã€ãã®æ®ãã®éšåã§èª¬æããŸãã

DeepSpeed æ§æãã¡ã€ã«ãã©ã®ãããªãã®ããç解ããããã«ãZeRO ã¹ããŒãž 2 æ©èœãæå¹ã«ããæ§æãã¡ã€ã«ã次ã«ç€ºããŸããããã¯ããªããã£ãã€ã¶ãŒç¶æã® CPU ãªãããŒããå«ã¿ã`AdamW` ãªããã£ãã€ã¶ãŒãš `WarmupLR` ã¹ã±ãžã¥ãŒã©ãŒã䜿çšãã`--fp16` ãæž¡ãããå Žåã«ã¯æ··å粟床ãã¬ãŒãã³ã°ãæå¹ã«ããŸãã
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
}
```
ããã°ã©ã ãå®è¡ãããšãDeepSpeed 㯠[`Trainer`] ããåãåã£ãèšå®ãã³ã³ãœãŒã«ã«ãã°ãšããŠåºåããŸãããã®ãããæçµçã«ã©ã®ãããªèšå®ãæž¡ãããã®ããæ£ç¢ºã«ç¢ºèªã§ããŸãã
<a id='deepspeed-config-passing'></a>
### Passing Configuration
ãã®ããã¥ã¡ã³ãã§èª¬æããããã«ãéåžžãDeepSpeed èšå®ã¯ json ãã¡ã€ã«ãžã®ãã¹ãšããŠæž¡ãããŸãããèšå®ã«ã³ãã³ã ã©ã€ã³ ã€ã³ã¿ãŒãã§ã€ã¹ã䜿çšããã«ã[`TrainingArguments`] çµç±ã§ [`Trainer`] ã®ã€ã³ã¹ã¿ã³ã¹ãäœæããå Žåã¯ã`deepspeed` åŒæ°ã«ãã¹ãããã `dict` ãæž¡ãããšãã§ããŸããããã«ããããã®å Žã§æ§æãäœæã§ãã[`TrainingArguments`] ã«æž¡ãåã«ãã¡ã€ã« ã·ã¹ãã ã«æžã蟌ãå¿èŠããããŸããã

èŠçŽãããšã次ã®ããšãã§ããŸãã
```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```
ãŸãã¯ïŒ
```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```
<a id='deepspeed-config-shared'></a>
### Shared Configuration
<Tip warning={true}>
ãã®ã»ã¯ã·ã§ã³ã¯å¿èªã§ãã
</Tip>
[`Trainer`] ãš DeepSpeed ã®äž¡æ¹ãæ£ããæ©èœããã«ã¯ãããã€ãã®èšå®å€ãäž¡æ¹ã§å¿èŠã§ãããããã£ãŠãæ€åºãå°é£ãªãšã©ãŒã«ã€ãªããå¯èœæ§ã®ããå®çŸ©ã®ç«¶åãé²ãããã«ãããã㯠[`Trainer`] ã³ãã³ãã©ã€ã³åŒæ°çµç±ã§æ§æããããšã«ããŸããã

ããã«ãäžéšã®æ§æå€ã¯ã¢ãã«ã®æ§æã«åºã¥ããŠèªåçã«å°åºããããããè€æ°ã®å€ãæåã§èª¿æŽããããšãå¿ããªãããã«ããã®èšå®ã®å€§éšå㯠[`Trainer`] ã«ä»»ããã®ãæåã§ãã

ãããã£ãŠããã®ã¬ã€ãã®æ®ãã®éšåã§ã¯ãç¹å¥ãªèšå®å€ `auto` ã衚瀺ãããŸãããããèšå®ãããšãæ£ããå€ãŸãã¯æãå¹ççãªå€ã«èªåçã«çœ®ãæããããŸãããããç¡èŠããŠå€ãæ瀺çã«èšå®ããããšãèªç±ã§ããããã®å Žåã¯ã[`Trainer`] åŒæ°ãš DeepSpeed èšå®ãäžèŽããããšã«åå泚æããŠãã ãããããšãã°ãåãåŠç¿çãããã ãµã€ãºããŸãã¯åŸé环ç©èšå®ã䜿çšããŠããŸãã?ããããäžèŽããªãå Žåããã¬ãŒãã³ã°ã¯æ€åºãé£ããæ¹æ³ã§å€±æããå¯èœæ§ããããŸããããªãã¯èŠåãåããŸããã

ããã«ãDeepSpeed ã®ã¿ã«åºæã§ãããããªãã®èŠæã«åãããŠæåã§èšå®ããå¿èŠãããå€ãä»ã«ãè€æ°ãããŸãã

ç¬èªã®ããã°ã©ã ã§ãDeepSpeed æ§æããã¹ã¿ãŒãšããŠå€æŽãããå Žåã¯ã次ã®ã¢ãããŒãã䜿çšããããã«åºã¥ã㊠[`TrainingArguments`] ãèšå®ããããšãã§ããŸããæé ã¯æ¬¡ã®ãšããã§ãã

1. ãã¹ã¿ãŒæ§æãšããŠäœ¿çšãã DeepSpeed æ§æãäœæãŸãã¯ããŒãããŸã
2. ãããã®å€ã«åºã¥ã㊠[`TrainingArguments`] ãªããžã§ã¯ããäœæããŸã

`scheduler.params.total_num_steps` ãªã©ã®äžéšã®å€ã¯ã[`Trainer`] ã® `train` äžã«èšç®ãããããšã«æ³šæããŠãã ããããã¡ãããèªåã§èšç®ããããšãã§ããŸãããã®ã¢ãããŒãã®ç°¡åãªã¹ã±ããã以äžã«ç€ºããŸãã
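以äžã®ã¹ã±ããã§ã¯ããã¡ã€ã«åãå€ã¯ä»®ã®ãã®ã§ãã

```python
import json
from transformers import TrainingArguments

# load a master DeepSpeed config and build TrainingArguments to match it
with open("ds_config_zero3.json") as f:
    ds_config = json.load(f)

training_args = TrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=8,  # must match train_micro_batch_size_per_gpu
    gradient_accumulation_steps=1,  # must match gradient_accumulation_steps
    learning_rate=3e-5,             # must match optimizer.params.lr
    deepspeed=ds_config,            # the dict can be passed directly
)
```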
<a id='deepspeed-zero'></a>
### ZeRO
[Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) ã¯ãDeepSpeed ã®äž»å補åã§ãã3 ã€ã®ç°ãªãã¬ãã« (段é) ã®æé©åããµããŒãããŸããæåã®ãã®ã¯ãã¹ã±ãŒã©ããªãã£ã®èŠ³ç¹ããã¯ããŸãèå³æ·±ããã®ã§ã¯ãªãããããã®ããã¥ã¡ã³ãã§ã¯ã¹ããŒãž 2 ãš 3 ã«çŠç¹ãåœãŠãŸããã¹ããŒãž 3 ã¯ãææ°ã® ZeRO-Infinity ã®è¿œå ã«ãã£ãŠããã«æ¹åãããŠããŸãã詳现ã«ã€ããŠã¯ãDeepSpeed ã®ããã¥ã¡ã³ããåç§ããŠãã ããã

æ§æãã¡ã€ã«ã® `zero_optimization` ã»ã¯ã·ã§ã³ã¯æãéèŠãªéšåã§ã ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training))ãããã§ãã©ã® ZeRO ã¹ããŒãžãæå¹ã«ãããããããŠãããã©ã®ããã«æ§æããããå®çŸ©ããŸãããã©ã¡ãŒã¿ã®èª¬æ㯠DeepSpeed ã®ããã¥ã¡ã³ãã«ãããŸãã

ãã®ã»ã¯ã·ã§ã³ã¯ãDeepSpeed èšå®ãä»ããŠã®ã¿èšå®ããå¿èŠããããŸã - [`Trainer`] ã¯åçã®ã³ãã³ãã©ã€ã³åŒæ°ãæäŸããŸããã

泚: çŸåšãDeepSpeed ã¯ãã©ã¡ãŒã¿ãŒåãæ€èšŒããªããããã¹ãã«ãééãããšãããã©ã«ãèšå®ã䜿çšãããŸããDeepSpeed ãšã³ãžã³ã®èµ·åãã° ã¡ãã»ãŒãžãèŠãŠãã©ã®å€ã䜿çšããã€ããããªã®ãã確èªã§ããŸãã
<a id='deepspeed-zero2-config'></a>
#### ZeRO-2 Config
以äžã¯ãZeRO ã¹ããŒãž 2 ã®æ§æäŸã§ãã
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
}
}
```
**æ§èœèª¿æŽïŒ**

- `offload_optimizer` ãæå¹ã«ãããšãGPU RAM ã®äœ¿çšéãåæžãããŸã (`"stage": 2` ãå¿èŠã§ã)
- `"overlap_comm": true` ã¯ãGPU RAM 䜿çšéã®å¢å ãšåŒãæãã«é延ãåæžããŸãã`overlap_comm` 㯠`allgather_bucket_size` ãš `reduce_bucket_size` ã®å€ã® 4.5 åã®ã¡ã¢ãªã䜿çšããŸãããããã£ãŠããããã 5e8 ã«èšå®ãããŠããå Žåã9GB ã® GPU RAM ãå¿èŠã«ãªããŸã (`5e8 x 2Bytes x 2 x 4.5`ããã®ãªã¹ãã®åŸã®ã¹ã±ãããåç§)ã8GB 以äžã® RAM ãæèŒãã GPU 㧠OOM ãšã©ãŒãçºçããå Žåã¯ããããã®ãã©ã¡ãŒã¿ã `2e8` çšåºŠã«æžããå¿èŠããããããã«ã¯ 3.6GB ãå¿èŠã«ãªããŸãããã倧容éã® GPU 㧠OOM ã«éãå§ããŠããå Žåãåæ§ã§ãã
- ãããã®ãããã¡ãæžãããšãããå€ãã® GPU RAM ãå©çšããããã«éä¿¡é床ãç ç²ã«ããããšã«ãªããŸãããããã¡ ãµã€ãºãå°ããã»ã©éä¿¡ã¯éããªããä»ã®ã¿ã¹ã¯ã§äœ¿çšã§ãã GPU RAM ã¯å¢ããŸãããããã£ãŠãããã ãµã€ãºã®å€§ãããéèŠãªå Žåã¯ããã¬ãŒãã³ã°æéãå°ãéãããããšã¯è¯ããã¬ãŒãã«ãªãå¯èœæ§ããããŸãã
ããã«ã`deepspeed==0.4.4` ã«ã¯ã次ã®èšå®ã§æå¹ã«ã§ããæ°ãããªãã·ã§ã³ `round_robin_gradients` ãè¿œå ãããŸããã
```json
{
"zero_optimization": {
"round_robin_gradients": true
}
}
```
ããã¯ããã现ããåŸéããŒãã£ã·ã§ãã³ã°ã«ãã£ãŠã©ã³ã¯éã® CPU ã¡ã¢ãªãžã®åŸéã³ããŒã䞊ååãããCPU ãªãããŒãçšã®ã¹ããŒãž 2 æé©åã§ããããã©ãŒãã³ã¹äžã®å©ç¹ã¯ãåŸé环ç©ã¹ããã (ãªããã£ãã€ã¶ãŒ ã¹ãããéã®ã³ããŒã®å¢å ) ãŸã㯠GPU æ° (䞊ååŠçã®å¢å ) ã«å¿ããŠå¢å ããŸãã
<a id='deepspeed-zero3-config'></a>
#### ZeRO-3 Config
以äžã¯ãZeRO ã¹ããŒãž 3 ã®æ§æäŸã§ãã
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
ã¢ãã«ãŸãã¯ã¢ã¯ãã£ããŒã·ã§ã³ã GPU ã¡ã¢ãªã«é©åãããCPU ãæªäœ¿çšã§ãã OOM ãçºçããŠããå Žåã¯ã`"device": "cpu"` ã䜿çšããŠãªããã£ãã€ã¶ã®ç¶æãšãã©ã¡ãŒã¿ã CPU ã¡ã¢ãªã«ãªãããŒããããšããã®å¶éã解決ãããå¯èœæ§ããããŸããCPU ã¡ã¢ãªã«ãªãããŒãããããªãå Žåã¯ã`device` ãšã³ããªã« `cpu` ã®ä»£ããã« `none` ã䜿çšããŸããNVMe ãžã®ãªãããŒãã«ã€ããŠã¯åŸã»ã©èª¬æããŸãã

åºå®ã¡ã¢ãªã¯ã`pin_memory` ã `true` ã«èšå®ãããšæå¹ã«ãªããŸãããã®æ©èœã«ãããä»ã®ããã»ã¹ã䜿çšã§ããã¡ã¢ãªãå°ãªããªããšããã³ã¹ããšåŒãæãã«ãã¹ã«ãŒããããåäžãããããšãã§ããŸãããã³çããããã¡ã¢ãªã¯ããããèŠæ±ããç¹å®ã®ããã»ã¹ã®ããã«ç¢ºä¿ãããéåžžãéåžžã® CPU ã¡ã¢ãªãããã¯ããã«é«éã«ã¢ã¯ã»ã¹ãããŸãã
**æ§èœèª¿æŽïŒ**

- `stage3_max_live_parameters`: `1e9`
- `stage3_max_reuse_distance`: `1e9`

OOM ã«éããå Žåã¯ã`stage3_max_live_parameters` ãš `stage3_max_reuse_distance` ãæžãããŸããã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ããå®è¡ããªãéããããã©ãŒãã³ã¹ãžã®åœ±é¿ã¯æå°éã«æããããã¯ãã§ãã`1e9` ã¯çŽ 2GB ãæ¶è²»ããŸãã`stage3_max_live_parameters` ãš `stage3_max_reuse_distance` ã¯ã¡ã¢ãªãå±æããŠããã®ã§ãå ç®ããããã®ã§ã¯ãªããåèšã§ 2GB ã«ãªããŸãã

`stage3_max_live_parameters` ã¯ãç¹å®ã®æç¹ã§ GPU äžã«ä¿æããå®åšãªãã©ã¡ãŒã¿ã®æ°ã®äžéã§ãããåå©çšè·é¢ãã¯ããã©ã¡ãŒã¿ãå°æ¥ãã€åã³äœ¿çšããããã倿ããããã®ææšã§ã`stage3_max_reuse_distance` ã䜿çšããŠããã©ã¡ãŒã¿ãç Žæ£ãããä¿æãããã決å®ããŸãããã©ã¡ãŒã¿ãè¿ãå°æ¥ (`stage3_max_reuse_distance` æªæº) ã«åã³äœ¿çšãããäºå®ãªããéä¿¡ãæžããããã«ä¿æããŸããããã¯ãã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ããæå¹ã«ããŠããå Žåã«éåžžã«åœ¹ç«ã¡ãŸãããã©ã¯ãŒãåèšç®ãè¡ã backward ãåäžã¬ã€ã€ãŒç²åºŠã§æž¡ããåŸæ¹åèšç®ãŸã§ãã©ã¡ãŒã¿ãåæ¹åèšç®ã«ä¿æã§ããããã§ãã

次ã®æ§æå€ã¯ãã¢ãã«ã® `hidden_size` ã«ãã£ãŠç°ãªããŸãã

- `reduce_bucket_size`: `hidden_size*hidden_size`
- `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
- `stage3_param_persistence_threshold`: `10 * hidden_size`

ãããã£ãŠããããã®å€ã `auto` ã«èšå®ãããšã[`Trainer`] ãæšå¥šãããå€ãèªåçã«å²ãåœãŠãŸãããã¡ãããããããæ瀺çã«èšå®ããããšãã§ããŸãã
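以äžã¯ã[`Trainer`] ãšåãèšç®ãæåã§è¡ãç°¡åãªã¹ã±ããã§ã (`hidden_size` ã¯ä»®ã®å€ã§ã)ã

```python
# the same computation the Trainer performs when resolving "auto"
hidden_size = 1024  # hypothetical model hidden size

reduce_bucket_size = hidden_size * hidden_size                # 1048576
stage3_prefetch_bucket_size = 0.9 * hidden_size * hidden_size
stage3_param_persistence_threshold = 10 * hidden_size         # 10240
```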
`stage3_gather_16bit_weights_on_model_save` ã¯ãã¢ãã«ã®ä¿åæã« fp16 ã®éã¿ã®çµ±åãæå¹ã«ããŸãã倧ããªã¢ãã«ãšè€æ°ã® GPU ã®å Žåãããã¯ã¡ã¢ãªãšé床ã®äž¡æ¹ã®ç¹ã§é«äŸ¡ãªæäœã§ãããã¬ãŒãã³ã°ãåéããäºå®ãããå ŽåãçŸåšã¯ãããå¿é ã§ãããã®å¶éãåãé€ãããã䟿å©ã«ããä»åŸã®ã¢ããããŒãã«æ³šç®ããŠãã ããã

ZeRO-2 æ§æãã移è¡ããŠããå Žåã¯ã`allgather_partitions`ã`allgather_bucket_size`ãããã³ `reduce_scatter` èšå®ãã©ã¡ãŒã¿ã¯ ZeRO-3 ã§ã¯äœ¿çšãããªãããšã«æ³šæããŠãã ãããããããèšå®ãã¡ã€ã«ã«ä¿åããŠãããšãç¡èŠãããŸãã
- `sub_group_size`: `1e9`
`sub_group_size` ã¯ããªããã£ãã€ã¶ãŒã®ã¹ãããäžã«ãã©ã¡ãŒã¿ãŒãæŽæ°ãããç²åºŠãå¶åŸ¡ããŸãããã©ã¡ãŒã¿ã¯ `sub_group_size` ã®ãã±ããã«ã°ã«ãŒãåãããåãã±ããã¯äžåºŠã« 1 ã€ãã€æŽæ°ãããŸããZeRO-Infinity ã® NVMe ãªãããŒãã§äœ¿çšããå Žåã`sub_group_size` ã¯ããªããã£ãã€ã¶ã¹ãããäžã«ã¢ãã«ã®ç¶æã NVMe ãã CPU ã¡ã¢ãªã«åºå¥ãããç²åºŠãå¶åŸ¡ããŸããããã«ãããéåžžã«å€§èŠæš¡ãªã¢ãã«ã® CPU ã¡ã¢ãªäžè¶³ãé²æ¢ãããŸãã

NVMe ãªãããŒãã䜿çšããªãå Žåã¯ã`sub_group_size` ãããã©ã«ãå€ã® *1e9* ã®ãŸãŸã«ããããšãã§ããŸãã次ã®å Žåã¯ããã®ããã©ã«ãå€ãå€æŽãããšããã§ãããã

1. ãªããã£ãã€ã¶ãŒ ã¹ãããäžã« OOM ãçºçãã: `sub_group_size` ãæžãããŠãäžæãããã¡ãŒã®ã¡ã¢ãªäœ¿çšéãåæžããŸãã
2. ãªããã£ãã€ã¶ãŒ ã¹ãããã«æéãããã: `sub_group_size` ãå¢ãããŠãããŒã¿ãããã¡ã®å¢å ã«ãã垯åå¹ã®äœ¿çšçãåäžãããŸãã
#### ZeRO-0 Config
ã¹ããŒãž 0 ãš 1 ã¯ãã£ãã«äœ¿çšãããªããããæåŸã«ãªã¹ãããŠããŸãã

ã¹ããŒãž 0 ã§ã¯ããã¹ãŠã®ã¿ã€ãã®ã·ã£ãŒãã£ã³ã°ãç¡å¹ã«ããDDP ãšã㊠DeepSpeed ã®ã¿ã䜿çšããŸãã次ã®èšå®ã§æå¹ã«ã§ããŸãã
```json
{
"zero_optimization": {
"stage": 0
}
}
```
ããã«ãããä»ã«äœãå€æŽããå¿èŠããªããåºæ¬çã« ZeRO ãç¡å¹ã«ãªããŸãã
#### ZeRO-1 Config
ã¹ããŒãž 1 ã¯ãã¹ããŒãž 2 ããåŸéã·ã£ãŒãã£ã³ã°ãé€ãããã®ã§ãããªããã£ãã€ã¶ãŒã®ç¶æãã·ã£ãŒãåããã ãã§ãåŠçãå°ãé«éåããããã«ãã€ã§ãè©Šãããšãã§ããŸãã
```json
{
"zero_optimization": {
"stage": 1
}
}
```
<a id='deepspeed-nvme'></a>
### NVMe Support
ZeRO-Infinity ã¯ãGPU ãš CPU ã®ã¡ã¢ãªã NVMe ã¡ã¢ãªã§æ¡åŒµããããšã§ãéåžžã«å€§èŠæš¡ãªã¢ãã«ã®ãã¬ãŒãã³ã°ãå¯èœã«ããŸããã¹ããŒã ããŒãã£ã·ã§ãã³ã°ããã³ã¿ã€ãªã³ã° ã¢ã«ãŽãªãºã ã«ãããå GPU ãéåä¿¡ããããŒã¿ã¯éåžžã«å°éã§æžãããããã«ãªãããŒãã«ãããææ°ã® NVMe ã¯ããã¬ãŒãã³ã° ããã»ã¹ã«å©çšã§ããåèšã¡ã¢ãª ããŒã«ãããã«å€§ããããã®ã«é©ããŠããããšãå€æããŸãããZeRO-Infinity ã«ã¯ãZeRO-3 ãæå¹ã«ãªã£ãŠããå¿èŠããããŸãã

次ã®èšå®äŸã§ã¯ãNVMe ã«ãªããã£ãã€ã¶ã®ç¶æãšãã©ã¡ãŒã¿ã®äž¡æ¹ããªãããŒãã§ããããã«ããŸãã
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": false,
"overlap_events": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
    }
}
```
ãªããã£ãã€ã¶ã®ç¶æãšãã©ã¡ãŒã¿ã®äž¡æ¹ã NVMe ã«ãªãããŒãããããã©ã¡ãã 1 ã€ã ãããªãããŒããããããŸã£ãããªãããŒãããªãããéžæã§ããŸããããšãã°ãå©çšå¯èœãª CPU ã¡ã¢ãªã倧éã«ããå Žåã¯ããã®æ¹ãé«éãªã®ã§ãCPU ã¡ã¢ãªã®ã¿ã«ãªãããŒãããŠãã ãã (ãã³ã: *"device": "cpu"*)ã

[ãªããã£ãã€ã¶ãŒã®ç¶æ](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) ãš [ãã©ã¡ãŒã¿ãŒ](https://www.deepspeed.ai/docs/config-json/#parameter-offloading) ã®ãªãããŒãã®è©³çŽ°ã¯ããããã®ããã¥ã¡ã³ããåç§ããŠãã ããã

`nvme_path` ãå®éã« NVMe ã§ããããšã確èªããŠãã ãããéåžžã®ããŒããã©ã€ããŸã㯠SSD ã§ãåäœããŸãããã¯ããã«éããªããŸããé«éã¹ã±ãŒã©ãã«ãªãã¬ãŒãã³ã°ã¯ãææ°ã® NVMe 転éé床ã念é ã«çœ®ããŠèšèšãããŸãã (ãã®å·çæç¹ã§ã¯ãèªã¿åãæ倧 3.5 GB/ç§ãæžã蟌ã¿æ倧 3 GB/ç§ã®ããŒã¯é床ãåŸãããŸã)ã

æé©ãª `aio` æ§æãããã¯ãèŠã€ããã«ã¯ãã¿ãŒã²ããèšå®ã§ [ããã§èª¬æãããŠãã](https://github.com/microsoft/DeepSpeed/issues/998) ãã³ãããŒã¯ãå®è¡ããå¿èŠããããŸãã
<a id='deepspeed-zero2-zero3-performance'></a>
#### ZeRO-2 vs ZeRO-3 Performance
ZeRO-3 ã¯ãä»ã®ãã¹ãŠãåãããã«æ§æãããŠããå ŽåãZeRO-2 ãããéããªãå¯èœæ§ããããŸããZeRO-3 ã§ã¯ ZeRO-2 ã®æ©èœã«å ããŠã¢ãã«ã®éã¿ãåéããå¿èŠãããããã§ããZeRO-2 ãããŒãºãæºãããæ°åã® GPU ãè¶ããŠæ¡åŒµããå¿èŠããªãå Žåã¯ãZeRO-2 ã«åºå·ããããšãéžæã§ããŸããZeRO-3 ã¯ãé床ãç ç²ã«ããŠãã¯ããã«é«ãã¹ã±ãŒã©ããªãã£å®¹éãå¯èœã«ããããšãç解ããããšãéèŠã§ãã

ZeRO-3 ã®æ§æã調æŽããŠãZeRO-2 ã«è¿ã¥ããããšãã§ããŸãã

- `stage3_param_persistence_threshold` ãéåžžã«å€§ããªæ°å€ãããšãã°æ倧ãã©ã¡ãŒã¿ããã倧ããå€ (`6 * hidden_size * hidden_size` ãªã©) ã«èšå®ããŸããããã«ããããã©ã¡ãŒã¿ã GPU ã«ä¿æãããŸãã
- ZeRO-2 ã«ã¯ãã®ãªãã·ã§ã³ããªãããã`offload_params` ããªãã«ããŸãã

`stage3_param_persistence_threshold` ãå€æŽããªããŠãã`offload_params` ããªãã«ããã ãã§ããã©ãŒãã³ã¹ã倧å¹ã«åäžããå¯èœæ§ããããŸãããã¡ãããããããã®å€æŽã¯ãã¬ãŒãã³ã°ã§ããã¢ãã«ã®ãµã€ãºã«åœ±é¿ããŸãããããã¯ãããŒãºã«å¿ããŠãã¹ã±ãŒã©ããªãã£ãšåŒãæãã«é床ãåäžãããã®ã«åœ¹ç«ã¡ãŸãã
<a id='deepspeed-zero2-example'></a>
#### ZeRO-2 Example
以äžã¯ãå®åšãª ZeRO-2 èªåæ§æãã¡ã€ã« `ds_config_zero2.json` ã§ãã
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
以äžã¯ããã¹ãŠãæ瀺çã«èšå®ãããå®åšãª ZeRO-2 æ§æãã¡ã€ã«ã§ããäž»ã«ãåžåçãªå€ãã©ã®ãããªãã®ãã確èªããããã®ãã®ã§ãããå®éã«ã¯è€æ°ã® `auto` èšå®ãå«ãŸãããã¡ã€ã«ã䜿çšããããšã匷ããå§ãããŸãã
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
<a id='deepspeed-zero3-example'></a>
#### ZeRO-3 Example
以äžã¯ãå®åšãª ZeRO-3 èªåæ§æãã¡ã€ã« `ds_config_zero3.json` ã§ãã
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
以äžã¯ããã¹ãŠãæ瀺çã«èšå®ãããå®åšãª ZeRO-3 æ§æãã¡ã€ã«ã§ããäž»ã«ãåžåçãªå€ãã©ã®ãããªãã®ãã確èªããããã®ãã®ã§ãããå®éã«ã¯è€æ°ã® `auto` èšå®ãå«ãŸãããã¡ã€ã«ã䜿çšããããšã匷ããå§ãããŸãã
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
#### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance
ããã§ãããŸããŸãªæ®µéãããããšãããããŸãããã©ã®ã¹ããŒãžã䜿çšããããã©ã®ããã«æ±ºå®ããã°ããã§ãããã?ãã®ã»ã¯ã·ã§ã³ã§ã¯ããã®è³ªåã«çããŸãã

äžè¬ã«ã次ã®ããšãåœãŠã¯ãŸããŸãã

- é床ã®ç¹ (å·Šã®æ¹ãå³ããéã)

  ã¹ããŒãž 0 (DDP) > ã¹ããŒãž 1 > ã¹ããŒãž 2 > ã¹ããŒãž 2 + ãªãããŒã > ã¹ããŒãž 3 > ã¹ããŒãž 3 + ãªãããŒã

- GPU ã¡ã¢ãªã®äœ¿çšç¶æ³ (å³ã®æ¹ãå·Šãã GPU ã¡ã¢ãªå¹çãé«ã)

  ã¹ããŒãž 0 (DDP) < ã¹ããŒãž 1 < ã¹ããŒãž 2 < ã¹ããŒãž 2 + ãªãããŒã < ã¹ããŒãž 3 < ã¹ããŒãž 3 + ãªãããŒã

ãããã£ãŠãæå°éã®æ°ã® GPU ã«åãŸããªããæéã®å®è¡ãå®çŸãããå Žåã¯ã次ã®ããã»ã¹ã«åŸãããšãã§ããŸããæãéãã¢ãããŒãããéå§ããGPU OOM ã«é¥ã£ãå Žåã¯ã次ã«éã (ãããã GPU ã¡ã¢ãªå¹çã®é«ã) ã¢ãããŒãã«é²ã¿ããšããããã«ç¶ããŸãã
ãŸããããã ãµã€ãºã 1 ã«èšå®ããŸã (å¿èŠãªå®å¹ããã ãµã€ãºã«å¯ŸããŠã¯ããã€ã§ãåŸé环ç©ã䜿çšã§ããŸã)ã

1. `--gradient_checkpointing 1` (HF Trainer) ãŸãã¯çŽæ¥ `model.gradient_checkpointing_enable()` ãæå¹ã«ããŸã - OOM ã®å Žå
2. æåã« ZeRO ã¹ããŒãž 2 ãè©ŠããŸã - OOM ã®å Žå
3. ZeRO ã¹ããŒãž 2 + `offload_optimizer` ãè©ŠããŸã - OOM ã®å Žå
4. ZeRO ã¹ããŒãž 3 ã«åãæ¿ããŸã - OOM ã®å Žå
5. `cpu` ã«å¯Ÿã㊠`offload_param` ãæå¹ã«ããŸã - OOM ã®å Žå
6. `cpu` ã«å¯Ÿã㊠`offload_optimizer` ãæå¹ã«ããŸã - OOM ã®å Žå
7. ããã§ãããã ãµã€ãº 1 ã«é©åããªãå Žåã¯ããŸãããŸããŸãªããã©ã«ãå€ã確èªããå¯èœã§ããã°å€ãäžããŸããããšãã°ã`generate` ã䜿çšããåºãæ€çŽ¢ããŒã ã䜿çšããªãå Žåã¯ã倧éã®ã¡ã¢ãªãæ¶è²»ãããããæ€çŽ¢ããŒã ãçãããŸãã
8. fp32 ã§ã¯ãªããå¿ãæ··å粟床ã䜿çšããŸããã€ãŸããAmpere 以éã® GPU ã§ã¯ bf16ãããå€ã GPU ã¢ãŒããã¯ãã£ã§ã¯ fp16 ã䜿çšããŸãã
9. ããã§ã OOM ã«ãªãå Žåã¯ãããŒããŠã§ã¢ãè¿œå ããããZeRO-Infinity ãæå¹ã«ããŸããã€ãŸãã`offload_param` ãš `offload_optimizer` ã `nvme` ã«åãæ¿ããŸããéåžžã«é«éãª NVMe ã§ããããšã確èªããå¿èŠããããŸããéžè©±ãšããŠãZeRO-Infinity ã䜿çšããŠå°ã㪠GPU 㧠BLOOM-176B ãæšè«ããããšãã§ããŸããããéåžžã«éãã£ãã§ããã§ããããŸããããŸããïŒ
ãã¡ãããæã GPU ã¡ã¢ãªå¹çã®é«ãæ§æããå§ããŠãåŸããéã«é²ããšããããã«ããããã®æé ãéã«å®è¡ããããšãã§ããŸãããããã¯äºçåããŠã¿ãŠãã ããã

OOM ãåŒãèµ·ãããªãããã ãµã€ãº 1 ãååŸããããå®å¹ã¹ã«ãŒãããã枬å®ããŸãã次ã«ãããã ãµã€ãºãã§ããã ã倧ããããŠã¿ãŸããããã ãµã€ãºã倧ããã»ã©ãä¹ç®ããè¡åã巚倧ãªå Žåã« GPU ã®ããã©ãŒãã³ã¹ãæé«ã«ãªããããGPU ã®å¹çãåäžããŸãã

ããã§ãããã©ãŒãã³ã¹æé©åã²ãŒã ãå§ãŸããŸããäžéšã®ãªãããŒãæ©èœããªãã«ããããZeRO 段éã§ã¹ãããããŠã³ããŠããã ãµã€ãºãå¢æžããŠãå®å¹ã¹ã«ãŒããããå床枬å®ããããšãã§ããŸããæºè¶³ãããŸã§æŽãæµããç¹°ãè¿ããŸãã

æ°žé ã«ããã«è²»ããå¿èŠã¯ãããŸãããã3 ãæã®ãã¬ãŒãã³ã°ãéå§ããããšããŠããå Žåã¯ãã¹ã«ãŒãããã«é¢ããŠæãå¹æçãªèšå®ãèŠã€ããããã«æ°æ¥ãããŠãã ããããã®ããããã¬ãŒãã³ã°ã®ã³ã¹ããæå°éã«ãªãããã¬ãŒãã³ã°ãããæ©ãå®äºã§ããŸããçŸåšã®ç®ãŸããããå€åãã ML ã®äžçã§ã¯ãäœãããã¬ãŒãã³ã°ããã®ã«ããã« 1 ãæãããå Žåã絶奜ã®æ©äŒãéãå¯èœæ§ããããŸãããã¡ãããããã¯ç§ãæèŠãå±æããŠããã ãã§ã決ããŠããªããæ¥ããããšããŠããããã§ã¯ãããŸãããBLOOM-176B ã®ãã¬ãŒãã³ã°ãéå§ããåã«ãç§ã¯ãã®ããã»ã¹ã« 2 æ¥éè²»ãããã¹ã«ãŒãããã 90 TFLOPs ãã 150 TFLOPs ã«åäžãããããšãã§ããŸããããã®åãçµã¿ã«ããããã¬ãŒãã³ã°æéã 1 ãæ以äžç¯çŽã§ããŸããã
ãããã®ã¡ã¢ã¯äž»ã«ãã¬ãŒãã³ã° ã¢ãŒãçšã«æžããããã®ã§ãããã»ãšãã©ã®å Žåã¯æšè«ã«ãã»ãŒãã®ãŸãŸé©çšã§ããã¯ãã§ããããšãã°ãåŸéãã§ãã¯ãã€ã³ãã¯ãã¬ãŒãã³ã°äžã«ã®ã¿åœ¹ç«ã€ãããæšè«äžã¯äœãè¡ãããŸãããããã«ããã«ã GPU æšè«ãå®è¡ããŠããå Žåã¯ã[DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) ã [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) ãåªããããã©ãŒãã³ã¹ãæäŸããã¯ãã§ãã

ãã®ä»ã®ããã©ãŒãã³ã¹é¢é£ã®ç°¡åãªã¡ã¢:

- äœããæåãããã¬ãŒãã³ã°ããå Žåã¯ããã³ãœã«ã®åœ¢ç¶ (é ãããµã€ãºãªã©) ãåžžã« 16 ã§å²ãåããããã«ããŠãã ãããããã ãµã€ãºã«ã€ããŠã¯ãå°ãªããšã 2 ã§å²ãåããããã«ããŠãã ãããGPU ããããã«é«ãããã©ãŒãã³ã¹ãåŒãåºãããå Žåã¯ãããŒããŠã§ã¢åºæã® [æ³¢ãšã¿ã€ã«ã®éåå](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) ãããããšã«æ³šæããŠãã ããã
### Activation Checkpointing or Gradient Checkpointing
ã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ããšåŸéãã§ãã¯ãã€ã³ãã¯ãåãæ¹æ³è«ãæã 2 ã€ã®ç°ãªãçšèªã§ãããšãŠããããããã§ãããããªãã®ã§ãã

åŸéãã§ãã¯ãã€ã³ãã䜿çšãããšãé床ã GPU ã¡ã¢ãªãšåŒãæãã«ã§ããŸããããã«ãããGPU OOM ãåæããããããã ãµã€ãºãå¢ããããšãã§ããå€ãã®å Žåãããã©ãŒãã³ã¹ã®åäžã«ã€ãªãããŸãã

HF Transformers ã¢ãã«ã¯ DeepSpeed ã®ã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ãã«ã€ããŠäœãç¥ããªããããDeepSpeed æ§æãã¡ã€ã«ã§ãã®æ©èœãæå¹ã«ããããšããŠãäœãèµ·ãããŸããããããã£ãŠããã®éåžžã«æçãªæ©èœã掻çšããã«ã¯ 2 ã€ã®æ¹æ³ããããŸãã

1. HF Transformers ã¢ãã«ã䜿çšãããå Žåã¯ã`model.gradient_checkpointing_enable()` ãå®è¡ããããHF ãã¬ãŒããŒã§ `--gradient_checkpointing` ã䜿çšããŸããããã«ããããããèªåçã«æå¹ã«ãªããŸããããã§äœ¿ãããã®ã¯ `torch.utils.checkpoint` ã§ã (ãã®ãªã¹ãã®åŸã«ç°¡åãªã¹ã±ããã瀺ããŸã)ã
2. ç¬èªã®ã¢ãã«ãäœæããDeepSpeed ã®ã¢ã¯ãã£ããŒã·ã§ã³ ãã§ãã¯ãã€ã³ãã䜿çšãããå Žåã¯ã[ããã§èŠå®ãããŠãã API](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html) ã䜿çšã§ããŸããHF Transformers ã¢ããªã³ã° ã³ãŒãã倿ŽããŠã`torch.utils.checkpoint` ã DeepSpeed ã® API ã«çœ®ãæããããšãã§ããŸããåŸèã¯ãé æ¹åã®ã¢ã¯ãã£ããŒã·ã§ã³ãåèšç®ãã代ããã« CPU ã¡ã¢ãªã«ãªãããŒãã§ãããããããæè»ã§ãã
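以äžã¯ãåŸéãã§ãã¯ãã€ã³ããæå¹ã«ãã 2 ã€ã®æ¹æ³ã®ç°¡åãªã¹ã±ããã§ãã

```python
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
model.gradient_checkpointing_enable()  # option 1: enable directly on the model

# option 2: via the HF Trainer (equivalent to --gradient_checkpointing)
training_args = TrainingArguments(output_dir="output_dir", gradient_checkpointing=True)
```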
### Optimizer and Scheduler
`offload_optimizer` ãæå¹ã«ããªãéããDeepSpeed ã¹ã±ãžã¥ãŒã©ãŒãš HuggingFace ã¹ã±ãžã¥ãŒã©ãŒãèªç±ã«çµã¿åãããŠäœ¿çšã§ããŸã (HuggingFace ã¹ã±ãžã¥ãŒã©ãŒãš DeepSpeed ãªããã£ãã€ã¶ãŒã®çµã¿åãããé€ã):

| Combos | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |

`offload_optimizer` ãæå¹ãªå Žåã¯ãCPU ãš GPU ã®äž¡æ¹ã®å®è£ãæã€ãªããã£ãã€ã¶ãŒ (LAMB ãé€ã) ã䜿çšã§ããŸãã
<a id='deepspeed-optimizer'></a>
#### Optimizer
DeepSpeed ã®äž»ãªãªããã£ãã€ã¶ãŒã¯ãAdamãAdamWãOneBitAdamãLamb ã§ããããã㯠ZeRO ã§åŸ¹åºçã«ãã¹ããããŠãããããã䜿çšããããšããå§ãããŸãããã ããä»ã®ãªããã£ãã€ã¶ã `torch` ããã€ã³ããŒãããããšãã§ããŸããå®åšãªããã¥ã¡ã³ã㯠[ãã¡ã](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters) ã«ãããŸãã
èšå®ãã¡ã€ã«ã§ `optimizer` ãšã³ããªãèšå®ããªãå Žåã[`Trainer`] ã¯èªåçã« `AdamW` ã«èšå®ããæå®ãããå€ãŸãã¯æ¬¡ã®ã³ãã³ãã©ã€ã³åŒæ°ã®ããã©ã«ãã䜿çšããŸã: `--learning_rate`ã`--adam_beta1`ã`--adam_beta2`ã`--adam_epsilon`ãããã³ `--weight_decay`ã
以äžã¯ã`AdamW`ã®èªåæ§æããã`optimizer`ãšã³ããªã®äŸã§ãã
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
}
}
```
ã³ãã³ãã©ã€ã³åŒæ°ã«ãã£ãŠæ§æãã¡ã€ã«åã®å€ãèšå®ãããããšã«æ³šæããŠãã ãããããã¯ãå€ã®æ±ºå®çãªãœãŒã¹ã 1 ã€ã«ãªãããã«ãããããšãã°åŠç¿çãããŸããŸãªå Žæã§ããŸããŸãªå€ã«èšå®ãããŠããå Žåã®èŠã€ãã«ãããšã©ãŒãåé¿ããããã§ããã³ãã³ãã©ã€ã³ã®ã«ãŒã«ããªãŒããŒã©ã€ããããå€ã¯æ¬¡ã®ãšããã§ãã

- `lr` ãš `--learning_rate` ã®å€
- `betas` ãš `--adam_beta1 --adam_beta2` ã®å€
- `eps` ãš `--adam_epsilon` ã®å€
- `weight_decay` ãš `--weight_decay` ã®å€

ãããã£ãŠãã³ãã³ãã©ã€ã³ã§å±æãã€ããŒãã©ã¡ãŒã¿ã調æŽããããšãå¿ããªãã§ãã ããã

å€ãæ瀺çã«èšå®ããããšãã§ããŸãã
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
```
ãã ãããã®å Žåã¯ã[`Trainer`] ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿èŠããããŸãã

äžèšã«ãªã¹ããããŠããªãå¥ã®ãªããã£ãã€ã¶ãŒã䜿çšããå Žåã¯ããããã¬ãã«ã®æ§æã«è¿œå ããå¿èŠããããŸãã
```json
{
"zero_allow_untested_optimizer": true
}
```
`AdamW` ãšåæ§ã«ãå¬åŒã«ãµããŒããããŠããä»ã®ãªããã£ãã€ã¶ãŒãæ§æã§ããŸãããããã¯ç°ãªãèšå®å€ãæã€å¯èœæ§ãããããšã«æ³šæããŠãã ãããããšãã° Adam ã®å Žåã¯ã`weight_decay` ã `0.01` ä»è¿ã«ããå¿èŠããããŸãã

ããã«ããªãããŒãã¯ãDeepSpeed ã® CPU Adam ãªããã£ãã€ã¶ãŒãšäœµçšãããšãã«æãå¹æçã«æ©èœããŸãã`deepspeed==0.8.3` 以éã§ããªãããŒãæã«å¥ã®ãªããã£ãã€ã¶ãŒã䜿çšãããå Žåã¯ã以äžãè¿œå ããå¿èŠããããŸãã
```json
{
"zero_force_ds_cpu_optimizer": false
}
```
ãã®ãšã³ããªã¯ãæ§æã®ãããã¬ãã«ã«è¿œå ããŸãã
<a id='deepspeed-scheduler'></a>
#### Scheduler
DeepSpeed ã¯ã`LRRangeTest`ã`OneCycle`ã`WarmupLR`ãããã³ `WarmupDecayLR` ã®åŠç¿çã¹ã±ãžã¥ãŒã©ãŒããµããŒãããŠããŸããå®åšãªããã¥ã¡ã³ã㯠[ãã¡ã](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters) ã§ãã
ããã§ã¯ãð€ Transformers ãš DeepSpeed ã®éã§ã¹ã±ãžã¥ãŒã©ãŒãéè€ããŠããå Žæã瀺ããŸãã

- `--lr_scheduler_type constant_with_warmup` çµç±ã® `WarmupLR`
- `--lr_scheduler_type linear` çµç±ã® `WarmupDecayLR`ããã㯠`--lr_scheduler_type` ã®ããã©ã«ãå€ã§ããããŸãããããã£ãŠãã¹ã±ãžã¥ãŒã©ãèšå®ããªãå Žåããããããã©ã«ãã§èšå®ãããã¹ã±ãžã¥ãŒã©ã«ãªããŸãã

èšå®ãã¡ã€ã«ã§ `scheduler` ãšã³ããªãèšå®ããªãå Žåã[`Trainer`] 㯠`--lr_scheduler_type`ã`--learning_rate`ãããã³ `--warmup_steps` ãŸã㯠`--warmup_ratio` ã®å€ã䜿çšããŠããããã® ð€ Transformers ããŒãžã§ã³ãæ§æããŸãã
以äžã¯ã`WarmupLR`ã®èªåæ§æããã`scheduler`ãšã³ããªã®äŸã§ãã
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
*"auto"* ã䜿çšãããŠããããã[`Trainer`] åŒæ°ã¯èšå®ã«æ£ããå€ãèšå®ããŸãã
ãã¡ã€ã«ãããã¯ãå€ã®æ±ºå®çãªãœãŒã¹ã 1 ã€ããããšãšãããšãã°æ¬¡ã®ãããªå Žåã«èŠã€ãã«ãããšã©ãŒãé¿ããããã§ãã
åŠç¿çã¯ãå Žæããšã«ç°ãªãå€ã«èšå®ãããŸããã³ãã³ãã©ã€ã³ã®ã«ãŒã«ãèšå®ãããå€ã¯æ¬¡ã®ãšããã§ãã
- `warmup_min_lr` ã®å€ã¯ `0` ã§ãã
- `warmup_max_lr` ãš `--learning_rate` ã®å€ã
- `warmup_num_steps` ãš `--warmup_steps` ã®å€ (æå®ãããŠããå Žå)ããã以å€ã®å Žå㯠`--warmup_ratio` ã䜿çšããŸã
ãã¬ãŒãã³ã° ã¹ãããã®æ°ãä¹ç®ããåãäžããŸãã
- `total_num_steps` ã«ã¯ `--max_steps` ã®å€ãæå®ããããæå®ãããŠããªãå Žåã¯å®è¡æã«èªåçã«å°åºãããŸãã
ç°å¢ãããŒã¿ã»ããã®ãµã€ãºãããã³ãã®ä»ã®ã³ãã³ã ã©ã€ã³åŒæ° (
`WarmupDecayLR`)ã
ãã¡ãããæ§æå€ã®äžéšãŸãã¯ãã¹ãŠãåŒãç¶ãã§ãèªåã§èšå®ããããšãã§ããŸãã
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
```
ãã ãããã®å Žåã¯ã[`Trainer`] ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿èŠããããŸãã

ããšãã°ã`WarmupDecayLR` ã®å Žåã¯ã次ã®ãšã³ããªã䜿çšã§ããŸãã
```json
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
`total_num_steps`ã`warmup_min_lr`ã`warmup_max_lr`ãããã³ `warmup_num_steps` ã¯ããŒãæã«èšå®ãããŸãã
<a id='deepspeed-fp32'></a>
### fp32 Precision
DeepSpeed ã¯ãå®åšãª fp32 ãš fp16 ã®æ··å粟床ããµããŒãããŸãã

fp16 æ··å粟床ã䜿çšãããšãå¿èŠãªã¡ã¢ãªã倧å¹ã«åæžãããé床ãåäžããŸãã䜿çšããŠããã¢ãã«ããã®ãã¬ãŒãã³ã° ã¢ãŒãã§é©åã«åäœããªãå Žåã¯ã䜿çšããªãæ¹ãããã§ããããéåžžãããã¯ã¢ãã«ã fp16 æ··å粟床ã§äºåãã¬ãŒãã³ã°ãããŠããªãå Žåã«çºçããŸã (ããšãã°ãbf16 ã§äºåãã¬ãŒãã³ã°ãããã¢ãã«ã§ããçºçããŸã)ããã®ãããªã¢ãã«ã§ã¯ããªãŒããŒãããŒãŸãã¯ã¢ã³ããŒãããŒãçºçãã`NaN` æ倱ãçºçããå¯èœæ§ããããŸãããããããªãã®å Žåã¯ãå®åšãª fp32 ã¢ãŒãã䜿çšããããã©ã«ãã® fp16 æ··å粟床ã¢ãŒãã次ã®ããã«æ瀺çã«ç¡å¹ã«ããŸãã
```json
{
"fp16": {
"enabled": false,
}
}
```
Ampere ã¢ãŒããã¯ã㣠ããŒã¹ã® GPU ã䜿çšããŠããå Žåãpytorch ããŒãžã§ã³ 1.7 以éã¯ãäžéšã®æäœã§ã¯ããã«å¹ççãª tf32 圢åŒãèªåçã«äœ¿çšããããã«åãæ¿ãããŸã (çµæã¯äŸç¶ãšã㊠fp32 ã«ãªããŸã)ã詳现ãšãã³ãããŒã¯ã«ã€ããŠã¯ã[Ampere ããã€ã¹äžã® TensorFloat-32(TF32)](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) ãåç§ããŠãã ãããããã¥ã¡ã³ãã«ã¯ãäœããã®çç±ã§ãã®èªåå€æã䜿çšããããªãå Žåã«ãããç¡å¹ã«ããæ¹æ³ã®èª¬æãå«ãŸããŠããŸãã

ð€ ãã¬ãŒããŒã§ã¯ã`--tf32` ã§æå¹ã«ãããã`--tf32 0` ãŸã㯠`--no_tf32` ã§ç¡å¹ã«ã§ããŸããããã©ã«ãã§ã¯ãPyTorch ã®ããã©ã«ãã䜿çšãããŸãã
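以äžã¯ããã®èªåå€æã PyTorch ã§æ瀺çã«ç¡å¹ã«ããç°¡åãªã¹ã±ããã§ãã

```python
import torch

# disable the automatic use of TF32 on Ampere GPUs (standard PyTorch switches)
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```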
<a id='deepspeed-amp'></a>
### Automatic Mixed Precision
pytorch ã®ãã㪠AMP ã®æ¹æ³ãŸã㯠apex ã®ãããªæ¹æ³ã§èªåæ··å粟床ã䜿çšã§ããŸãã
### fp16
fp16 (float16) ãèšå®ã㊠pytorch AMP ã®ãããªã¢ãŒããèšå®ããã«ã¯:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
[`Trainer`] ã¯ã`args.fp16_backend` ã®å€ã«åºã¥ããŠãããèªåçã«æå¹ãŸãã¯ç¡å¹ã«ããŸããæ®ãã®èšå®å€ã¯ããªã次第ã§ãããã®ã¢ãŒãã¯ã`--fp16 --fp16_backend amp` ãŸã㯠`--fp16_full_eval` ã³ãã³ãã©ã€ã³åŒæ°ãæž¡ããããšæå¹ã«ãªããŸãã
ãã®ã¢ãŒããæ瀺çã«æå¹/ç¡å¹ã«ããããšãã§ããŸãã
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
ãã ãããã®å Žå㯠[`Trainer`] ã®ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿
èŠããããŸãã詳现ã¯[ããã¥ã¡ã³ã](https://www.deepspeed.ai/docs/config-json/#fp16-training-options)ãåç
§ããŠãã ããã
### BF16
fp16 ã®ä»£ããã« bf16 (bfloat16) ãå¿
èŠãªå Žåã¯ã次ã®æ§æã»ã¯ã·ã§ã³ã䜿çšããŸãã
```json
{
"bf16": {
"enabled": "auto"
}
}
```
bf16 㯠fp32 ãšåããã€ããã㯠ã¬ã³ãžãåããŠãããããæ倱ã¹ã±ãŒãªã³ã°ã¯å¿
èŠãããŸããã
ãã®ã¢ãŒãã¯ã`--bf16` ãŸã㯠`--bf16_full_eval` ã³ãã³ãã©ã€ã³åŒæ°ãæž¡ããããšæå¹ã«ãªããŸãã
ãã®ã¢ãŒããæ瀺çã«æå¹/ç¡å¹ã«ããããšãã§ããŸãã
```json
{
"bf16": {
"enabled": true
}
}
```
<Tip>
`deepspeed==0.6.0` ã®æç¹ã§ã¯ãbf16 ãµããŒãã¯æ°ããå®éšçãªãã®ã§ãã

bf16 ãæå¹ãªç¶æ
㧠[åŸé
环ç©](#gradient-accumulation) ã䜿çšããå Žåã¯ãåŸé
ã bf16 ã§çŽ¯ç©ãããããšã«æ³šæããŠãã ããããã®åœ¢åŒã¯ç²ŸåºŠãäœããããããã¯åžæã©ããã§ã¯ãªãå¯èœæ§ããããæ倱ã®ãã环ç©ã«ã€ãªãããŸãã

ãã®åé¡ãä¿®æ£ããããé«ç²ŸåºŠã® `dtype` (fp16 ãŸã㯠fp32) ã䜿çšãããªãã·ã§ã³ãæäŸããããã®äœæ¥ãé²è¡äžã§ãã
</Tip>
### NCCL Collectives
èšç·Žäœå¶ã® `dtype` ãšã¯å¥ã«ãåçš®ã®ãªãã¥ãŒã¹æŒç®ãåé/åæ£ (gather/scatter) æŒç®ãªã©ã®ã³ãã¥ãã±ãŒã·ã§ã³éåäœã«äœ¿çšããã `dtype` ããããŸãã

ãã¹ãŠã® gather/scatter æŒç®ã¯ãããŒã¿ãšåã `dtype` ã§å®è¡ãããŸãããããã£ãŠãbf16 ãã¬ãŒãã³ã°äœå¶ã䜿çšããŠããå ŽåãããŒã¿ã¯ bf16 ã§åéãããŸããåéã¯æ倱ã®ãªãæäœã§ãã

äžæ¹ãåçš®ã®ãªãã¥ãŒã¹æŒç®ã¯ããªãæ倱ã倧ãããªãå¯èœæ§ããããŸããããšãã°ãè€æ°ã® GPU éã§åŸé
ãå¹³ååããéã«éä¿¡ã fp16 ãŸã㯠bf16 ã§è¡ããããšãè€æ°ã®æ°å€ãäœç²ŸåºŠã§å ç®ããçµæã¯æ£ç¢ºã§ã¯ãªããããçµæã«æ倱ãçããå¯èœæ§ããããŸããbf16 㯠fp16 ããã粟床ãäœããããªãããã§ããéåžžã¯éåžžã«å°ããåŸé
ãå¹³åããéã®æ倱ã¯æå°éã«æãããããããfp16 ã§ååãªããšãå€ãã§ãããã®ãããããã©ã«ãã§ã¯å粟床ãã¬ãŒãã³ã°ã®ãªãã¯ã·ã§ã³æŒç®ã«ã¯ fp16 ã䜿çšãããŸãããã ãããã®æ©èœã¯å®å
šã«å¶åŸ¡å¯èœã§ãå¿
èŠã«å¿ããŠå°ããªãªãŒããŒããããè¿œå ããŠããªãã¯ã·ã§ã³ã®çŽ¯ç© dtype ãšã㊠fp32 ã䜿çšããçµæãæŽã£ãæç¹ã§ã®ã¿ãã¬ãŒãã³ã°äžã®å粟床 `dtype` ã«ããŠã³ãã£ã¹ãããããã«ããããšãã§ããŸãã

ããã©ã«ãããªãŒããŒã©ã€ãããã«ã¯ãæ°ããæ§æãšã³ããªãè¿œå ããã ãã§ãã
```json
{
"communication_data_type": "fp32"
}
```
ãã®èšäºã®å·çæç¹ã§ã®æå¹ãªå€ã¯ã"fp16"ã"bfp16"ã"fp32"ã§ãã
泚: ZeRO ã¹ããŒãž 3 ã«ã¯ bf16 éä¿¡ dtype ã«é¢ãããã°ããããŸãããã`deepspeed==0.8.1` ã§ä¿®æ£ãããŸããã
### apex
apex AMP ã®ãããªã¢ãŒããèšå®ããã«ã¯:
```json
"amp": {
"enabled": "auto",
"opt_level": "auto"
}
```
[`Trainer`] 㯠`args.fp16_backend` ãš `args.fp16_opt_level` ã®å€ã«åºã¥ããŠããããèªåçã«èšå®ããŸãã
ãã®ã¢ãŒãã¯ã`--fp16 --fp16_backend apex --fp16_opt_level O1` ã³ãã³ãã©ã€ã³åŒæ°ãæž¡ããããšæå¹ã«ãªããŸãã
ãã®ã¢ãŒããæ瀺çã«æ§æããããšãã§ããŸãã
```json
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
```
ãã ãããã®å Žå㯠[`Trainer`] ã®ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿
èŠããããŸãã詳现ã¯[ããã¥ã¡ã³ã](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options)ãåç
§ããŠãã ããã
<a id='deepspeed-bs'></a>
### Batch Size
ããããµã€ãºãèšå®ããã«ã¯ã次ã䜿çšããŸãã
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
[`Trainer`] ã¯èªåçã«ã`train_micro_batch_size_per_gpu` ã `args.per_device_train_batch_size` ã®å€ã«ã`train_batch_size` ã `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps` ã«èšå®ããŸã (ãã®é¢ä¿ã¯æ¬¡ã®ã¹ã±ãããåç
§)ã
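ãã®é¢ä¿ãã³ãŒãã«ãããšæ¬¡ã®ãšããã§ã (説æçšã®ã¹ã±ããã§ã`args` 㯠[`TrainingArguments`] çžåœã®ãªããžã§ã¯ããæ³å®ãã仮眮ãã§ã)ã

```python
# 説æçšã®ã¹ã±ãã: DeepSpeed ã«æž¡ããããããµã€ãºã®é¢ä¿
train_micro_batch_size_per_gpu = args.per_device_train_batch_size
train_batch_size = (
    args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps
)
```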
å€ãæ瀺çã«èšå®ããããšãã§ããŸãã
```json
{
"train_batch_size": 12,
"train_micro_batch_size_per_gpu": 4
}
```
ãã ãããã®å Žå㯠[`Trainer`] ã®ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿
èŠããããŸãã
<a id='deepspeed-grad-acc'></a>
### Gradient Accumulation
åŸé
环ç©ã»ãããæ§æããã«ã¯:
```json
{
"gradient_accumulation_steps": "auto"
}
```
[`Trainer`] ã¯èªåçã«ããã `args.gradient_accumulation_steps` ã®å€ã«èšå®ããŸãã
å€ãæ瀺çã«èšå®ããããšãã§ããŸãã
```json
{
"gradient_accumulation_steps": 3
}
```
ãã ãããã®å Žå㯠[`Trainer`] ã®ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿
èŠããããŸãã
<a id='deepspeed-grad-clip'></a>
### Gradient Clipping
åŸé
ã¯ãªããã³ã°ãæ§æããã«ã¯:
```json
{
"gradient_clipping": "auto"
}
```
[`Trainer`] ã¯èªåçã«ããã `args.max_grad_norm` ã®å€ã«èšå®ããŸãã
å€ãæ瀺çã«èšå®ããããšãã§ããŸãã
```json
{
"gradient_clipping": 1.0
}
```
ãã ãããã®å Žå㯠[`Trainer`] ã®ã³ãã³ãã©ã€ã³åŒæ°ãš DeepSpeed æ§æãèªåã§åæãããå¿
èŠããããŸãã
<a id='deepspeed-weight-extraction'></a>
### Getting The Model Weights Out
ãã¬ãŒãã³ã°ãç¶ç¶ããDeepSpeed ã®äœ¿çšãåéããéããäœãå¿é
ããå¿
èŠã¯ãããŸãããDeepSpeed 㯠fp32 ã®ãã¹ã¿ãŒ ãŠã§ã€ããã«ã¹ã¿ã ãã§ãã¯ãã€ã³ã ãªããã£ãã€ã¶ãŒ ãã¡ã€ã« (glob ãã¿ãŒã³ `global_step*/*optim_states.pt`) ã«æ ŒçŽããéåžžã®ãã§ãã¯ãã€ã³ãã®äžã«ä¿åããŸãã
**FP16 ãŠã§ã€ã:**
ã¢ãã«ã ZeRO-2 ã§ä¿åãããšãã¢ãã«ã®éã¿ãå«ãéåžžã® `pytorch_model.bin` ãã¡ã€ã«ãäœæãããŸããããã ãããããã¯éã¿ã® fp16 ããŒãžã§ã³ã«ãããŸããã

ZeRO-3 ã§ã¯ãã¢ãã«ã®éã¿ãè€æ°ã® GPU ã«åå²ããããããç¶æ³ã¯ããã«è€éã«ãªããŸãããããã£ãŠã`Trainer` ã«éã¿ã® fp16 ããŒãžã§ã³ãä¿åãããã«ã¯ã`"stage3_gather_16bit_weights_on_model_save": true` ãå¿
èŠã§ãããã®èšå®ã `False` ã®å Žåã`pytorch_model.bin` ã¯äœæãããŸãããããã¯ãããã©ã«ã㧠DeepSpeed ã® `state_dict` ã«å®éã®éã¿ã§ã¯ãªããã¬ãŒã¹ãã«ããŒãå«ãŸããããã§ãããã® `state_dict` ãä¿åããŠããããŒããçŽãããšã¯ã§ããŸããã
```json
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
**FP32 ãŠã§ã€ã:**
fp16 ãŠã§ã€ãã¯ãã¬ãŒãã³ã°ãåéããã®ã«ã¯ååã§ãããã¢ãã«ã®åŸ®èª¿æŽãå®äºãããã [ã¢ãã« ãã](https://huggingface.co/models) ã«ã¢ããããŒããããä»ã®äººã«æž¡ãããããå Žåã¯ãfp32 ãŠã§ã€ããå
¥æããããªãã§ããããããã¯å€§éã®ã¡ã¢ãªãå¿
èŠãšããããã»ã¹ã§ãããããã¬ãŒãã³ã°äžã«è¡ããªãã®ãçæ³çã§ãããããã£ãŠããã¬ãŒãã³ã°ã®å®äºåŸã«ãªãã©ã€ã³ã§å®è¡ããã®ãæé©ã§ãããã ãããå¿
èŠã§ã空ã CPU ã¡ã¢ãªãååã«ããå Žåã¯ãåããã¬ãŒãã³ã° ã¹ã¯ãªããå
ã§å®è¡ã§ããŸãã次ã®ã»ã¯ã·ã§ã³ã§ã¯ãäž¡æ¹ã®ã¢ãããŒãã«ã€ããŠèª¬æããŸãã
**ã©ã€ã FP32 ãŠã§ã€ã ãªã«ããª:**
ã¢ãã«ã倧ããããã¬ãŒãã³ã°ã®çµäºæã«ç©ºã CPU ã¡ã¢ãªãã»ãšãã©æ®ã£ãŠããªãå Žåããã®ã¢ãããŒãã¯æ©èœããªãå¯èœæ§ããããŸãã
å°ãªããšã 1 ã€ã®ãã§ãã¯ãã€ã³ããä¿åããŠããŠãææ°ã®ãã§ãã¯ãã€ã³ãã䜿çšãããå Žåã¯ã次ã®æé ãå®è¡ã§ããŸãã
```python
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
`--load_best_model_at_end` ([`TrainingArguments`] ã®åŒæ°ã§ãæé©ãªãã§ãã¯ãã€ã³ãã远跡ãããã®) ã䜿çšããŠããå Žåã¯ããŸãæçµã¢ãã«ãæ瀺çã«ä¿åããŠãããäžèšãšåãããšãè¡ãããšã§ãã¬ãŒãã³ã°ãçµäºã§ããŸãã
```python
import os

from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
<Tip>
`load_state_dict_from_zero_checkpoint` ãå®è¡ããããšã`model` ã¯ãã¯ã䜿çšã§ããªããªãããšã«æ³šæããŠãã ããã
åãã¢ããªã±ãŒã·ã§ã³ã® DeepSpeed ã³ã³ããã¹ããã€ãŸããdeepspeed ãšã³ãžã³ãååæåããå¿
èŠããããŸãã
`model.load_state_dict(state_dict)` ã¯ãããããã¹ãŠã® DeepSpeed ããžãã¯ãåé€ããŸãããããã£ãŠãããã¯æåŸã«ã®ã¿å®è¡ããŠãã ãã
ãã¬ãŒãã³ã°ã®æ§åã
</Tip>
ãã¡ããã[`Trainer`] ã䜿çšããå¿
èŠã¯ãªããäžèšã®äŸãç¬èªã®ãã¬ãŒããŒã«åãããŠèª¿æŽããããšãã§ããŸãã
äœããã®çç±ã§ããã«çŽ°ããå¶åŸ¡ãããå Žåã¯ã次ã®äŸã«ç€ºãããã«ãéã¿ã® fp32 `state_dict` ãæœåºããŠèªåã§é©çšããããšãã§ããŸãã
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```
**ãªãã©ã€ã³ FP32 ãŠã§ã€ã ãªã«ããª:**
DeepSpeed ã¯ç¹å¥ãªå€æã¹ã¯ãªãã `zero_to_fp32.py` ãäœæãããã§ãã¯ãã€ã³ãã®æäžäœãã©ã«ãã«é
眮ããŸãããã®ã¹ã¯ãªããã䜿çšãããšããã€ã§ãéã¿ãæœåºã§ããŸããã¹ã¯ãªããã¯ã¹ã¿ã³ãã¢ãã³ãªã®ã§ãæœåºã®ããã«èšå®ãã¡ã€ã«ã `Trainer` ã¯å¿
èŠãããŸããã
ãã§ãã¯ãã€ã³ã ãã©ã«ããŒã次ã®ããã«ãªã£ãŠãããšããŸãã
```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```
ãã®äŸã§ã¯ãDeepSpeed ãã§ãã¯ãã€ã³ã ãµããã©ã«ã㌠*global_step1* ã 1 ã€ã ããããŸãããããã£ãŠãfp32 ã®éã¿ãåæ§ç¯ããã«ã¯æ¬¡ãå®è¡ããã ãã§ãã
```bash
python zero_to_fp32.py . pytorch_model.bin
```
ããã ãã `pytorch_model.bin`ã«ã¯ãè€æ°ã® GPU ããçµ±åãããå®å
šãª fp32 ã¢ãã«ã®éã¿ãå«ãŸããããã«ãªããŸãã
ã¹ã¯ãªããã¯ãZeRO-2 ãŸã㯠ZeRO-3 ãã§ãã¯ãã€ã³ããèªåçã«åŠçã§ããããã«ãªããŸãã
`python zero_to_fp32.py -h` ãå®è¡ãããšã䜿çšæ¹æ³ã®è©³çŽ°ã衚瀺ãããŸãã
ã¹ã¯ãªããã¯ããã¡ã€ã« `latest` ã®å
容ã䜿çšã㊠deepspeed ãµããã©ã«ããŒãèªåæ€åºããŸãããã®äŸã§ã¯ `global_step1` ã§ãã
泚: çŸåšããã®ã¹ã¯ãªããã«ã¯æçµç㪠fp32 ã¢ãã«ã®éã¿ã® 2 åã® CPU RAM ãå¿
èŠã§ãã
### ZeRO-3 ãš Infinity Nuances
ZeRO-3 ã¯ããã©ã¡ãŒã¿ ã·ã£ãŒãã£ã³ã°æ©èœã®ç¹ã§ ZeRO-2 ãšã¯å€§ããç°ãªããŸãã

ZeRO-Infinity 㯠ZeRO-3 ãããã«æ¡åŒµããNVMe ã¡ã¢ãªã®ãµããŒãããã®ä»è€æ°ã®é床ãšã¹ã±ãŒã©ããªãã£ã®åäžãæäŸããŸãã

ã¢ãã«ã«ç¹å¥ãªå€æŽãå ããªããŠãæ£åžžã«åäœããããã«å€å€§ãªåªåãæãããŠããŸãããç¹å®ã®ç¶æ³ã§ã¯ã次ã®æ
å ±ãå¿
èŠã«ãªãå ŽåããããŸãã
#### Constructing Massive Models
DeepSpeed/ZeRO-3 ã¯ãå©çšå¯èœãª RAM ã«åãŸããªãå¯èœæ§ã®ãããæ°å
åã®ãã©ã¡ãŒã¿ãæã€ã¢ãã«ãåŠçã§ããŸãããã®ãããªå Žåãããã³åæåãããé«éã«å®è¡ãããå Žåã¯ã*deepspeed.zero.Init()* ã³ã³ããã¹ã ãããŒãžã£ãŒ (é¢æ°ãã³ã¬ãŒã¿ãŒã§ããããŸã) ã䜿çšããŠã¢ãã«ãåæåããŸãã次ã®ããã«ãªããŸãã
```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed
with deepspeed.zero.Init():
config = T5Config.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration(config)
```
ã芧ã®ãšãããããã«ããã©ã³ãã ã«åæåãããã¢ãã«ãåŸãããŸãã
äºåãã¬ãŒãã³ã°ãããã¢ãã«ã䜿çšãããå Žåã`model_class.from_pretrained` ã¯ã`is_deepspeed_zero3_enabled()` ã `True` ãè¿ãå Žåã«éãããã®æ©èœãæå¹ã«ããŸãããã®å€ã¯çŸåšã(æž¡ããã DeepSpeed æ§æãã¡ã€ã«ã« ZeRO-3 æ§æã»ã¯ã·ã§ã³ãå«ãŸããŠããå Žå) [`TrainingArguments`] ãªããžã§ã¯ãã«ãã£ãŠèšå®ãããŸãããããã£ãŠã`from_pretrained` åŒã³åºãã®**åã«** [`TrainingArguments`] ãªããžã§ã¯ããäœæããå¿
èŠããããŸããèããããã·ãŒã±ã³ã¹ã®äŸã次ã«ç€ºããŸãã
```python
from transformers import AutoModel, Trainer, TrainingArguments
training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("google-t5/t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
å
¬åŒã®ãµã³ãã« ã¹ã¯ãªããã䜿çšããŠããŠãã³ãã³ãã©ã€ã³åŒæ°ã« `--deepspeed ds_config.json` (ZeRO-3 èšå®ãæå¹ã«ãããã®) ãå«ãŸããŠããå Žåã¯ããããããµã³ãã« ã¹ã¯ãªããã®èšè¿°æ¹æ³ã§ãããããããã¹ãŠããã§ã«å®äºããŠããŸãã
泚: ã¢ãã«ã® fp16 éã¿ãåäžã® GPU ã®ã¡ã¢ãªã«åãŸããªãå Žåã¯ããã®æ©èœã䜿çšããå¿
èŠããããŸãããã®æ¹æ³ãšãã®ä»ã®é¢é£æ©èœã®è©³çŽ°ã«ã€ããŠã¯ã[倧èŠæš¡ã¢ãã«ã®æ§ç¯](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models) ãåç
§ããŠãã ããã
ãŸããfp16 ã§äºåèšç·Žãããã¢ãã«ãããŒããããšãã¯ã`from_pretrained` ã« `torch_dtype=torch.float16` ãæå®ããå¿
èŠããããŸãã詳现ã«ã€ããŠã¯ã[from_pretrained-torch-dtype](#from_pretrained-torch-dtype) ãåç
§ããŠãã ããã
#### Gathering Parameters
è€æ°ã® GPU äžã® ZeRO-3 ã§ã¯ãçŸåšå®è¡äžã®ã¬ã€ã€ãŒã®ãã©ã¡ãŒã¿ã§ãªãéããåäžã® GPU ããã¹ãŠã®ãã©ã¡ãŒã¿ãæã€ããšã¯ãããŸããããããã£ãŠããã¹ãŠã®ã¬ã€ã€ãŒã®ãã¹ãŠã®ãã©ã¡ãŒã¿ãŒã«äžåºŠã«ã¢ã¯ã»ã¹ããå¿
èŠãããå Žåã¯ããããè¡ãããã®ç¹å®ã®æ¹æ³ããããŸããã»ãšãã©ã®å Žåã¯å¿
èŠãããŸããããå¿
èŠãªå Žåã¯ã[ãã©ã¡ãŒã¿ã®åé](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination) ãåç
§ããŠãã ãã (以äžã«ç°¡åãªã¹ã±ãããæ²èŒããŸã)ã

ãã ãããã®ä»çµã¿ã¯ã©ã€ãã©ãªå
éšã®ããã€ãã®å Žæã§ã䜿çšãããŠããŸãããã®äŸã® 1 ã€ã `from_pretrained` ã§ã®äºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ã®éã¿ã®ããŒãã§ããäžåºŠã« 1 ã€ã®ã¬ã€ã€ãŒãããŒãããåå ããŠãããã¹ãŠã® GPU ã«å³åº§ã«åå²ããŸãã倧èŠæš¡ãªã¢ãã«ã§ã¯ãã¡ã¢ãªã®å¶éã«ããã1 ã€ã® GPU ã«ããŒãããŠããè€æ°ã® GPU ã«åæ£ããããšã¯ã§ããªãããã§ãã
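åèãŸã§ã«ããã¹ãŠã®ã©ã³ã¯ã§ãã©ã¡ãŒã¿ãäžæçã«åéããŠäžèº«ã確èªããæå°éã®ã¹ã±ããã¯æ¬¡ã®ãšããã§ã (`model` 㯠ZeRO-3 ã§åå²æžã¿ã®ã¢ãã«ãšããä»®å®ã§ã倧ããªã¢ãã«ã§ã¯ã¡ã¢ãªæ¶è²»ã«æ³šæããŠãã ãã)ã

```python
import deepspeed

# ã³ã³ããã¹ãå
ã§ã¯æå®ãããã©ã¡ãŒã¿ãåéãããå®éã®å€ã«ã¢ã¯ã»ã¹ã§ãã
# (ããã©ã«ãã® modifier_rank=None ã¯æžã蟌ã¿ãè¡ããªãå Žåã®æå®)
with deepspeed.zero.GatheredParameters(list(model.parameters())):
    first_param = next(model.parameters())
    print(first_param.shape)  # ãã¬ãŒã¹ãã«ããŒã§ã¯ãªãæ¬æ¥ã®åœ¢ç¶ã衚瀺ããã
```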
ãŸããZeRO-3 ã§ç¬èªã®ã³ãŒããæžããŠããŠã次ã®ãããªã¢ãã« ãã©ã¡ãŒã¿ãŒã®éã¿ã«ééãããšããŸãã
```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```
`tensor([1.])` ã®ãããªå€ã«æžæã£ãå ŽåããŸãã¯ãã倧ããªå€æ¬¡å
圢ç¶ã§ã¯ãªããã©ã¡ãŒã¿ã®ãµã€ãºã `1` ã§ãããšãããšã©ãŒãçºçããå Žåãããã¯ãã©ã¡ãŒã¿ãŒãåå²ãããŠãããèŠããŠããã®ã¯ ZeRO-3 ã®ãã¬ãŒã¹ãã«ããŒã§ããããšãæå³ããŸãã
<a id='deepspeed-zero-inference'></a>
### ZeRO Inference
ZeRO Inference ã¯ãZeRO-3 Training ãšåãæ§æã䜿çšããŸãããªããã£ãã€ã¶ãŒãšã¹ã±ãžã¥ãŒã©ãŒã®ã»ã¯ã·ã§ã³ã¯å¿
èŠãããŸãããå®éããã¬ãŒãã³ã°ãšåããã®ãå
±æãããå Žåã¯ãããããèšå®ãã¡ã€ã«ã«æ®ãããšãã§ããŸãããããã¯åã«ç¡èŠãããŸãã

ãã以å€ã®å Žåã¯ãéåžžã® [`TrainingArguments`] åŒæ°ãæž¡ãã ãã§ããäŸãã°ïŒ
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```
å¯äžéèŠãªããšã¯ãZeRO-2 ã«ã¯æšè«ã«ãããŠäœã®å©ç¹ããªããããZeRO-3 æ§æã䜿çšããå¿
èŠããããšããããšã§ãããã©ã¡ãŒã¿ãŒã®ã·ã£ãŒãã£ã³ã°ãå®è¡ããã®ã¯ ZeRO-3 ã®ã¿ã§ãããZeRO-1 ã¯åŸé
ãšãªããã£ãã€ã¶ãŒã®ç¶æ
ãã·ã£ãŒãã£ã³ã°ããã ããªã®ã§ãæšè«ã®åœ¹ã«ã¯ç«ã¡ãŸããã
以äžã¯ãå©çšå¯èœãªãã¹ãŠã® GPU ããããã€ãã DeepSpeed ã§`run_translation.py`ãå®è¡ããäŸã§ãã
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google-t5/t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50 \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
```
æšè«ã§ã¯ããªããã£ãã€ã¶ãŒã®ç¶æ
ãšåŸé
ã«äœ¿çšãããè¿œå ã®å€§ããªã¡ã¢ãªãäžèŠã«ãªããããåãããŒããŠã§ã¢ã«ã¯ããã«å€§ããªããããã·ãŒã±ã³ã¹é·ãé©åãããããšãã§ããŸãã
ããã«ãDeepSpeed ã¯çŸåšãDeepspeed-Inference ãšåŒã°ããé¢é£è£œåãéçºããŠããŸãããããã¯ ZeRO ãã¯ãããžãŒãšã¯äœã®é¢ä¿ããªãã代ããã«ãã³ãœã«äžŠååŠçã䜿çšããŠãåäžã® GPU ã«åãŸããªãã¢ãã«ãã¹ã±ãŒãªã³ã°ããŸããããã¯çŸåšéçºäžã§ãã補åãå®æãããçµ±åãæäŸããäºå®ã§ãã
### Memory Requirements
Deepspeed ZeRO ã¯ã¡ã¢ãªã CPU (ããã³ NVMe) ã«ãªãããŒãã§ããŸãããã¬ãŒã ã¯ãŒã¯ã¯ã䜿çšãã GPU ã®æ°ã«å¿ããŠå¿
èŠãª CPU ããã³ GPU ã¡ã¢ãªã®éãèŠç©ãããŠãŒãã£ãªãã£ãæäŸããŸãã

åäžã® GPU 㧠`bigscience/T0_3B` ã埮調æŽããããã«å¿
èŠãªã¡ã¢ãªã®éãèŠç©ãã£ãŠã¿ãŸãããã
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ãããã£ãŠãåäžã® 80 GB GPU 㧠CPU ãªãããŒããªãã§æèŒããããšããå°ã㪠8 GB GPU ã§ãæ倧 60 GB ã® CPU ã¡ã¢ãªãå¿
èŠã«ãªãããšãå¯èœã§ãã (ããã¯ãã©ã¡ãŒã¿ããªããã£ãã€ã¶ã®ç¶æ
ãããã³åŸé
ã®ããã®ã¡ã¢ãªã§ããããšã«æ³šæããŠãã ãããcuda ã«ãŒãã«ãã¢ã¯ãã£ããŒã·ã§ã³ãããã³äžæã¡ã¢ãªã«ã¯ããå°ãå€ãã®ã¡ã¢ãªãå¿
èŠã§ãã)
次ã«ãã³ã¹ããšé床ã®ãã¬ãŒããªãã«ãªããŸããããå°ãã GPU ã賌å
¥ãŸãã¯ã¬ã³ã¿ã«ããæ¹ãå®ããªããŸã (Deepspeed ZeRO ã§ã¯è€æ°ã® GPU ã䜿çšã§ãããããGPU ã®æ°ãæžããããšãã§ããŸã)ããããããã®å Žåã¯é
ããªããŸãããã®ãããäœããå®è¡ããé床ãæ°ã«ããªããŠããé床ã®äœäžã¯ GPU ã®äœ¿çšæéã«çŽæ¥åœ±é¿ããã³ã¹ããå¢å€§ãããããã©ããæãå¹æçããå®éšããŠæ¯èŒããŠãã ããã
åå㪠GPU ã¡ã¢ãªãããå Žåã¯ããã¹ãŠãé«éã«ãªããããCPU/NVMe ãªãããŒããå¿
ãç¡å¹ã«ããŠãã ããã
ããšãã°ã2 ã€ã® GPU ã«å¯ŸããŠåãããšãç¹°ãè¿ããŠã¿ãŸãããã
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ãããã£ãŠãããã§ã¯ãCPU ã«ãªãããŒãããã«ã¯ã2 ã€ã® 32GB 以äžã® GPU ãå¿
èŠã«ãªããŸãã

詳现ã«ã€ããŠã¯ã[ã¡ã¢ãªæšå®ããŒã«](https://deepspeed.readthedocs.io/en/latest/memory.html) ãåç
§ããŠãã ããã
### Filing Issues
ããã§ã¯ãåé¡ã®ççžãè¿
éã«è§£æããäœæ¥ã®ãããã¯ã解é€ã§ãããããåé¡ãå ±åããæ¹æ³ã説æããŸããã¬ããŒãã«ã¯å¿
ã次ã®å
容ãå«ããŠãã ããã

1. ã¬ããŒãå
ã®å®å
šãª Deepspeed æ§æãã¡ã€ã«
2. [`Trainer`] ã䜿çšããŠããå Žåã¯ã³ãã³ãã©ã€ã³åŒæ°ããã¬ãŒããŒã®ã»ããã¢ãããèªåã§ã¹ã¯ãªããäœæããŠããå Žå㯠[`TrainingArguments`] ã®åŒæ°ããã ãã[`TrainingArguments`] ã«ã¯ç¡é¢ä¿ãªãšã³ããªãå€æ°å«ãŸãããããã®ãã®ããã³ãããã®ã¯ãããŠãã ããã
3. 次ã®åºå:
```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```
4. å¯èœã§ããã°ãåé¡ãåçŸã§ãã Google Colab ããŒãããã¯ãžã®ãªã³ã¯ãå«ããŠãã ãããåºçºç¹ãšããŠãã® [ããŒãããã¯](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) ã䜿çšã§ããŸãã
5. äžå¯èœã§ãªãéããã«ã¹ã¿ã ããŒã¿ã»ããã§ã¯ãªããåžžã«äœ¿çšã§ããæšæºããŒã¿ã»ããã䜿çšããŠãã ããã
6. å¯èœã§ããã°ãæ¢åã® [ãµã³ãã«](https://github.com/huggingface/transformers/tree/main/examples/pytorch) ã®ããããã䜿çšããŠåé¡ãåçŸããŠã¿ãŠãã ããã
- Deepspeed ãåé¡ã®åå ã§ã¯ãªãããšããããããŸããæåºãããåé¡ã®äžéšã¯ãDeepspeed ãšã¯ç¡é¢ä¿ã§ããããšãå€æããŸãããã€ãŸããDeepspeed ãã»ããã¢ããããåé€ããŠãåé¡ãæ®ã£ãŠããã±ãŒã¹ã§ãã

  ãããã£ãŠãDeepSpeed ãé¢ä¿ããŠããããšãå®å
šã«æçœãªå Žå (ããšãã°ãäŸå€ãçºçã㊠DeepSpeed ã¢ãžã¥ãŒã«ãé¢ä¿ããŠããããšãåããå Žå) ãé€ãããŸã㯠DeepSpeed ãå«ãŸãªãã»ããã¢ãããè©ŠããŠãã ãããããã§ãåé¡ã解決ããªãå Žåã«ã®ã¿ãDeepspeed ã«ã€ããŠèšåããå¿
èŠãªè©³çŽ°ããã¹ãŠæäŸããŠãã ããã
- åé¡ãçµ±åéšåã§ã¯ãªã DeepSpeed ã³ã¢ã«ããããšãæãããªå Žåã¯ã[Deepspeed](https://github.com/microsoft/DeepSpeed/) ã«çŽæ¥åé¡ãæåºããŠãã ãããããããããªãå Žåã§ãããå®å¿ãã ãããã©ã¡ãã®åé¡ãã©ãã«ãŒã«æçš¿ããŠãåé¡ãããŸãããæçš¿åŸã«å€æããŠãå¿
èŠã§ããã°ãé©åãªåé¡ãã©ãã«ãŒã«ãªãã€ã¬ã¯ãããŸãã
### Troubleshooting
#### the `deepspeed` process gets killed at startup without a traceback
`deepspeed` ããã»ã¹ãèµ·åæã«ãã¬ãŒã¹ããã¯ãªãã§åŒ·å¶çµäºãããå Žåãããã¯éåžžãããã°ã©ã ãã·ã¹ãã ã®æã€ãããŸãã¯ããã»ã¹ã«èš±å¯ãããŠãããããå€ãã® CPU ã¡ã¢ãªãå²ãåœãŠãããšããããããOS ã«ãŒãã«ããã®ããã»ã¹ã匷å¶çµäºããããšãæå³ããŸããããã¯ãèšå®ãã¡ã€ã«ã« `offload_optimizer` ãŸã㯠`offload_param` ãå«ãŸããŠããŠãã©ã¡ãã `cpu` ã«ãªãããŒãããããã«èšå®ãããŠããå¯èœæ§ãé«ãã§ããNVMe ãããå ŽåãZeRO-3 ã§å®è¡ããŠãããªã°ãNVMe ãžã®ãªãããŒããè©ŠããŠãã ãã (åèèšå®ã¯ä»¥äžã®ã¹ã±ãããåç
§)ã[ç¹å®ã®ã¢ãã«ã«å¿
èŠãªã¡ã¢ãªéãèŠç©ãã](https://deepspeed.readthedocs.io/en/latest/memory.html)ããšãã§ããŸãã
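åèãŸã§ã«ãNVMe ãžã®ãªãããŒãã¯æ¬¡ã®ãããªèšå®ã«ãªããŸã (`/local_nvme` ã¯ä»®ã®ãã¹ã§ããå®éã®é«é NVMe ãã©ã€ãã®ãã¹ã«çœ®ãæããŠãã ãã)ã

```json
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme"
        },
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme"
        }
    }
}
```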
#### training and/or eval/predict loss is `NaN`
ããã¯ãbf16 æ··å粟床ã¢ãŒãã§äºåãã¬ãŒãã³ã°ãããã¢ãã«ãããã fp16 (æ··å粟床ã®æç¡ã«ããããã) ã§äœ¿çšããããšããå Žåã«ããçºçããŸããTPU ã§ãã¬ãŒãã³ã°ãããã»ãšãã©ã®ã¢ãã«ããããŠå€ãã®å Žå Google ã«ãã£ãŠãªãªãŒã¹ãããã¢ãã«ã¯ããã®ã«ããŽãªã«åé¡ãããŸã (ããšãã°ãã»ãŒãã¹ãŠã® t5 ããŒã¹ã®ã¢ãã«)ãããã§ã®è§£æ±ºçã¯ãããŒããŠã§ã¢ããµããŒãããŠããå Žå (TPUãAmpere GPU 以é)ãfp32 ãŸã㯠bf16 ã䜿çšããããšã§ãã

ãã 1 ã€ã®åé¡ã¯ãfp16 ã®äœ¿çšèªäœã«é¢ä¿ããŠããå ŽåããããŸãã次ã®ã»ã¯ã·ã§ã³ãæ§æããŠããŠ:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
ãã°ã«ã¯ãDeepspeed ã次ã®ããã«`OVERFLOW!`ãå ±åããŠããããšãããããŸãã
```
0%| | 0/189 [00:00<?, ?it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
1%|â | 1/189 [00:00<01:26, 2.17it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
1%|ââ
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
14%|âââââââââââââââââ | 27/189 [00:14<01:13, 2.21it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|ââââââââââââââââââ | 28/189 [00:14<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|ââââââââââââââââââ | 29/189 [00:15<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```
ããã¯ãDeepspeed ã®æ倱ã¹ã±ãŒã©ãŒããæ倱ã®ãªãŒããŒãããŒãå
æããã¹ã±ãŒãªã³ã°ä¿æ°ãèŠã€ããããªãããšãæå³ããŸãã(äžèšã®ãã°ã¯ãèªã¿ããããªãããæŽåœ¢ããŠããŸãã)

ãã®å Žåãé垞㯠`initial_scale_power` ã®å€ãäžããå¿
èŠããããŸããéåžžã`initial_scale_power: 32` ã«èšå®ãããšåé¡ã¯è§£æ±ºããŸãã
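ããšãã°ãæ倱ã¹ã±ãŒãªã³ã°ã«é¢é£ããéšåã ããæãåºããšæ¬¡ã®ããã«ãªããŸã (ä»ã®ãã£ãŒã«ãã¯çç¥)ã

```json
{
    "fp16": {
        "enabled": "auto",
        "initial_scale_power": 32
    }
}
```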
### Notes
- DeepSpeed ã«ã¯ pip ã§ã€ã³ã¹ããŒã«å¯èœãª PyPI ããã±ãŒãžããããŸãããããŒããŠã§ã¢ã«æãé©åãããããããŸã 1 ããã Adam ãªã© (pypi ãã£ã¹ããªãã¥ãŒã·ã§ã³ã§ã¯å©çšã§ããªã) ç¹å®ã®æ©èœãæå¹ã«ããå¿
èŠãããå Žåã¯ã[ãœãŒã¹](https://github.com/microsoft/deepspeed#installation) ããã€ã³ã¹ããŒã«ããããšã匷ããå§ãããŸãã
- ð€ Transformers 㧠DeepSpeed ã䜿çšããããã« [`Trainer`] ã䜿çšããå¿
èŠã¯ãããŸãã - ä»»æã®ã¢ãã«ã䜿çšã§ããŸãããã ããã®å Žåãã¢ãã«ã¯ [DeepSpeed çµ±åæé ](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models) ã«åŸã£ãŠèª¿æŽããå¿
èŠããããŸãã
## Non-Trainer Deepspeed Integration
[`~integrations.HfDeepSpeedConfig`] ã¯ã[`Trainer`] ã䜿çšããªãå Žåã« Deepspeed ã ð€ Transformers ã³ã¢ã«çµ±åããããã«äœ¿çšãããŸãããããè¡ãå¯äžã®ããšã¯ãDeepspeed ZeRO-3 ã®ãã©ã¡ãŒã¿åéãåŠçãã`from_pretrained` åŒã³åºãäžã«ã¢ãã«ãè€æ°ã® GPU ã«èªåçã«åå²ããããšã§ãããã以å€ã¯ãã¹ãŠèªåã§è¡ãå¿
èŠããããŸãã

[`Trainer`] ã䜿çšãããšããã¹ãŠãèªåçã«åŠçãããŸãã

[`Trainer`] ã䜿çšããªãå ŽåãDeepSpeed ZeRO-3 ãå¹ççã«å°å
¥ããã«ã¯ãã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããåã« [`~integrations.HfDeepSpeedConfig`] ãªããžã§ã¯ããäœæãããã®ãªããžã§ã¯ããçãããŸãŸã«ããŠãããŸãã

Deepspeed ZeRO-1 ãŸã㯠ZeRO-2 ã䜿çšããŠããå Žåã¯ã`HfDeepSpeedConfig` ã䜿çšããå¿
èŠã¯ãŸã£ãããããŸããã
ããšãã°ãäºåãã¬ãŒãã³ã°ãããã¢ãã«ã®å Žåã¯æ¬¡ã®ããã«ãªããŸãã
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = AutoModel.from_pretrained("openai-community/gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
ãŸãã¯ãäºåãã¬ãŒãã³ã°ãããŠããªãã¢ãã«ã®å Žå:
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
config = AutoConfig.from_pretrained("openai-community/gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
[`Trainer`] çµ±åã䜿çšããŠããªãå Žåã¯ãå®å
šã«ç¬åã§è¡ãããšã«ãªãããšã«æ³šæããŠãã ãããåºæ¬çã«ã¯ã[Deepspeed](https://www.deepspeed.ai/) Web ãµã€ãã®ããã¥ã¡ã³ãã«åŸã£ãŠãã ããããŸããèšå®ãã¡ã€ã«ãæ瀺çã«èšå®ããå¿
èŠããããŸãã`"auto"` å€ã¯äœ¿çšã§ããã代ããã«å®éã®å€ãå
¥åããå¿
èŠããããŸãã
## HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all
### Custom DeepSpeed ZeRO Inference
以äžã¯ãåäžã® GPU ã«ã¢ãã«ãåãŸããªãå Žåã«ã[`Trainer`] ã䜿çšããã« DeepSpeed ZeRO æšè«ãå®è¡ããæ¹æ³ã®äŸã§ãã解決çã«ã¯ãè¿œå ã® GPU ã®äœ¿çšããŸã㯠GPU ã¡ã¢ãªã® CPU ã¡ã¢ãªãžã®ãªãããŒããå«ãŸããŸãã

ããã§ç解ãã¹ãéèŠãªãã¥ã¢ã³ã¹ã¯ãZeRO ã®èšèšæ¹æ³ã«ãããç°ãªã GPU ã§ç°ãªãå
¥åã䞊è¡ããŠåŠçã§ãããšããããšã§ãã

ãã®äŸã«ã¯å€§éã®ã¡ã¢ãå«ãŸããŠãããã³ãŒãèªäœã説æã«ãªã£ãŠããŸãã

å¿
ã次ã®ããšãè¡ã£ãŠãã ããã
1. åå㪠GPU ã¡ã¢ãªãããå Žåã¯ãCPU ãªãããŒããç¡å¹ã«ããŸã (é床ãäœäžãããã)ã
2. Ampere 以éã® GPU ãææããŠããå Žåã¯ãåŠçãé«éåããããã« bf16 ãæå¹ã«ããŸãããã®ããŒããŠã§ã¢ããªãå Žåã¯ãbf16 æ··å粟床ã§äºåãã¬ãŒãã³ã°ãããã¢ãã« (ã»ãšãã©ã® t5 ã¢ãã«ãªã©) ã䜿çšããªãéããfp16 ãæå¹ã«ã§ããŸãããããã®ã¢ãã«ã¯éåžž fp16 ã§ãªãŒããŒãããŒããåºåã«ã¯æå³ã®ãªãæåå (garbage) ã衚瀺ãããŸãã
```python
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For in-depth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed
# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
ããã`t0.py`ãšããŠä¿åããŠå®è¡ããŸãããã
```bash
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
ããã¯éåžžã«åºæ¬çãªäŸã§ãããããŒãºã«åãããŠèª¿æŽããŠãã ããã
### `generate` nuances
ZeRO Stage-3 ã§è€æ°ã® GPU ã䜿çšããå Žåã`generate(..., synced_gpus=True)` ãåŒã³åºã㊠GPU ãåæããå¿
èŠããããŸãããããè¡ããªããšã1 ã€ã® GPU ãä»ã® GPU ããå
ã«çæãçµäºããå Žåãæ®ãã® GPU ãçæãåæ¢ãã GPU ãããŠã§ã€ãã®ã·ã£ãŒããåä¿¡ã§ããªããªãããã·ã¹ãã å
šäœããã³ã°ããŸãã

`transformers>=4.28` 以éã§ã¯ã`synced_gpus` ãæ瀺çã«æå®ãããŠããªãå Žåããããã®æ¡ä»¶ãæ€åºããããšèªåçã« `True` ã«èšå®ãããŸãããã ããå¿
èŠã«å¿ã㊠`synced_gpus` ã®å€ããªãŒããŒã©ã€ãããããšãã§ããŸã (åŒã³åºãã®åœ¢ã¯ä»¥äžã®ã¹ã±ãããåç
§)ã
## Deepspeed çµ±åã®ãã¹ã
DeepSpeed çµ±åãå«ã PR ãéä¿¡ããå Žåã¯ãCircleCI PR CI ã»ããã¢ããã«ã¯ GPU ããªãããšã«æ³šæããŠãã ããããã®ãããGPU ãå¿
èŠãšãããã¹ãã¯å¥ã® CI ã§æ¯æ©ã®ã¿å®è¡ãããŸãããããã£ãŠãPR ã§ç·è²ã® CI ã¬ããŒãã衚瀺ãããŠããDeepSpeed ãã¹ããåæ Œããããšãæå³ããããã§ã¯ãããŸããã
DeepSpeed ãã¹ããå®è¡ããã«ã¯ãå°ãªããšã以äžãå®è¡ããŠãã ããã
```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```
ã¢ããªã³ã°ãŸã㯠pytorch ãµã³ãã« ã³ãŒãã®ãããããå€æŽããå Žåã¯ãModel Zoo ãã¹ããå®è¡ããŸãã以äžã¯ãã¹ãŠã® DeepSpeed ãã¹ããå®è¡ããŸãã
```bash
RUN_SLOW=1 pytest tests/deepspeed
```
## Main DeepSpeed Resources
- [ãããžã§ã¯ãã® github](https://github.com/microsoft/deepspeed)
- [䜿çšæ¹æ³ããã¥ã¡ã³ã](https://www.deepspeed.ai/getting-started/)
- [API ããã¥ã¡ã³ã](https://deepspeed.readthedocs.io/en/latest/index.html)
- [ããã°æçš¿](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
è«æ:
- [ZeRO: å
ãã©ã¡ãŒã¿ ã¢ãã«ã®ãã¬ãŒãã³ã°ã«åããã¡ã¢ãªã®æé©å](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: 10 åèŠæš¡ã®ã¢ãã« ãã¬ãŒãã³ã°ã®æ°äž»å](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: 極éã¹ã±ãŒã«ã®æ·±å±€åŠç¿ã®ããã® GPU ã¡ã¢ãªã®å£ãæã¡ç Žã](https://arxiv.org/abs/2104.07857)
æåŸã«ãHuggingFace [`Trainer`] 㯠DeepSpeed ã®ã¿ãçµ±åããŠããããšãèŠããŠãããŠãã ããã
DeepSpeed ã®äœ¿çšã«é¢ããŠåé¡ã質åãããå Žåã¯ã[DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues) ã«åé¡ãæåºããŠãã ããã
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/output.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Model outputs
ãã¹ãŠã®ã¢ãã«ã«ã¯ã[`~utils.ModelOutput`] ã®ãµãã¯ã©ã¹ã®ã€ã³ã¹ã¿ã³ã¹ã§ããåºåããããŸãããããã¯ã¢ãã«ã«ãã£ãŠè¿ããããã¹ãŠã®æ
å ±ãå«ãããŒã¿æ§é ã§ãããã¿ãã«ãŸãã¯èŸæžãšããŠã䜿çšã§ããŸãã
ãããã©ã®ããã«ãªãããäŸã§èŠãŠã¿ãŸãããã
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
`outputs` ãªããžã§ã¯ã㯠[`~modeling_outputs.SequenceClassifierOutput`] ã§ããããã¯ããªãã·ã§ã³ã® `loss`ã`logits`ããªãã·ã§ã³ã® `hidden_states`ããªãã·ã§ã³ã® `attentions` å±æ§ãæã€ããšãæå³ããŸããããã§ã¯ã`labels` ãæž¡ããã®ã§ `loss` ããããŸããã`output_hidden_states=True` ã `output_attentions=True` ãæž¡ããŠããªãããã`hidden_states` ãš `attentions` ã¯ãããŸããã
<Tip>
`output_hidden_states=True` ãæž¡ããšã`outputs.hidden_states[-1]` ã `outputs.last_hidden_state` ãšæ£ç¢ºã«äžèŽããããšãæåŸ
ãããããããŸãããããããããã¯åžžã«æãç«ã€ãšã¯éããŸãããã¢ãã«ã«ãã£ãŠã¯ãæåŸã®é ãç¶æ
ãè¿ããšãã«ãæ£èŠåããã®åŸã®åŠçãé©çšãããã®ããããŸãã
</Tip>
éåžžãšåãããã«åå±æ§ã«ã¢ã¯ã»ã¹ã§ããŸãããã®å±æ§ãã¢ãã«ããè¿ãããªãã£ãå Žå㯠`None` ãè¿ãããŸããããã§ã¯ãããšãã° `outputs.loss` ã¯ã¢ãã«ã«ãã£ãŠèšç®ãããæ倱ã§ããã`outputs.attentions` 㯠`None` ã§ãã

`outputs` ãªããžã§ã¯ããã¿ãã«ãšããŠèããå Žåã`None` å€ãæããªãå±æ§ã®ã¿ãèæ
®ãããŸããããšãã°ãããã«ã¯ `loss`ã次ã㧠`logits` ãšãã 2 ã€ã®èŠçŽ ããããŸãã
```python
outputs[:2]
```
ã¯ãã¿ãã« `(outputs.loss, outputs.logits)` ãè¿ããŸãã
`outputs` ãªããžã§ã¯ããèŸæžãšããŠèããå Žåãã`None` å€ãæããªãå±æ§ã®ã¿ãèæ
®ãããŸããããšãã°ãããã«ã¯ `loss` ãš `logits` ãšãã 2 ã€ã®ããŒããããŸãã
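åèãŸã§ã«ãäžã® `outputs` ã«å¯ŸããŠèŸæžãšããŠã¢ã¯ã»ã¹ãããšã次ã®ãããªã€ã¡ãŒãžã«ãªããŸãã

```python
print(list(outputs.keys()))  # ['loss', 'logits']
print(outputs["logits"].shape)  # åå±æ§ã¯ããŒåã§ãååŸã§ãã
```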
ããã§ã¯ãè€æ°ã®ã¢ãã« ã¿ã€ãã§äœ¿çšãããæ±çšã¢ãã«ã®åºåãææžåããŸããå
·äœçãªåºåã¿ã€ãã¯ã察å¿ããã¢ãã«ã®ããŒãžã«èšèŒãããŠããŸãã
## ModelOutput
[[autodoc]] utils.ModelOutput
- to_tuple
## BaseModelOutput
[[autodoc]] modeling_outputs.BaseModelOutput
## BaseModelOutputWithPooling
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
## BaseModelOutputWithCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
## BaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
## BaseModelOutputWithPast
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
## BaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
## Seq2SeqModelOutput
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
## CausalLMOutput
[[autodoc]] modeling_outputs.CausalLMOutput
## CausalLMOutputWithCrossAttentions
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
## CausalLMOutputWithPast
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
## MaskedLMOutput
[[autodoc]] modeling_outputs.MaskedLMOutput
## Seq2SeqLMOutput
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
## NextSentencePredictorOutput
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
## SequenceClassifierOutput
[[autodoc]] modeling_outputs.SequenceClassifierOutput
## Seq2SeqSequenceClassifierOutput
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
## MultipleChoiceModelOutput
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
## TokenClassifierOutput
[[autodoc]] modeling_outputs.TokenClassifierOutput
## QuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
## Seq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
## Seq2SeqSpectrogramOutput
[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput
## SemanticSegmenterOutput
[[autodoc]] modeling_outputs.SemanticSegmenterOutput
## ImageClassifierOutput
[[autodoc]] modeling_outputs.ImageClassifierOutput
## ImageClassifierOutputWithNoAttention
[[autodoc]] modeling_outputs.ImageClassifierOutputWithNoAttention
## DepthEstimatorOutput
[[autodoc]] modeling_outputs.DepthEstimatorOutput
## Wav2Vec2BaseModelOutput
[[autodoc]] modeling_outputs.Wav2Vec2BaseModelOutput
## XVectorOutput
[[autodoc]] modeling_outputs.XVectorOutput
## Seq2SeqTSModelOutput
[[autodoc]] modeling_outputs.Seq2SeqTSModelOutput
## Seq2SeqTSPredictionOutput
[[autodoc]] modeling_outputs.Seq2SeqTSPredictionOutput
## SampleTSPredictionOutput
[[autodoc]] modeling_outputs.SampleTSPredictionOutput
## TFBaseModelOutput
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
## TFBaseModelOutputWithPooling
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
## TFBaseModelOutputWithPoolingAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
## TFBaseModelOutputWithPast
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
## TFBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
## TFSeq2SeqModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
## TFCausalLMOutput
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
## TFCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
## TFCausalLMOutputWithPast
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
## TFMaskedLMOutput
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
## TFSeq2SeqLMOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
## TFNextSentencePredictorOutput
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
## TFSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
## TFSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
## TFMultipleChoiceModelOutput
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
## TFTokenClassifierOutput
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
## TFQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
## TFSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
## FlaxBaseModelOutput
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
## FlaxBaseModelOutputWithPast
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
## FlaxBaseModelOutputWithPooling
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
## FlaxBaseModelOutputWithPastAndCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
## FlaxSeq2SeqModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
## FlaxCausalLMOutputWithCrossAttentions
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
## FlaxMaskedLMOutput
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
## FlaxSeq2SeqLMOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
## FlaxNextSentencePredictorOutput
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
## FlaxSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
## FlaxSeq2SeqSequenceClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
## FlaxMultipleChoiceModelOutput
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
## FlaxTokenClassifierOutput
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
## FlaxQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
## FlaxSeq2SeqQuestionAnsweringModelOutput
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/data_collator.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ããŒã¿ç
§åè


ããŒã¿ç
§ååšã¯ãããŒã¿ã»ããèŠçŽ ã®ãªã¹ããå
¥åãšããŠããããã圢æãããªããžã§ã¯ãã§ãããããã®èŠçŽ ã¯ã`train_dataset` ãŸã㯠`eval_dataset` ã®èŠçŽ ãšåãåã§ãã

ããããæ§ç¯ã§ããããã«ãããŒã¿ç
§ååšã¯äœããã®åŠç (ããã£ã³ã°ãªã©) ãé©çšããå ŽåããããŸãã[`DataCollatorForLanguageModeling`] ã®ããã«ã圢æãããããã«å¯ŸããŠã©ã³ãã ãªããŒã¿æ¡åŒµ (ã©ã³ãã ãã¹ãã³ã°ãªã©) ãé©çšãããã®ããããŸã (以äžã®ã¹ã±ãããåç
§)ã

䜿çšäŸã¯ã[ãµã³ãã« ã¹ã¯ãªãã](../examples) ãŸã㯠[ãµã³ãã« ããŒãããã¯](../notebooks) ã«ãããŸãã
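åèãŸã§ã«ãåçããã£ã³ã°ãè¡ã [`DataCollatorWithPadding`] ã®å
žåçãªäœ¿ãæ¹ã®ã¹ã±ããã¯æ¬¡ã®ãšããã§ã (ã¢ãã«åã¯äžäŸã§ã)ã

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# é·ãã®ç°ãªã 2 ã€ã®ããŒã¯ã³åæžã¿èŠçŽ ã 1 ã€ã®ãããã«ãŸãšãã
features = [tokenizer("Hello world"), tokenizer("A longer sentence than the first one")]
batch = data_collator(features)
print(batch["input_ids"].shape)  # ãããå
ã®æé·ç³»åã«åãããŠããã£ã³ã°ããã
```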
## Default data collator
[[autodoc]] data.data_collator.default_data_collator
## DefaultDataCollator
[[autodoc]] data.data_collator.DefaultDataCollator
## DataCollatorWithPadding
[[autodoc]] data.data_collator.DataCollatorWithPadding
## DataCollatorForTokenClassification
[[autodoc]] data.data_collator.DataCollatorForTokenClassification
## DataCollatorForSeq2Seq
[[autodoc]] data.data_collator.DataCollatorForSeq2Seq
## DataCollatorForLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
## DataCollatorForWholeWordMask
[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
## DataCollatorForPermutationLanguageModeling
[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
- numpy_mask_tokens
- tf_mask_tokens
- torch_mask_tokens
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/onnx.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Exporting ð€ Transformers models to ONNX
ð€ Transformers ã¯ã`transformers.onnx` ããã±ãŒãžãæäŸããŸããèšå®ãªããžã§ã¯ããå©çšããããšã§ãã¢ãã«ã®ãã§ãã¯ãã€ã³ãã ONNX ã°ã©ãã«å€æã§ããŸãã詳现ã¯[ã¬ã€ã](../serialization)ãåç
§ããŠãã ããã
## ONNX Configurations
ãšã¯ã¹ããŒããããã¢ãã«ã¢ãŒããã¯ãã£ã®ã¿ã€ãã«å¿ããŠç¶æ¿ãã¹ãã以äžã® 3 ã€ã®æœè±¡ã¯ã©ã¹ãæäŸããŠããŸãïŒ
* ãšã³ã³ãŒããŒããŒã¹ã®ã¢ãã«ã¯ [`~onnx.config.OnnxConfig`] ãç¶æ¿ããŸãã
* ãã³ãŒããŒããŒã¹ã®ã¢ãã«ã¯ [`~onnx.config.OnnxConfigWithPast`] ãç¶æ¿ããŸãã
* ãšã³ã³ãŒããŒã»ãã³ãŒããŒã¢ãã«ã¯ [`~onnx.config.OnnxSeq2SeqConfigWithPast`] ãç¶æ¿ããŠããŸãã
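åèãŸã§ã«ãã³ãã³ãã©ã€ã³ããã®ãšã¯ã¹ããŒãã¯äŸãã°æ¬¡ã®ããã«å®è¡ã§ããŸã (ã¢ãã«åãšåºåå
ãã£ã¬ã¯ã㪠`onnx/` ã¯äžäŸã§ã)ã

```bash
python -m transformers.onnx --model=distilbert/distilbert-base-uncased onnx/
```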
### OnnxConfig
[[autodoc]] onnx.config.OnnxConfig
### OnnxConfigWithPast
[[autodoc]] onnx.config.OnnxConfigWithPast
### OnnxSeq2SeqConfigWithPast
[[autodoc]] onnx.config.OnnxSeq2SeqConfigWithPast
## ONNX Features
å ONNX æ§æã¯ã次ã®ããšãå¯èœã«ããäžé£ã® _æ©èœ_ ã«é¢é£ä»ããããŠããŸãã
ããŸããŸãªã¿ã€ãã®ããããžãŸãã¯ã¿ã¹ã¯ã®ã¢ãã«ããšã¯ã¹ããŒãããŸãã
### FeaturesManager
[[autodoc]] onnx.features.FeaturesManager
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/optimizer_schedules.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Optimization
`.optimization` ã¢ãžã¥ãŒã«ã¯ä»¥äžãæäŸããŸãã

- ã¢ãã«ã®åŸ®èª¿æŽã«äœ¿çšã§ãããéã¿æžè¡°ãä¿®æ£ããããªããã£ãã€ã¶ãŒ
- `_LRSchedule` ããç¶æ¿ããã¹ã±ãžã¥ãŒã« ãªããžã§ã¯ãã®åœ¢åŒã®ããã€ãã®ã¹ã±ãžã¥ãŒã«
- è€æ°ã®ãããã®åŸé
ã环ç©ããããã®åŸé
环ç©ã¯ã©ã¹

å
žåçãªçµã¿åããã®äŸã以äžã«ç€ºããŸãã
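åèãŸã§ã«ããŠã©ãŒã ã¢ããä»ãç·åœ¢ã¹ã±ãžã¥ãŒã©ãŒãšãªããã£ãã€ã¶ãŒãçµã¿åãããã¹ã±ããã¯æ¬¡ã®ãšããã§ã (ã¢ãã«ãšãã€ããŒãã©ã¡ãŒã¿ã¯èª¬æçšã®äžäŸã§ã)ã

```python
import torch

from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # ä»®ã®ã¢ãã«
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
num_training_steps = 1000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
)

for step in range(num_training_steps):
    # ... forward / backward ã®åŸ ...
    optimizer.step()
    scheduler.step()  # ã¹ãããããšã«åŠç¿çãæŽæ°ãã
    optimizer.zero_grad()
```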
## AdamW (PyTorch)
[[autodoc]] AdamW
## AdaFactor (PyTorch)
[[autodoc]] Adafactor
## AdamWeightDecay (TensorFlow)
[[autodoc]] AdamWeightDecay
[[autodoc]] create_optimizer
## Schedules
### Learning Rate Schedules (Pytorch)
[[autodoc]] SchedulerType
[[autodoc]] get_scheduler
[[autodoc]] get_constant_schedule
[[autodoc]] get_constant_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>
[[autodoc]] get_cosine_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>
[[autodoc]] get_linear_schedule_with_warmup
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>
[[autodoc]] get_polynomial_decay_schedule_with_warmup
[[autodoc]] get_inverse_sqrt_schedule
### Warmup (TensorFlow)
[[autodoc]] WarmUp
## Gradient Strategies
### GradientAccumulator (TensorFlow)
[[autodoc]] GradientAccumulator
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/pipelines.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Pipelines
ãã€ãã©ã€ã³ã¯ãæšè«ã«ã¢ãã«ã䜿ãããã®ç°¡åã§åªããæ¹æ³ã§ãããã€ãã©ã€ã³ã¯ãã©ã€ãã©ãªã®è€éãªã³ãŒãã®ã»ãšãã©ãæœè±¡åãããªããžã§ã¯ãã§ãååä»ãåºæè¡šçŸèªè (Named Entity Recognition)ããã¹ã¯èšèªã¢ããªã³ã°ãææ
åæãç¹åŸŽæœåºã質åå¿çãªã©ã®ã¿ã¹ã¯ã«ç¹åããã·ã³ãã«ãª API ãæäŸããŸãã詳现ã«ã€ããŠã¯ã[ã¿ã¹ã¯æŠèŠ](../task_summary)ãåç
§ããŠãã ããã
ãã€ãã©ã€ã³ã®æœè±¡åã«ã¯2ã€ã®ã«ããŽãªãŒãããïŒ
- [`pipeline`] ã¯ãä»ã®ãã¹ãŠã®ãã€ãã©ã€ã³ãã«ãã»ã«åããæã匷åãªãªããžã§ã¯ãã§ãã
- ã¿ã¹ã¯åºæã®ãã€ãã©ã€ã³ã¯ã[ãªãŒãã£ãª](#audio)ã[ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³](#computer-vision)ã[èªç¶èšèªåŠç](#natural-language-processing)ãããã³ [ãã«ãã¢ãŒãã«](#multimodal) ã¿ã¹ã¯ã§äœ¿çšã§ããŸãã
## The pipeline abstraction
*ãã€ãã©ã€ã³* æœè±¡åã¯ãä»ã®ãã¹ãŠã®å©çšå¯èœãªãã€ãã©ã€ã³ã®ã©ãããŒã§ããä»ã®ãã€ãã©ã€ã³ãšåæ§ã«ã€ã³ã¹ã¿ã³ã¹åãããããã«å€ãã®å©äŸ¿æ§ãæäŸããŸãã

1 ã€ã®é
ç®ã«å¯ŸããåçŽãªåŒã³åºã:
```python
>>> from transformers import pipeline

>>> pipe = pipeline("text-classification")
>>> pipe("This restaurant is awesome")
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
```
[ãã](https://huggingface.co) ã®ç¹å®ã®ã¢ãã«ã䜿çšãããå Žåãããäžã®ã¢ãã«ããã§ã«ã¿ã¹ã¯ãå®çŸ©ããŠããã°ãã¿ã¹ã¯ã®æå®ã¯çç¥ã§ããŸãã
```python
>>> pipe = pipeline(model="FacebookAI/roberta-large-mnli")
>>> pipe("This restaurant is awesome")
[{'label': 'NEUTRAL', 'score': 0.7313136458396912}]
```
è€æ°ã®é
ç®ã«å¯ŸããŠãã€ãã©ã€ã³ãåŒã³åºãã«ã¯ã*list* ãæž¡ããŸãã
```python
>>> pipe = pipeline("text-classification")
>>> pipe(["This restaurant is awesome", "This restaurant is awful"])
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
```
å®å
šãªããŒã¿ã»ãããå埩åŠçããã«ã¯ã`Dataset` ãçŽæ¥äœ¿çšããããšããå§ãããŸããããã«ãããããŒã¿ã»ããå
šäœãäžåºŠã«å²ãåœãŠãå¿
èŠããèªåã§ãããåŠçãè¡ãå¿
èŠããªããªããŸããããã¯ãGPU äžã®ã«ã¹ã¿ã ã«ãŒããšåããããé«éã«åäœããã¯ãã§ããããã§ãªãå Žåã¯ãé æ
®ãªã issue ãäœæããŠãã ããã
```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
ããç°¡åã«äœ¿ããããã«ããžã§ãã¬ãŒã¿ãŒã䜿çšããããšãã§ããŸãã
```python
from transformers import pipeline
pipe = pipeline("text-classification")
def data():
while True:
# This could come from a dataset, a database, a queue or HTTP request
# in a server
# Caveat: because this is iterative, you cannot use `num_workers > 1` variable
# to use multiple threads to preprocess data. You can still have 1 thread that
# does the preprocessing while the main runs the big inference
yield "This is a test"
for out in pipe(data()):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
[[autodoc]] pipeline
## Pipeline batching
ãã¹ãŠã®ãã€ãã©ã€ã³ã§ãããåŠçã䜿çšã§ããŸããããã¯ããã€ãã©ã€ã³ãã¹ããªãŒãã³ã°æ©èœã䜿çšãããšãã«ã¯åžžã«æ©èœããŸã (ã€ãŸãããªã¹ãã`Dataset`ããŸã㯠`generator` ãæž¡ããšã)ã
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
<Tip warning={true}>
ãã ããããã«ãã£ãŠããã©ãŒãã³ã¹ãèªåçã«åäžããããã§ã¯ãããŸãããããŒããŠã§ã¢ãããŒã¿ã䜿çšãããå®éã®ã¢ãã«ã«å¿ããŠã10 åã®é«éåã«ã 5 åã®äœéåã«ããªãåŸãŸãã

</Tip>

äž»ã«é«éåãšãªãäŸ:
```python
from transformers import pipeline
from torch.utils.data import Dataset
from tqdm.auto import tqdm
pipe = pipeline("text-classification", device=0)
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
return "This is a test"
dataset = MyDataset()
for batch_size in [1, 8, 64, 256]:
print("-" * 30)
print(f"Streaming batch_size={batch_size}")
for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
pass
```
```
# On GTX 970
------------------------------
Streaming no batching
100%|ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns, saturated the GPU)
```
æãé床ãäœäžããäŸ:
```python
class MyDataset(Dataset):
def __len__(self):
return 5000
def __getitem__(self, i):
if i % 64 == 0:
n = 100
else:
n = 1
return "This is a test" * n
```
ããã¯ãä»ã®æã«æ¯ã¹ãŠæã«éåžžã«é·ãæãçŸããã±ãŒã¹ã§ãããã®å Žåã**ããã**å
šäœã 400 ããŒã¯ã³ã®é·ãã«ãªãå¿
èŠããããããããããå
šäœã [64, 4] ã§ã¯ãªã [64, 400] ã«ãªããé床ã倧å¹
ã«äœäžããŸããããã«æªãããšã«ããããã倧ãããªããšãããã°ã©ã ã¯åçŽã«ã¯ã©ãã·ã¥ããŸãã
```
------------------------------
Streaming no batching
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|âââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|ââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââââ| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
ãã®åé¡ã«å¯Ÿããé©å㪠(äžè¬çãª) 解決çã¯ãªãã䜿çšã§ããè·é¢ã¯ãŠãŒã¹ã±ãŒã¹ã«ãã£ãŠç°ãªãå ŽåããããŸããã®ã«ãŒã«
芪æïŒ
ãŠãŒã¶ãŒã«ãšã£ãŠã®çµéšåã¯æ¬¡ã®ãšããã§ãã
- **ããŒããŠã§ã¢ã§ãå®éã®è² è·ã«å¯Ÿããããã©ãŒãã³ã¹ã枬å®ããŸãã枬å®ãç¹°ãè¿ãããšãé²ãã¹ãå¯äžã®æ¹æ³ã§ãã**
- ã¬ã€ãã³ã·ã«å¶çŽãããå Žå (å®éã®è£œåãæšè«ãå®è¡ããŠããå Žå)ããããåŠçãè¡ããªãã§ãã ããã
- CPU ã䜿çšããŠããå Žåã¯ããããåŠçãè¡ããªãã§ãã ããã
- GPU ã§ã¹ã«ãŒãããéèŠã®åŠçãè¡ã£ãŠããå Žå (倧éã®éçããŒã¿ã§ã¢ãã«ãå®è¡ããå Žå):
  - sequence_length (ãèªç¶ãªãããŒã¿) ã®ãµã€ãºã«ã€ããŠãŸã£ããèŠåœãã€ããªãå Žåã¯ãããã©ã«ãã§ã¯ãããåŠçãããã枬å®ããªããæ«å®çã«è¿œå ããŠã¿ãŠã倱æãããšãã«å埩ã§ããããã« OOM ãã§ãã¯ãè¿œå ããŸã (sequence_length ãå¶åŸ¡ããªãéããã©ããã®æç¹ã§å€±æããŸã)ã
  - sequence_length ãéåžžã«èŠåçã§ããå Žåã¯ããããåŠçãéåžžã«æå¹ã§ããå¯èœæ§ãé«ãã枬å®ããªãã OOM ãçºçãããŸã§ãããããã·ã¥ããŠãã ããã
  - GPU ã倧ããã»ã©ããããåŠçãæå¹ã«ãªãå¯èœæ§ãé«ããªããŸãã
- ãããåŠçãæå¹ã«ããå Žåã¯ãOOM ãé©åã«åŠçã§ããããšã確èªããŠãã ããã
## Pipeline chunk batching
`zero-shot-classification` ãš `question-answering` ã¯ãåäžã®å
¥åããã¢ãã«ã®è€æ°ã®åæ¹ãã¹ãçºçããå¯èœæ§ããããšããæå³ã§ãå°ãç¹æ®ã§ããéåžžã®ç¶æ³ã§ã¯ãããã«ãã `batch_size` åŒæ°ã«é¢ããåé¡ãçºçããŸãã

ãã®åé¡ãåé¿ããããã«ããããã®ãã€ãã©ã€ã³ã¯ã©ã¡ããå°ãç¹æ®ã«ãªã£ãŠããã代ããã« `ChunkPipeline` ã«ãªã£ãŠããŸããéåžžã® `Pipeline` ã§ã¯ãèŠããã«ïŒ
```python
preprocessed = pipe.preprocess(inputs)
model_outputs = pipe.forward(preprocessed)
outputs = pipe.postprocess(model_outputs)
```
ä»ã¯æ¬¡ã®ããã«ãªããŸã:
```python
all_model_outputs = []
for preprocessed in pipe.preprocess(inputs):
model_outputs = pipe.forward(preprocessed)
all_model_outputs.append(model_outputs)
outputs = pipe.postprocess(all_model_outputs)
```
ãã€ãã©ã€ã³ã¯åãæ¹æ³ã§äœ¿çšãããŸãããã®éãã¯ãåŒã³åºãå
ã®ã³ãŒãããã¯å®å
šã«ééã§ãããã€ãã©ã€ã³ã¯ãããèªåçã«åŠçã§ãããããåŒã³åºãåŽã§ç°¡ç¥åããããã¥ãŒãæ°ã«ããå¿
èŠã¯ãããŸãããå®éã«ããªã¬ãŒãããåæ¹ãã¹ã®æ°ãšã¯ç¬ç«ã«ã`batch_size` ãæé©åã§ããŸããåã®ã»ã¯ã·ã§ã³ã®æ³šæäºé
ã¯ãã®ãŸãŸé©çšãããŸãã
## Pipeline custom code
ç¹å®ã®ãã€ãã©ã€ã³ããªãŒããŒã©ã€ãããå Žåã¯ãç®ã®åã®ã¿ã¹ã¯ã«é¢ãã issue ãäœæããããšãèºèºããªãã§ãã ããããã€ãã©ã€ã³ã®ç®æšã¯ã䜿ãããããã»ãšãã©ã®ãŠãŒã¶ãŒããµããŒãããããšãªã®ã§ã`transformers` ãããªãã®ãŠãŒã¹ã±ãŒã¹ããµããŒãããããã«ãªãå¯èœæ§ããããŸãã
åçŽã«è©ŠããŠã¿ããå Žåã¯ã次ã®ããšãã§ããŸãã
- éžæãããã€ãã©ã€ã³ããµãã¯ã©ã¹åããŸã
```python
from transformers import TextClassificationPipeline, pipeline


class MyPipeline(TextClassificationPipeline):
    def postprocess(self, model_outputs, **kwargs):
        # Your code goes here
        scores = super().postprocess(model_outputs, **kwargs)
        # And here
        return scores


my_pipeline = MyPipeline(model=model, tokenizer=tokenizer)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
```
ããã«ãããå¿
èŠãªã«ã¹ã¿ã ã³ãŒãããã¹ãŠå®è¡ã§ããããã«ãªããŸãã
## Implementing a pipeline
[Implementing a new pipeline](../add_new_pipeline)
## Audio
ãªãŒãã£ãª ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### AudioClassificationPipeline
[[autodoc]] AudioClassificationPipeline
- __call__
- all
### AutomaticSpeechRecognitionPipeline
[[autodoc]] AutomaticSpeechRecognitionPipeline
- __call__
- all
### TextToAudioPipeline
[[autodoc]] TextToAudioPipeline
- __call__
- all
### ZeroShotAudioClassificationPipeline
[[autodoc]] ZeroShotAudioClassificationPipeline
- __call__
- all
## Computer vision
ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### DepthEstimationPipeline
[[autodoc]] DepthEstimationPipeline
- __call__
- all
### ImageClassificationPipeline
[[autodoc]] ImageClassificationPipeline
- __call__
- all
### ImageSegmentationPipeline
[[autodoc]] ImageSegmentationPipeline
- __call__
- all
### ImageToImagePipeline
[[autodoc]] ImageToImagePipeline
- __call__
- all
### ObjectDetectionPipeline
[[autodoc]] ObjectDetectionPipeline
- __call__
- all
### VideoClassificationPipeline
[[autodoc]] VideoClassificationPipeline
- __call__
- all
### ZeroShotImageClassificationPipeline
[[autodoc]] ZeroShotImageClassificationPipeline
- __call__
- all
### ZeroShotObjectDetectionPipeline
[[autodoc]] ZeroShotObjectDetectionPipeline
- __call__
- all
## Natural Language Processing
èªç¶èšèªåŠçã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### ConversationalPipeline
[[autodoc]] Conversation
[[autodoc]] ConversationalPipeline
- __call__
- all
### FillMaskPipeline
[[autodoc]] FillMaskPipeline
- __call__
- all
### NerPipeline
[[autodoc]] NerPipeline
詳现ã«ã€ããŠã¯ã[`TokenClassificationPipeline`] ãåç
§ããŠãã ããã
### QuestionAnsweringPipeline
[[autodoc]] QuestionAnsweringPipeline
- __call__
- all
### SummarizationPipeline
[[autodoc]] SummarizationPipeline
- __call__
- all
### TableQuestionAnsweringPipeline
[[autodoc]] TableQuestionAnsweringPipeline
- __call__
### TextClassificationPipeline
[[autodoc]] TextClassificationPipeline
- __call__
- all
### TextGenerationPipeline
[[autodoc]] TextGenerationPipeline
- __call__
- all
### Text2TextGenerationPipeline
[[autodoc]] Text2TextGenerationPipeline
- __call__
- all
### TokenClassificationPipeline
[[autodoc]] TokenClassificationPipeline
- __call__
- all
### TranslationPipeline
[[autodoc]] TranslationPipeline
- __call__
- all
### ZeroShotClassificationPipeline
[[autodoc]] ZeroShotClassificationPipeline
- __call__
- all
## Multimodal
ãã«ãã¢ãŒãã« ã¿ã¹ã¯ã«äœ¿çšã§ãããã€ãã©ã€ã³ã«ã¯æ¬¡ã®ãã®ããããŸãã
### DocumentQuestionAnsweringPipeline
[[autodoc]] DocumentQuestionAnsweringPipeline
- __call__
- all
### FeatureExtractionPipeline
[[autodoc]] FeatureExtractionPipeline
- __call__
- all
### ImageFeatureExtractionPipeline
[[autodoc]] ImageFeatureExtractionPipeline
- __call__
- all
### ImageToTextPipeline
[[autodoc]] ImageToTextPipeline
- __call__
- all
### VisualQuestionAnsweringPipeline
[[autodoc]] VisualQuestionAnsweringPipeline
- __call__
- all
## Parent class: `Pipeline`
[[autodoc]] Pipeline
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/logging.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Logging
ð€ Transformersã«ã¯ãã©ã€ãã©ãªã®è©³çŽ°åºŠãç°¡åã«èšå®ã§ããäžå€®éäžåã®ãã®ã³ã°ã·ã¹ãã ããããŸãã
çŸåšãã©ã€ãã©ãªã®ããã©ã«ãã®è©³çŽ°åºŠã¯ãWARNINGãã§ãã
詳现床ãå€æŽããã«ã¯ãçŽæ¥èšå®ã¡ãœããã®1ã€ã䜿çšããã ãã§ããäŸãã°ã詳现床ãINFOã¬ãã«ã«å€æŽããæ¹æ³ã¯ä»¥äžã®éãã§ãã
```python
import transformers
transformers.logging.set_verbosity_info()
```
ç°å¢å€æ° `TRANSFORMERS_VERBOSITY` ã䜿çšããŠãããã©ã«ãã®è©³çŽ°åºŠããªãŒããŒã©ã€ãããããšãã§ããŸãã`debug`ã`info`ã`warning`ã`error`ã`critical` ã®ããããã«èšå®ã§ããŸããäŸãã°ïŒ
```bash
TRANSFORMERS_VERBOSITY=error ./myprogram.py
```
ããã«ãäžéšã®ãèŠåãã¯ãç°å¢å€æ° `TRANSFORMERS_NO_ADVISORY_WARNINGS` ã *1* ãªã©ã®çå€ã«èšå®ããããšã§ç¡å¹ã«ã§ããŸããããã«ããã[`logger.warning_advice`] ã䜿çšããŠãã°ã«èšé²ãããèŠåãç¡å¹ã«ãªããŸããäŸãã°ïŒ
```bash
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
```
以äžã¯ãç¬èªã®ã¢ãžã¥ãŒã«ãŸãã¯ã¹ã¯ãªããã§ã©ã€ãã©ãªãšåããã¬ãŒã䜿çšããæ¹æ³ã®äŸã§ãã
```python
from transformers.utils import logging
logging.set_verbosity_info()
logger = logging.get_logger("transformers")
logger.info("INFO")
logger.warning("WARN")
```
ãã®ãã®ã³ã° ã¢ãžã¥ãŒã«ã®ãã¹ãŠã®ã¡ãœããã¯ä»¥äžã«ææžåãããŠããŸããäž»ãªã¡ãœããã¯æ¬¡ã®ãšããã§ãã
[`logging.get_verbosity`] ã¯ããã¬ãŒã®çŸåšã®è©³çŽ°åºŠã¬ãã«ãååŸããŸãã
[`logging.set_verbosity`] ã¯ãéžæããã¬ãã«ã«è©³çŽ°åºŠãèšå®ããŸããåã¬ãã« (ããã³æ¬åŒ§å
ã®å¯Ÿå¿ãã int å€) ããåºåãå°ãªããã®ããå€ããã®ã®é ã«ç€ºããšã次ã®ãšããã§ãã
- `transformers.logging.CRITICAL` ãŸã㯠`transformers.logging.FATAL` (int å€ã50): æãé倧ãªãšã©ãŒã®ã¿ãã¬ããŒãããŸãã
- `transformers.logging.ERROR` (int å€ã40): ãšã©ãŒã®ã¿ãã¬ããŒãããŸãã
- `transformers.logging.WARNING` ãŸã㯠`transformers.logging.WARN` (int å€ã30): ãšã©ãŒãšèŠåãã¬ããŒãããŸããããã¯ã©ã€ãã©ãªã§äœ¿çšãããããã©ã«ãã®ã¬ãã«ã§ãã
- `transformers.logging.INFO` (int å€ã20): ãšã©ãŒãèŠåãããã³åºæ¬æ
å ±ãã¬ããŒãããŸãã
- `transformers.logging.DEBUG` (int å€ã10): ãã¹ãŠã®æ
å ±ãã¬ããŒãããŸãã
ããã©ã«ãã§ã¯ãã¢ãã«ã®ããŠã³ããŒãäžã«ãtqdmãé²è¡ç¶æ³ããŒã衚瀺ãããŸãã [`logging.disable_progress_bar`] ããã³ [`logging.enable_progress_bar`] ã䜿çšããŠããã®åäœãæå¶ãŸãã¯æå¶è§£é€ã§ããŸãã
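äŸãã°ã詳现床ã®èšå®ãšé²è¡ç¶æ³ããŒã®å¶åŸ¡ã¯æ¬¡ã®ããã«çµã¿åãããããŸãã
```python
from transformers.utils import logging

logging.set_verbosity_warning()  # WARNING 以äžã®ã¿åºå
logging.disable_progress_bar()   # ããŠã³ããŒãæã® tqdm ããŒãéè¡šç€ºã«ãã

# ... ã¢ãã«ã®ããŠã³ããŒããªã© ...

logging.enable_progress_bar()    # é²è¡ç¶æ³ããŒãå床æå¹ã«ãã
```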
## `logging` vs `warnings`
Python ã«ã¯ããã°ãã°çµã¿åãããŠäœ¿çšããã 2 ã€ã®ãã®ã³° ã·ã¹ãã ããããŸããäžã§èª¬æãã `logging` ãšã`warnings` ã§ãã
`warnings` ã䜿çšãããšãèŠåãç¹å®ã®ãã±ããã«ããã«åé¡ã§ããŸããäŸãã°ããã§ã«éæšå¥šã«ãªã£ãæ©èœããã¹ã«ã¯ `FutureWarning`ãä»åŸã®éæšå¥šãäºåããå Žåã«ã¯ `DeprecationWarning` ã䜿ããŸãã
`transformers` ã©ã€ãã©ãªã§ã¯äž¡æ¹ã䜿çšããŠããŸãã`logging` ã® `captureWarnings` ã¡ãœããã掻çšã»é©å¿ããããšã§ããããã®èŠåã¡ãã»ãŒãžãäžèšã®è©³çŽ°åºŠèšå®ããŒã«ã§ç®¡çã§ããããã«ããŠããŸãã
ããã¯ã©ã€ãã©ãªã®éçºè
ã«ãšã£ãŠäœãæå³ããã®ã§ãããã?次ã®ãã¥ãŒãªã¹ãã£ãã¯ãå°éããå¿
èŠããããŸãã
- `warnings` ã¯ãã©ã€ãã©ãªããã³ `transformers` ã«äŸåããã©ã€ãã©ãªã®éçºè
ãåªå
ããŠäœ¿çšãã¹ãã§ãã
- `logging` ã¯ãæ¥åžžã®ãããžã§ã¯ãã§ã©ã€ãã©ãªã䜿çšãããšã³ããŠãŒã¶ãŒåãã«äœ¿çšãã¹ãã§ãã
以äžã® `captureWarnings` ã¡ãœããã®ãªãã¡ã¬ã³ã¹ãåç
§ããŠãã ããã
[[autodoc]] logging.captureWarnings
## Base setters
[[autodoc]] logging.set_verbosity_error
[[autodoc]] logging.set_verbosity_warning
[[autodoc]] logging.set_verbosity_info
[[autodoc]] logging.set_verbosity_debug
## Other functions
[[autodoc]] logging.get_verbosity
[[autodoc]] logging.set_verbosity
[[autodoc]] logging.get_logger
[[autodoc]] logging.enable_default_handler
[[autodoc]] logging.disable_default_handler
[[autodoc]] logging.enable_explicit_format
[[autodoc]] logging.reset_format
[[autodoc]] logging.enable_progress_bar
[[autodoc]] logging.disable_progress_bar
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/text_generation.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Generation
åãã¬ãŒã ã¯ãŒã¯ã«ã¯ãããããã® `GenerationMixin` ã¯ã©ã¹ã«å®è£
ãããããã¹ãçæã®ããã® generate ã¡ãœããããããŸãã
- PyTorch [`~generation.GenerationMixin.generate`] 㯠[`~generation.GenerationMixin`] ã«å®è£
ãããŠããŸãã
- TensorFlow [`~generation.TFGenerationMixin.generate`] 㯠[`~generation.TFGenerationMixin`] ã«å®è£
ãããŠããŸãã
- Flax/JAX [`~generation.FlaxGenerationMixin.generate`] 㯠[`~generation.FlaxGenerationMixin`] ã«å®è£
ãããŠããŸãã
éžæãããã¬ãŒã ã¯ãŒã¯ã«é¢ä¿ãªãã[`~generation.GenerationConfig`] ã¯ã©ã¹ã®ã€ã³ã¹ã¿ã³ã¹ã䜿çšããŠçæã¡ãœããããã©ã¡ãŒã¿åã§ããŸããçæã¡ãœããã®åäœãå¶åŸ¡ããçæãã©ã¡ãŒã¿ã®å®å
šãªãªã¹ãã«ã€ããŠã¯ããã®ã¯ã©ã¹ãåç
§ããŠãã ããã
ã¢ãã«ã®çææ§æãæ€æ»ããæ¹æ³ãããã©ã«ãå€ã®å
容ããã©ã¡ãŒã¿ãŒãã¢ãããã¯ã«å€æŽããæ¹æ³ãã«ã¹ã¿ãã€ãºããçææ§æãäœæããŠä¿åããæ¹æ³ã«ã€ããŠã¯ã[ããã¹ãçææŠç¥ã¬ã€ã](../generation_strategies) ãåç
§ããŠãã ããããã®ã¬ã€ãã§ã¯ãããŒã¯ã³ ã¹ããªãŒãã³ã°ãªã©ã®é¢é£æ©èœã®äœ¿çšæ¹æ³ã«ã€ããŠã説æããŠããŸãã
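以äžã¯ã[`~generation.GenerationConfig`] ã§çæããã©ã¡ãŒã¿åããæå°éã®äŸã§ãããã©ã¡ãŒã¿ã®å€ã¯èª¬æçšã®ä»®å®ã§ãã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

generation_config = GenerationConfig(
    max_new_tokens=20,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```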
## GenerationConfig
[[autodoc]] generation.GenerationConfig
- from_pretrained
- from_model_config
- save_pretrained
## GenerationMixin
[[autodoc]] generation.GenerationMixin
- generate
- compute_transition_scores
## TFGenerationMixin
[[autodoc]] generation.TFGenerationMixin
- generate
- compute_transition_scores
## FlaxGenerationMixin
[[autodoc]] generation.FlaxGenerationMixin
- generate
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/quantization.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quantize ð€ Transformers models
## `AutoGPTQ` Integration
ð€ Transformers ã«ã¯ãèšèªã¢ãã«ã§ GPTQ éååãå®è¡ããããã® `optimum` API ãçµ±åãããŠããŸããããã©ãŒãã³ã¹ã倧å¹
ã«äœäžãããããšãªããæšè«ãé«éåããªãããã¢ãã«ã 8ã4ã3ãããã«ã¯ 2 ãããã§ããŒãããã³éååã§ããŸããããã¯ãã»ãšãã©ã® GPU ããŒããŠã§ã¢ã§ãµããŒããããŠããŸãã
éååã¢ãã«ã®è©³çŽ°ã«ã€ããŠã¯ã以äžã確èªããŠãã ããã
- [GPTQ](https://arxiv.org/pdf/2210.17323.pdf) è«æ
- GPTQ éååã«é¢ãã `optimum` [ã¬ã€ã](https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization)
- ããã¯ãšã³ããšããŠäœ¿çšããã [`AutoGPTQ`](https://github.com/PanQiWei/AutoGPTQ) ã©ã€ãã©ãª
### Requirements
以äžã®ã³ãŒããå®è¡ããã«ã¯ã以äžã®èŠä»¶ãã€ã³ã¹ããŒã«ãããŠããå¿
èŠããããŸãïŒ
- ææ°ã® `AutoGPTQ` ã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããïŒ`pip install auto-gptq`
- ææ°ã® `optimum` ããœãŒã¹ããã€ã³ã¹ããŒã«ããïŒ`pip install git+https://github.com/huggingface/optimum.git`
- ææ°ã® `transformers` ããœãŒã¹ããã€ã³ã¹ããŒã«ããïŒ`pip install git+https://github.com/huggingface/transformers.git`
- ææ°ã® `accelerate` ã©ã€ãã©ãªãã€ã³ã¹ããŒã«ããïŒ`pip install --upgrade accelerate`
GPTQçµ±åã¯ä»ã®ãšããããã¹ãã¢ãã«ã®ã¿ããµããŒãããŠããã®ã§ãèŠèŠãé³å£°ããã«ãã¢ãŒãã«ã¢ãã«ã§ã¯äºæãã¬æåã«ééãããããããªãããšã«æ³šæããŠãã ããã
### Load and quantize a model
GPTQ ã¯ãéååã¢ãã«ã䜿çšããåã«éã¿ã®ãã£ãªãã¬ãŒã·ã§ã³ãå¿
èŠãšããéååæ¹æ³ã§ãããã©ã³ã¹ãã©ãŒã㌠ã¢ãã«ãæåããéååããå Žåã¯ãéååã¢ãã«ãäœæãããŸã§ã«æéããããããšããããŸã (`facebook/opt-350m` ã¢ãã«ã® Google Colab ã§ã¯çŽ 5 å)ã
ãããã£ãŠãGPTQ éååã¢ãã«ã䜿çšããã·ããªãªã¯ 2 ã€ãããŸãã1 ã€ç®ã¯ãããã§å©çšå¯èœãªãä»ã®ãŠãŒã¶ãŒã«ãã£ãŠãã§ã«éååãããã¢ãã«ãããŒãããããšã2 ã€ç®ã¯ãã¢ãã«ãæåããéååããŠä¿åãããããã«ããã·ã¥ããŠãä»ã®ãŠãŒã¶ãŒã䜿çšã§ããããã«ããããšã§ãã
#### GPTQ Configuration
ã¢ãã«ãããŒãããŠéååããã«ã¯ã[`GPTQConfig`] ãäœæããå¿
èŠããããŸãã`bits` ã®æ°ãéååã調æŽããããã® `dataset`ãããã³ããŒã¿ã»ãããæºåããããã®ã¢ãã«ã® `Tokenizer` ãæž¡ãå¿
èŠããããŸãã
```python
model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer)
```
ç¬èªã®ããŒã¿ã»ãããæååã®ãªã¹ããšããŠæž¡ãããšãã§ããããšã«æ³šæããŠãã ããããã ããGPTQ è«æã®ããŒã¿ã»ããã䜿çšããããšã匷ããå§ãããŸãã
```python
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
quantization = GPTQConfig(bits=4, dataset = dataset, tokenizer=tokenizer)
```
#### Quantization
`from_pretrained` ã䜿çšãã`quantization_config` ãèšå®ããããšã§ã¢ãã«ãéååã§ããŸãã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=gptq_config)
```
ã¢ãã«ãéååããã«ã¯ GPU ãå¿
èŠã§ããããšã«æ³šæããŠãã ãããã¢ãã«ã¯ CPU ã«é
眮ãããéååã®ããã«ã¢ãžã¥ãŒã«ã GPU ãšã®éã§ååŸã«ç§»åãããŸãã
CPU ãªãããŒãã®äœ¿çšäžã« GPU ã®äœ¿çšéãæ倧åãããå Žåã¯ã`device_map = "auto"` ãèšå®ã§ããŸãã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```
ãã£ã¹ã¯ ãªãããŒãã¯ãµããŒããããŠããªãããšã«æ³šæããŠãã ãããããã«ãããŒã¿ã»ãããåå ã§ã¡ã¢ãªãäžè¶³ããŠããå Žåã¯ã`from_pretrained` 㧠`max_memory` ãæž¡ãå¿
èŠãããå ŽåããããŸãã`device_map` ãš `max_memory` ã®è©³çŽ°ã«ã€ããŠã¯ããã® [ã¬ã€ã](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) ãåç
§ããŠãã ããã
<Tip warning={true}>
GPTQ éååã¯ãçŸæç¹ã§ã¯ããã¹ã ã¢ãã«ã§ã®ã¿æ©èœããŸããããã«ãéååããã»ã¹ã¯ããŒããŠã§ã¢ã«ãã£ãŠã¯é·æéãããå ŽåããããŸã (NVIDIA A100 ã䜿çšããå Žåã175B ã¢ãã« = 4 gpu æé)ãã¢ãã«ã® GPTQ éååããŒãžã§ã³ãååšããªãå Žåã¯ãããã§ç¢ºèªããŠãã ãããããã§ãªãå Žåã¯ãgithub ã§èŠæ±ãéä¿¡ã§ããŸãã
</Tip>
### Push quantized model to ð€ Hub
ä»ã® ð€ ã¢ãã«ãšåæ§ã«ã`push_to_hub` ã䜿çšããŠéååã¢ãã«ãããã«ããã·ã¥ã§ããŸããéååæ§æã¯ä¿åãããã¢ãã«ã«æ²¿ã£ãŠããã·ã¥ãããŸãã
```python
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```
éååãããã¢ãã«ãããŒã«ã« ãã·ã³ã«ä¿åãããå Žåã¯ã`save_pretrained` ã䜿çšããŠè¡ãããšãã§ããŸãã
```python
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```
`device_map` ã䜿çšããŠã¢ãã«ãéååããå Žåã¯ãä¿åããåã«ã¢ãã«å
šäœã GPU ãŸã㯠`cpu` ã®ããããã«ç§»åããŠãã ããã
```python
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```
### Load a quantized model from the ð€ Hub
`from_pretrained`ã䜿çšããŠãéååãããã¢ãã«ãããããããŒãã§ããŸãã
å±æ§ `quantization_config` ãã¢ãã«èšå®ãªããžã§ã¯ãã«ååšããããšã確èªããŠãããã·ã¥ãããéã¿ãéååãããŠããããšã確èªããŸãã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq")
```
å¿
èŠä»¥äžã®ã¡ã¢ãªãå²ãåœãŠãã«ã¢ãã«ãããéãããŒããããå Žåã¯ã`device_map` åŒæ°ã¯éååã¢ãã«ã§ãæ©èœããŸãã`accelerate` ã©ã€ãã©ãªãã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
### Exllama kernels for faster inference
4 ããã ã¢ãã«ã®å Žåãæšè«é床ãé«ããããã« exllama ã«ãŒãã«ã䜿çšã§ããŸããããã©ã«ãã§æå¹ã«ãªã£ãŠããŸãã[`GPTQConfig`] 㧠`disable_exllama` ãæž¡ãããšã§ããã®åäœãå€æŽã§ããŸããããã«ãããèšå®ã«ä¿åãããŠããéååèšå®ã®ãã¡ãã«ãŒãã«ã«é¢é£ããå±æ§ã®ã¿ãäžæžããããŸããããã«ãexllama ã«ãŒãã«ã䜿çšãããå Žåã¯ãã¢ãã«å
šäœã GPU äžã«çœ®ãå¿
èŠããããŸãã
```py
import torch
gptq_config = GPTQConfig(bits=4, disable_exllama=False)
model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config = gptq_config)
```
çŸæç¹ã§ã¯ 4 ããã ã¢ãã«ã®ã¿ããµããŒããããŠããããšã«æ³šæããŠãã ãããããã«ãpeft ã䜿çšããŠéååã¢ãã«ã埮調æŽããŠããå Žåã¯ãexllama ã«ãŒãã«ãéã¢ã¯ãã£ãåããããšããå§ãããŸãã
#### Fine-tune a quantized model
Hugging Face ãšã³ã·ã¹ãã ã«ãããã¢ããã¿ãŒã®å
¬åŒãµããŒãã«ãããGPTQ ã§éååãããã¢ãã«ã埮調æŽã§ããŸãã
詳现ã«ã€ããŠã¯ã[`peft`](https://github.com/huggingface/peft) ã©ã€ãã©ãªãã芧ãã ããã
### Example demo
GPTQ ã䜿çšããŠã¢ãã«ãéååããæ¹æ³ãšãpeft ã䜿çšããŠéååãããã¢ãã«ã埮調æŽããæ¹æ³ã«ã€ããŠã¯ãGoogle Colab [ããŒãããã¯](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) ãåç
§ããŠãã ããã
### GPTQConfig
[[autodoc]] GPTQConfig
## `bitsandbytes` Integration
ð€ Transformers ã¯ã`bitsandbytes` ã§æãããäœ¿ãããŠããã¢ãžã¥ãŒã«ãšç·å¯ã«çµ±åãããŠããŸããæ°è¡ã®ã³ãŒãã§ã¢ãã«ã 8 ããã粟床ã§ããŒãã§ããŸãã
ããã¯ã`bitsandbytes` ã® `0.37.0` ãªãªãŒã¹ä»¥éãã»ãšãã©ã® GPU ããŒããŠã§ã¢ã§ãµããŒããããŠããŸãã
éååæ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ã[LLM.int8()](https://arxiv.org/abs/2208.07339) è«æããŸãã¯ãã®çµ±åã«é¢ãã [ããã°æçš¿](https://huggingface.co/blog/hf-bitsandbytes-integration) ãã芧ãã ããã
`0.39.0` ãªãªãŒã¹ä»¥éã¯ãFP4 ããŒã¿åã掻çšãã4 ãããéååã䜿çšããŠã`device_map` ããµããŒãããä»»æã®ã¢ãã«ãããŒãã§ããŸãã
ç¬èªã® pytorch ã¢ãã«ãéååãããå Žåã¯ãð€ Accelerate ã©ã€ãã©ãªã® [ããã¥ã¡ã³ã](https://huggingface.co/docs/accelerate/main/en/usage_guides/quantization) ããã§ãã¯ããŠãã ããã
`bitsandbytes`çµ±åã䜿çšããŠã§ããããšã¯æ¬¡ã®ãšããã§ã
### General usage
ã¢ãã«ã ð€ Accelerate ã«ããèªã¿èŸŒã¿ããµããŒããã`torch.nn.Linear` ã¬ã€ã€ãŒãå«ãŸããŠããéãã [`~PreTrainedModel.from_pretrained`] ã¡ãœãããåŒã³åºããšãã« `load_in_8bit` ãŸã㯠`load_in_4bit` åŒæ°ã䜿çšããŠã¢ãã«ãéååã§ããŸããããã¯ã©ã®ãããªã¢ããªãã£ã§ãåæ§ã«æ©èœããã¯ãã§ãã
```python
from transformers import AutoModelForCausalLM
model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)
```
ããã©ã«ãã§ã¯ãä»ã®ãã¹ãŠã®ã¢ãžã¥ãŒã« (äŸ: `torch.nn.LayerNorm`) 㯠`torch.float16` ã«å€æãããŸããããã® `dtype` ãå€æŽãããå Žåã¯ã`torch_dtype` åŒæ°ãäžæžãã§ããŸãã
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32)
>>> model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
torch.float32
```
### FP4 quantization
#### Requirements
以äžã®ã³ãŒã ã¹ãããããå®è¡ããåã«ã以äžã®èŠä»¶ãã€ã³ã¹ããŒã«ãããŠããããšã確èªããŠãã ããã
- ææ°ã®`bitsandbytes`ã©ã€ãã©ãª
`pip install bitsandbytes>=0.39.0`
- ææ°ã®`accelerate`ãã€ã³ã¹ããŒã«ãã
`pip install --upgrade accelerate`
- ææ°ã® `transformers` ãã€ã³ã¹ããŒã«ãã
`pip install --upgrade transformers`
#### Tips and best practices
- **é«åºŠãªäœ¿çšæ³:** å©çšå¯èœãªãã¹ãŠã®ãªãã·ã§ã³ã䜿ã£ã 4 ãããéååã®é«åºŠãªäœ¿çšæ³ã«ã€ããŠã¯ã[ãã® Google Colab ããŒãããã¯](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) ãåç
§ããŠãã ããã
- **`batch_size=1` ã«ããé«éæšè«:** bitsandbytes ã® `0.40.0` ãªãªãŒã¹ä»¥éã`batch_size=1` ã§ã¯è¿œå ã®èšå®ãªãã«é«éæšè«ã®æ©æµãåããããŸãã[ãããã®ãªãªãŒã¹ ããŒã](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0) ã確èªãããã®æ©èœã掻çšããã«ã¯ `0.40.0` 以éã®ããŒãžã§ã³ã䜿çšããŠããããšã確èªããŠãã ããã
- **ãã¬ãŒãã³ã°:** [QLoRA è«æ](https://arxiv.org/abs/2305.14314) ã«ãããšã4 ãããåºæ¬ã¢ãã«ããã¬ãŒãã³ã°ããå Žå (äŸ: LoRA ã¢ããã¿ãŒã䜿çš)ã`bnb_4bit_quant_type='nf4'` ã䜿çšããå¿
èŠããããŸãã
- **æšè«:** æšè«ã®å Žåã`bnb_4bit_quant_type` ã¯ããã©ãŒãã³ã¹ã«å€§ããªåœ±é¿ãäžããŸããããã ããã¢ãã«ã®éã¿ãšã®äžè²«æ§ãä¿ã€ããã«ãå¿
ãåã `bnb_4bit_compute_dtype` ããã³ `torch_dtype` åŒæ°ã䜿çšããŠãã ããã
#### Load a large model in 4bit
`.from_pretrained` ã¡ãœãããåŒã³åºããšãã« `load_in_4bit=True` ã䜿çšãããšãã¡ã¢ãªäœ¿çšéã (ãããã) 4 ã§å²ãããšãã§ããŸãã
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)
```
<Tip warning={true}>
ã¢ãã«ã 4 ãããã§ããŒãããããšãçŸæç¹ã§ã¯éååãããéã¿ãããã«ããã·ã¥ããããšã¯ã§ããªãããšã«æ³šæããŠãã ããã 4 ãããã®éã¿ã¯ãŸã ãµããŒããããŠããªãããããã¬ãŒãã³ã°ã§ããªãããšã«ã泚æããŠãã ããããã ãã4 ããã ã¢ãã«ã䜿çšããŠè¿œå ã®ãã©ã¡ãŒã¿ãŒããã¬ãŒãã³ã°ããããšãã§ããŸããããã«ã€ããŠã¯æ¬¡ã®ã»ã¯ã·ã§ã³ã§èª¬æããŸãã
</Tip>
### Load a large model in 8bit
`.from_pretrained` ã¡ãœãããåŒã³åºããšãã« `load_in_8bit=True` åŒæ°ã䜿çšãããšãã¡ã¢ãªèŠä»¶ããããååã«ããŠã¢ãã«ãããŒãã§ããŸãã
```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
```
次ã«ãéåžž [`PreTrainedModel`] ã䜿çšããã®ãšåãããã«ã¢ãã«ã䜿çšããŸãã
`get_memory_footprint` ã¡ãœããã䜿çšããŠãã¢ãã«ã®ã¡ã¢ãª ãããããªã³ãã確èªã§ããŸãã
```python
print(model.get_memory_footprint())
```
ãã®çµ±åã«ããã倧ããªã¢ãã«ãå°ããªããã€ã¹ã«ããŒãããåé¡ãªãå®è¡ã§ããããã«ãªããŸããã
<Tip warning={true}>
ã¢ãã«ã 8 ãããã§ããŒãããããšãææ°ã® `transformers`ãš`bitsandbytes`ã䜿çšããå Žåãé€ããéååãããéã¿ãããã«ããã·ã¥ããããšã¯çŸåšäžå¯èœã§ããããšã«æ³šæããŠãã ããã 8 ãããã®éã¿ã¯ãŸã ãµããŒããããŠããªãããããã¬ãŒãã³ã°ã§ããªãããšã«ã泚æããŠãã ããããã ãã8 ããã ã¢ãã«ã䜿çšããŠè¿œå ã®ãã©ã¡ãŒã¿ãŒããã¬ãŒãã³ã°ããããšãã§ããŸããããã«ã€ããŠã¯æ¬¡ã®ã»ã¯ã·ã§ã³ã§èª¬æããŸãã
ãŸãã`device_map` ã¯ãªãã·ã§ã³ã§ãããå©çšå¯èœãªãªãœãŒã¹äžã§ã¢ãã«ãå¹ççã«ãã£ã¹ããããããããæšè«ã«ã¯ `device_map = 'auto'` ãèšå®ããããšãæšå¥šãããŸãã
</Tip>
#### Advanced use cases
ããã§ã¯ãFP4 éååã䜿çšããŠå®è¡ã§ããããã€ãã®é«åºŠãªäœ¿çšäŸã«ã€ããŠèª¬æããŸãã
##### Change the compute dtype
compute dtype ã¯ãèšç®äžã«äœ¿çšããã dtype ãå€æŽããããã«äœ¿çšãããŸããããšãã°ãé ãç¶æ
ã `float32` ã®ãŸãŸã«ããŠããé«éåã®ããã«èšç®ã bf16 ã«èšå®ã§ããŸããããã©ã«ãã§ã¯ãcompute dtype 㯠`float32` ã«èšå®ãããŸãã
```python
import torch
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```
##### Using NF4 (Normal Float 4) data type
NF4 ããŒã¿åã䜿çšããããšãã§ããŸããããã¯ãæ£èŠååžã䜿çšããŠåæåãããéã¿ã«é©åããæ°ãã 4 ããã ããŒã¿åã§ãããã®å®è¡ã®ããã«:
```python
from transformers import BitsAndBytesConfig
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```
##### Use nested quantization for more memory efficient inference
ãŸãããã¹ããããéååææ³ã䜿çšããããšããå§ãããŸããããã«ãããããã©ãŒãã³ã¹ãè¿œå ã§äœäžããããšãªããããå€ãã®ã¡ã¢ãªãç¯çŽãããŸããçµéšçãªèŠ³å¯ã«ããã°ãããã«ãããNVIDIA-T4 16GB äžã§ã·ãŒã±ã³ã¹é· 1024ãããããµã€ãº 1ãåŸé
环ç©ã¹ããã 4 ã§ llama-13b ã¢ãã«ã埮調æŽããããšãå¯èœã«ãªããŸãã
```python
from transformers import BitsAndBytesConfig
double_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
)
model_double_quant = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=double_quant_config)
```
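äžèšã®ãªãã·ã§ã³ã¯çµã¿åãããããšãã§ããŸãã以äžã¯ãNF4ããã¹ããããéååãbf16 ã® compute dtype ããŸãšããŠæå®ããäžäŸã§ã (ãããã QLoRA è«æã§æšå¥šãããŠããçµã¿åããã§ããã`model_id` ã¯ä»®å®ã§ã)ã
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 ããŒã¿å
    bnb_4bit_use_double_quant=True,        # ãã¹ãããã (äºé) éåå
    bnb_4bit_compute_dtype=torch.bfloat16, # èšç®ã¯ bf16 ã§å®è¡
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```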
### Push quantized models on the ð€ Hub
`push_to_hub`ã¡ãœãããåçŽã«äœ¿çšããããšã§ãéååãããã¢ãã«ãããã«ããã·ã¥ã§ããŸããããã«ãããæåã«éååæ§æãã¡ã€ã«ãããã·ã¥ããã次ã«éååãããã¢ãã«ã®éã¿ãããã·ã¥ãããŸãã
ãã®æ©èœã䜿çšã§ããããã«ããã«ã¯ãå¿
ã `bitsandbytes>0.37.2` ã䜿çšããŠãã ãã (ãã®èšäºã®å·çæç¹ã§ã¯ã`bitsandbytes==0.38.0.post1` ã§ãã¹ãããŸãã)ã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model.push_to_hub("bloom-560m-8bit")
```
<Tip warning={true}>
倧èŠæš¡ãªã¢ãã«ã§ã¯ãããäžã§ 8 ããã ã¢ãã«ãããã·ã¥ããããšã匷ãæšå¥šãããŸããããã«ãããã³ãã¥ããã£ã¯ã¡ã¢ãª ãããããªã³ãã®åæžãšãããšãã° Google Colab ã§ã®å€§èŠæš¡ãªã¢ãã«ã®èªã¿èŸŒã¿ã«ããæ©æµãåããããšãã§ããŸãã
</Tip>
### Load a quantized model from the ð€ Hub
`from_pretrained`ã¡ãœããã䜿çšããŠãããããéååã¢ãã«ãããŒãã§ããŸããå±æ§ `quantization_config` ãã¢ãã«èšå®ãªããžã§ã¯ãã«ååšããããšã確èªããŠãããã·ã¥ãããéã¿ãéååãããŠããããšã確èªããŸãã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```
ãã®å ŽåãåŒæ° `load_in_8bit=True` ãæå®ããå¿
èŠã¯ãããŸãããã`bitsandbytes` ãš `accelerate` ãã€ã³ã¹ããŒã«ãããŠããããšã確èªããå¿
èŠãããããšã«æ³šæããŠãã ããã
ãŸãã`device_map` ã¯ãªãã·ã§ã³ã§ãããå©çšå¯èœãªãªãœãŒã¹äžã§ã¢ãã«ãå¹ççã«ãã£ã¹ããããããããæšè«ã«ã¯ `device_map = 'auto'` ãèšå®ããããšãæšå¥šãããŸãã
### Advanced use cases
ãã®ã»ã¯ã·ã§ã³ã¯ã8 ããã ã¢ãã«ã®ããŒããšå®è¡ä»¥å€ã«äœãã§ããããæ¢æ±ãããäžçŽãŠãŒã¶ãŒã察象ãšããŠããŸãã
#### Offload between `cpu` and `gpu`
ãã®é«åºŠãªäœ¿çšäŸã® 1 ã€ã¯ãã¢ãã«ãããŒããã`CPU`ãš`GPU`ã®éã§éã¿ããã£ã¹ãããã§ããããšã§ãã CPU äžã§ãã£ã¹ããããããéã¿ã¯ **8 ãããã«å€æãããªã**ããã`float32`ã«ä¿æãããããšã«æ³šæããŠãã ããããã®æ©èœã¯ãéåžžã«å€§èŠæš¡ãªã¢ãã«ãé©åããããã®ã¢ãã«ã GPU ãš CPU ã®éã§ãã£ã¹ãããããããŠãŒã¶ãŒã察象ãšããŠããŸãã
ãŸãã`transformers` ãã [`BitsAndBytesConfig`] ãããŒãããå±æ§ `llm_int8_enable_fp32_cpu_offload` ã `True` ã«èšå®ããŸãã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
```
`bigscience/bloom-1b7` ã¢ãã«ãããŒãããå¿
èŠãããã`lm_head` ãé€ãã¢ãã«å
šäœãé
眮ããã®ã«åå㪠GPU RAM ããããšããŸãããããã£ãŠã次ã®ããã«ã«ã¹ã¿ã  device_map ãäœæããŸãã
```python
device_map = {
"transformer.word_embeddings": 0,
"transformer.word_embeddings_layernorm": 0,
"lm_head": "cpu",
"transformer.h": 0,
"transformer.ln_f": 0,
}
```
ãããŠã次ã®ããã«ã¢ãã«ãããŒãããŸãã
```python
model_8bit = AutoModelForCausalLM.from_pretrained(
"bigscience/bloom-1b7",
device_map=device_map,
quantization_config=quantization_config,
)
```
以äžã§ãïŒã¢ãã«ã楜ããã§ãã ããïŒ
#### Play with `llm_int8_threshold`
`llm_int8_threshold` åŒæ°ãæäœããŠãå€ãå€ã®ãããå€ãå€æŽã§ããŸãããå€ãå€ããšã¯ãç¹å®ã®ãããå€ãã倧ããé ãç¶æ
ã®å€ã§ãã
ããã¯ã`LLM.int8()` è«æã§èª¬æãããŠããå€ãå€æ€åºã®ãããå€ã«å¯Ÿå¿ããŸãããã®ãããå€ãè¶
ããé ãç¶æ
ã®å€ã¯å€ãå€ãšã¿ãªããããããã®å€ã«å¯Ÿããæäœã¯ fp16 ã§å®è¡ãããŸããéåžžãå€ã¯æ£èŠååžããŸããã€ãŸããã»ãšãã©ã®å€ã¯ [-3.5, 3.5] ã®ç¯å²å
ã«ãããŸããã倧èŠæš¡ãªã¢ãã«ã§ã¯å€§ããç°ãªãååžã瀺ãäŸå€çãªç³»çµ±çå€ãå€ãããã€ããããŸãããããã®å€ãå€ã¯ãå€ãã®å Žå [-60, -6] ãŸã㯠[6, 60] ã®ç¯å²å
ã«ãããŸããInt8 éååã¯ã絶察å€ã 5 çšåºŠãŸã§ã®å€ã§ã¯ããŸãæ©èœããŸãããããè¶
ãããšããã©ãŒãã³ã¹ã倧å¹
ã«äœäžããŸããé©åãªããã©ã«ãã®ãããå€ã¯ 6 ã§ãããããäžå®å®ãªã¢ãã« (å°èŠæš¡ãªã¢ãã«ã埮調æŽ) ã§ã¯ãããäœããããå€ãå¿
èŠã«ãªãå ŽåããããŸãã
ãã®åŒæ°ã¯ãã¢ãã«ã®æšè«é床ã«åœ±é¿ãäžããå¯èœæ§ããããŸãããã®ãã©ã¡ãŒã¿ãè©ŠããŠã¿ãŠããŠãŒã¹ã±ãŒã¹ã«æé©ãªãã©ã¡ãŒã¿ãèŠã€ããããšããå§ãããŸãã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
#### Skip the conversion of some modules
äžéšã®ã¢ãã«ã«ã¯ãå®å®æ§ã確ä¿ããããã« 8 ãããã«å€æããªãæ¹ãããã¢ãžã¥ãŒã«ãããã€ããããŸããããšãã°ãJukebox ã¢ãã«ã«ã¯ãã¹ãããããå¿
èŠãããè€æ°ã® `lm_head` ã¢ãžã¥ãŒã«ããããŸãã`llm_int8_skip_modules` ã§èª¿æŽããŠã¿ãŠãã ããã
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
#### Fine-tune a model that has been loaded in 8-bit
Hugging Face ãšã³ã·ã¹ãã ã«ãããã¢ããã¿ãŒã®å
¬åŒãµããŒãã«ããã8 ãããã§ããŒããããã¢ãã«ã埮調æŽã§ããŸãã
ããã«ãããåäžã® Google Colab 㧠`flan-t5-large` ã `facebook/opt-6.7b` ãªã©ã®å€§èŠæš¡ã¢ãã«ã埮調æŽããããšãã§ããŸãã詳现ã«ã€ããŠã¯ã[`peft`](https://github.com/huggingface/peft) ã©ã€ãã©ãªãã芧ãã ããã
ãã¬ãŒãã³ã°çšã®ã¢ãã«ãããŒããããšãã« `device_map` ãæž¡ãå¿
èŠããªãããšã«æ³šæããŠãã ãããã¢ãã«ã¯ GPU ã«èªåçã«ããŒããããŸããå¿
èŠã«å¿ããŠãããã€ã¹ ããããç¹å®ã®ããã€ã¹ã«èšå®ããããšãã§ããŸã (äŸ: `cuda:0`ã`0`ã`torch.device('cuda:0')`)ã`device_map=auto` ã¯æšè«ã®ã¿ã«äœ¿çšããå¿
èŠãããããšã«æ³šæããŠãã ããã
### BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
## Quantization with ð€ `optimum`
`optimum` ã§ãµããŒããããŠããéååæ¹æ³ã®è©³çŽ°ã«ã€ããŠã¯ã[Optimum ããã¥ã¡ã³ã](https://huggingface.co/docs/optimum/index) ãåç
§ããããããèªåã®ãŠãŒã¹ã±ãŒã¹ã«é©çšã§ãããã©ããã確èªããŠãã ããã
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/keras_callbacks.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Keras callbacks
Keras ã䜿çšã㊠Transformers ã¢ãã«ããã¬ãŒãã³ã°ããå Žåãäžè¬çãªåŠçãèªååããããã«äœ¿çšã§ããã©ã€ãã©ãªåºæã®ã³ãŒã«ããã¯ãããã€ããããŸãã
ã¿ã¹ã¯:
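以äžã¯ã[`PushToHubCallback`] ã䜿çšããŠãšããã¯ããšã«ã¢ãã«ãããã«ããã·ã¥ããæå°éã®ã¹ã±ããã§ãã`model`ã`tokenizer`ã`tf_train_dataset`ãããã«ã® ID ã¯ä»®å®ã§ãã
```python
from transformers.keras_callbacks import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="./model_save",
    tokenizer=tokenizer,
    hub_model_id="your-username/my-awesome-model",  # ããã·ã¥å
ã®ãªããžã㪠(ä»®å®)
)

# Keras ã®éåžžã®ã³ãŒã«ããã¯ãšããŠæž¡ããŸã
model.fit(tf_train_dataset, epochs=3, callbacks=[push_to_hub_callback])
```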
## KerasMetricCallback
[[autodoc]] KerasMetricCallback
## PushToHubCallback
[[autodoc]] PushToHubCallback
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/callback.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ã³ãŒã«ããã¯
ã³ãŒã«ããã¯ã¯ãPyTorch ã® [`Trainer`] ã®ãã¬ãŒãã³ã° ã«ãŒãã®åäœãã«ã¹ã¿ãã€ãºã§ãããªããžã§ã¯ãã§ã (ãã®æ©èœã¯ TensorFlow ã«ã¯ãŸã å®è£
ãããŠããŸãã)ããã¬ãŒãã³ã° ã«ãŒãã®ç¶æ
ãæ€æ»ã㊠(é²æã¬ããŒããTensorBoard ãä»ã® ML ãã©ãããã©ãŒã ãžã®ãã°èšé²ãªã©)ã決å®ãäžãããšãã§ããŸã (æ©æåæ¢ãªã©)ã
ã³ãŒã«ããã¯ã¯ãè¿ããã [`TrainerControl`] ãªããžã§ã¯ããé€ãã°ããèªã¿åãå°çšãã®ã³ãŒãéšåã§ããããã¬ãŒãã³ã° ã«ãŒãå
ã§ã¯äœãå€æŽã§ããŸããããã¬ãŒãã³ã° ã«ãŒãã®å€æŽãå¿
èŠãªã«ã¹ã¿ãã€ãºã®å Žåã¯ã[`Trainer`] ããµãã¯ã©ã¹åããå¿
èŠãªã¡ãœããããªãŒããŒã©ã€ãããå¿
èŠããããŸã (äŸã«ã€ããŠã¯ã[trainer](trainer) ãåç
§ããŠãã ãã)ã
ããã©ã«ãã§ã¯ã`TrainingArguments.report_to` 㯠`"all"` ã«èšå®ãããŠããããã[`Trainer`] ã¯æ¬¡ã®ã³ãŒã«ããã¯ã䜿çšããŸãã
- [`DefaultFlowCallback`] ã¯ããã°èšé²ãä¿åãè©äŸ¡ã®ããã©ã«ãã®åäœãåŠçããŸãã
- [`PrinterCallback`] ãŸã㯠[`ProgressCallback`] ã§é²è¡ç¶æ³ã衚瀺ãã
ãã° (æåã®ãã°ã¯ã[`TrainingArguments`] ãéã㊠tqdm ãéã¢ã¯ãã£ãåããå Žåã«äœ¿çšãããããã§ãªãå Žåã«äœ¿çšãããŸã)
2çªç®ã§ã)ã
- [`~integrations.TensorBoardCallback`] (PyTorch >= 1.4 ãä»ããŠ) tensorboard ã«ã¢ã¯ã»ã¹ã§ããå Žå
ãŸãã¯ãã³ãœã«ããŒãXïŒã
- [`~integrations.WandbCallback`] [wandb](https://www.wandb.com/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.CometCallback`] [comet_ml](https://www.comet.ml/site/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [mlflow](https://www.mlflow.org/) ãã€ã³ã¹ããŒã«ãããŠããå Žå㯠[`~integrations.MLflowCallback`]ã
- [`~integrations.NeptuneCallback`] [neptune](https://neptune.ai/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.AzureMLCallback`] [azureml-sdk](https://pypi.org/project/azureml-sdk/) ã®å Žå
ã€ã³ã¹ããŒã«ãããŠããŸãã
- [`~integrations.CodeCarbonCallback`] [codecarbon](https://pypi.org/project/codecarbon/) ã®å Žå
ã€ã³ã¹ããŒã«ãããŠããŸãã
- [`~integrations.ClearMLCallback`] [clearml](https://github.com/allegroai/clearml) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.DagsHubCallback`] [dagshub](https://dagshub.com/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.FlyteCallback`] [flyte](https://flyte.org/) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
- [`~integrations.DVCLiveCallback`] [dvclive](https://www.dvc.org/doc/dvclive) ãã€ã³ã¹ããŒã«ãããŠããå Žåã
ããã±ãŒãžãã€ã³ã¹ããŒã«ãããŠããŠãããããã«ä»éããçµ±åã䜿çšããããªãå Žåã¯ã`TrainingArguments.report_to` ãã䜿çšãããçµ±åã®ã¿ã®ãªã¹ãã«å€æŽã§ããŸã (äŸ: `["azure_ml", "wandb"]`)ã
ã³ãŒã«ããã¯ãå®è£
ããã¡ã€ã³ã¯ã©ã¹ã¯ [`TrainerCallback`] ã§ããã³ãŒã«ããã¯ã¯ã[`Trainer`] ã®ã€ã³ã¹ã¿ã³ã¹åã«äœ¿çšããã [`TrainingArguments`] ã«ã¢ã¯ã»ã¹ã§ãã[`TrainerState`] ãä»ããŠãã¬ãŒããŒã®å
éšç¶æ
ãååŸãã[`TrainerControl`] ãä»ããŠãã¬ãŒãã³ã° ã«ãŒãäžã§ããã€ãã®ã¢ã¯ã·ã§ã³ãå®è¡ã§ããŸãã
## å©çšå¯èœãªã³ãŒã«ããã¯
ã©ã€ãã©ãªã§å©çšå¯èœãª [`TrainerCallback`] ã®ãªã¹ãã¯æ¬¡ã®ãšããã§ãã
[[autodoc]] integrations.CometCallback
- setup
[[autodoc]] DefaultFlowCallback
[[autodoc]] PrinterCallback
[[autodoc]] ProgressCallback
[[autodoc]] EarlyStoppingCallback
[[autodoc]] integrations.TensorBoardCallback
[[autodoc]] integrations.WandbCallback
- setup
[[autodoc]] integrations.MLflowCallback
- setup
[[autodoc]] integrations.AzureMLCallback
[[autodoc]] integrations.CodeCarbonCallback
[[autodoc]] integrations.NeptuneCallback
[[autodoc]] integrations.ClearMLCallback
[[autodoc]] integrations.DagsHubCallback
[[autodoc]] integrations.FlyteCallback
[[autodoc]] integrations.DVCLiveCallback
- setup
## TrainerCallback
[[autodoc]] TrainerCallback
以äžã¯ãã«ã¹ã¿ã ã³ãŒã«ããã¯ã PyTorch [`Trainer`] ã«ç»é²ããæ¹æ³ã®äŸã§ãã
```python
class MyCallback(TrainerCallback):
"A callback that prints a message at the beginning of training"
def on_train_begin(self, args, state, control, **kwargs):
print("Starting training")
trainer = Trainer(
model,
args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
callbacks=[MyCallback], # We can either pass the callback class this way or an instance of it (MyCallback())
)
```
ã³ãŒã«ããã¯ãç»é²ããå¥ã®æ¹æ³ã¯ã次ã®ããã« `trainer.add_callback()` ãåŒã³åºãããšã§ãã
```python
trainer = Trainer(...)
trainer.add_callback(MyCallback)
# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
```
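ããäžã€ã®äŸãšããŠã以äžã¯ã©ã€ãã©ãªä»å±ã® [`EarlyStoppingCallback`] ãç»é²ããæå°éã®ã¹ã±ããã§ã (`model` ãšåããŒã¿ã»ããã¯ä»®å®ã§ããæ©æåæ¢ã«ã¯å®æçãªè©äŸ¡ãš `load_best_model_at_end=True` ãå¿
èŠã§ã)ã
```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    # è©äŸ¡æ指æšã 3 åé£ç¶ã§æ¹åããªããã°ãã¬ãŒãã³ã°ãåæ¢
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```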
## TrainerState
[[autodoc]] TrainerState
## TrainerControl
[[autodoc]] TrainerControl
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/configuration.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# æ§æ
åºæ¬ã¯ã©ã¹ [`PretrainedConfig`] ã¯ãæ§æãããŒã/ä¿åããããã®å
±éã®ã¡ãœãããå®è£
ããŸããæ§æã¯ãããŒã«ã« ãã¡ã€ã«ãŸãã¯ãã£ã¬ã¯ããªãããããããã¯ã©ã€ãã©ãªãæäŸããäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«æ§æ (HuggingFace ã® AWS S3 ãªããžããªããããŠã³ããŒããããã®) ããèªã¿èŸŒãããšãã§ããŸãã
å掟çæ§æã¯ã©ã¹ã¯ã¢ãã«åºæã®å±æ§ãå®è£
ããŸãããã¹ãŠã®æ§æã¯ã©ã¹ã«ååšããå
±éã®å±æ§ã¯ `hidden_size`ã`num_attention_heads`ãããã³ `num_hidden_layers` ã§ããããã¹ã ã¢ãã«ã¯ããã«ä»¥äžãå®è£
ããŸãã
`vocab_size`ã
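以äžã¯ãæ§æãããŒãããŠå±æ§ãäžæžãããä¿åã»åèªã¿èŸŒã¿ããæå°éã®äŸã§ã (ã¢ãã«åãšä¿åå
ãã£ã¬ã¯ããªã¯äžäŸã§ã)ã
```python
from transformers import BertConfig

# ãããã®äºåãã¬ãŒãã³ã°æžã¿æ§æãããŒããã
config = BertConfig.from_pretrained("bert-base-uncased")

# ã¢ãã«åºæã®å±æ§ãäžæžããã
config.num_hidden_layers = 6

# ããŒã«ã«ã«ä¿åããåèªã¿èŸŒã¿ãã
config.save_pretrained("./my-bert-config")
reloaded = BertConfig.from_pretrained("./my-bert-config")
```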
## PretrainedConfig
[[autodoc]] PretrainedConfig
- push_to_hub
- all
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/agent.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ãšãŒãžã§ã³ããšããŒã«
<Tip warning={true}>
Transformers Agents ã¯å®éšç㪠API ã§ããããã€ã§ãå€æŽãããå¯èœæ§ããããŸããAPI ãåºç€ãšãªãã¢ãã«ã¯å€æŽãããããããããšãŒãžã§ã³ãããè¿ãããçµæãå€ããå¯èœæ§ããããŸãã
</Tip>
ãšãŒãžã§ã³ããšããŒã«ã®è©³çŽ°ã«ã€ããŠã¯ãå¿
ã [å
¥éã¬ã€ã](../transformers_agents) ãèªã¿ãã ããããã®ããŒãžã«ã¯ãåºç€ãšãªãã¯ã©ã¹ã® API ããã¥ã¡ã³ããå«ãŸããŠããŸãã
## ãšãŒãžã§ã³ã
ç§ãã¡ã¯ 3 çš®é¡ã®ãšãŒãžã§ã³ããæäŸããŸãã[`HfAgent`] ã¯ãªãŒãã³ãœãŒã¹ ã¢ãã«ã®æšè«ãšã³ããã€ã³ãã䜿çšãã[`LocalAgent`] ã¯éžæããã¢ãã«ãããŒã«ã«ã§äœ¿çšãã[`OpenAiAgent`] 㯠OpenAI ã¯ããŒãºã ã¢ãã«ã䜿çšããŸãã
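äŸãã°ã[`HfAgent`] ã¯æ¬¡ã®ããã«ã€ã³ã¹ã¿ã³ã¹åããŠå®è¡ã§ããŸã (ãšã³ããã€ã³ã URL ã¯äžäŸã§ãããå®éšç㪠API ã®ããå©çšå¯èœæ§ã¯å€ããå¯èœæ§ããããŸã)ã
```python
from transformers import HfAgent

# ãªãŒãã³ãœãŒã¹ ã¢ãã«ã®æšè«ãšã³ããã€ã³ãã䜿çšãããšãŒãžã§ã³ã
agent = HfAgent("https://api-inference.huggingface.co/models/bigscience/starcoder")
agent.run("Translate the following text to French: 'Hello, world!'")
```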
### HfAgent
[[autodoc]] HfAgent
### LocalAgent
[[autodoc]] LocalAgent
### OpenAiAgent
[[autodoc]] OpenAiAgent
### AzureOpenAiAgent
[[autodoc]] AzureOpenAiAgent
### Agent
[[autodoc]] Agent
- chat
- run
- prepare_for_new_chat
## Tools
### load_tool
[[autodoc]] load_tool
### Tool
[[autodoc]] Tool
### PipelineTool
[[autodoc]] PipelineTool
### RemoteTool
[[autodoc]] RemoteTool
### launch_gradio_demo
[[autodoc]] launch_gradio_demo
## ãšãŒãžã§ã³ãã®çš®é¡
ãšãŒãžã§ã³ãã¯ãããŒã«éã§ãããããããçš®é¡ã®ãªããžã§ã¯ããåŠçã§ããŸããããŒã«ã¯å®å
šã«ãã«ãã¢ãŒãã«ã§ãããããã¹ããç»åããªãŒãã£ãªããããªãªã©ã®ã¿ã€ããåãåã£ããè¿ããããã§ããŸããããŒã«éã®äºææ§ãé«ãããšãšãã«ããããã®æ»ãå€ã ipython (jupyterãcolabãipython ããŒãããã¯ãªã©) ã§æ£ããã¬ã³ããªã³ã°ããããã«ããããã®ã¿ã€ãã®åšãã«ã©ãã㌠ã¯ã©ã¹ãå®è£
ããŠããŸãã
ã©ããããããªããžã§ã¯ãã¯æåãšåãããã«åäœãç¶ããã¯ãã§ããããã¹ã ãªããžã§ã¯ãã¯äŸç¶ãšããŠæååãšããŠåäœããç»åãªããžã§ã¯ãã¯äŸç¶ãšã㊠`PIL.Image` ãšããŠåäœããã¯ãã§ãã
ãããã®ã¿ã€ãã«ã¯ã次㮠3 ã€ã®ç¹å®ã®ç®çããããŸãã
- åã«å¯Ÿã㊠`to_raw` ãåŒã³åºããšãåºã«ãªããªããžã§ã¯ããè¿ãããŸã
- åã«å¯Ÿã㊠`to_string` ãåŒã³åºããšããªããžã§ã¯ããæååãšããŠè¿ããŸãã`AgentText` ã®å Žåã¯ãã®æååèªäœã§ãããä»ã®ã€ã³ã¹ã¿ã³ã¹ã§ã¯ãªããžã§ã¯ãã®ã·ãªã¢ã«åãããããŒãžã§ã³ã®ãã¹ã«ãªããŸã
- ipython ã«ãŒãã«ã§è¡šç€ºãããšããªããžã§ã¯ããæ£ãã衚瀺ãããŸã
### AgentText
[[autodoc]] transformers.tools.agent_types.AgentText
### AgentImage
[[autodoc]] transformers.tools.agent_types.AgentImage
### AgentAudio
[[autodoc]] transformers.tools.agent_types.AgentAudio
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/model.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Models
ããŒã¹ã¯ã©ã¹ã§ãã [`PreTrainedModel`]ã[`TFPreTrainedModel`]ã[`FlaxPreTrainedModel`] ã¯ãã¢ãã«ã®èªã¿èŸŒã¿ãšä¿åã«é¢ããå
±éã®ã¡ãœãããå®è£
ããŠãããããã¯ããŒã«ã«ã®ãã¡ã€ã«ããã£ã¬ã¯ããªããããŸãã¯ã©ã€ãã©ãªãæäŸããäºååŠç¿ã¢ãã«æ§æïŒHuggingFace ã® AWS S3 ãªããžããªããããŠã³ããŒãïŒããã¢ãã«ãèªã¿èŸŒãããã«äœ¿çšã§ããŸãã
[`PreTrainedModel`] ãš [`TFPreTrainedModel`] ã¯ã次ã®å
±éã®ã¡ãœãããå®è£
ããŠããŸãïŒ
- èªåœã«æ°ããããŒã¯ã³ãè¿œå ãããå Žåã«ãå
¥åããŒã¯ã³åã蟌ã¿ã®ãªãµã€ãºãè¡ã
- ã¢ãã«ã®ã¢ãã³ã·ã§ã³ããããåã蟌ã
åã¢ãã«ã«å
±éãããã®ä»ã®ã¡ãœããã¯ã[`~modeling_utils.ModuleUtilsMixin`]ïŒPyTorch ã¢ãã«çšïŒããã³ [`~modeling_tf_utils.TFModuleUtilsMixin`]ïŒTensorFlow ã¢ãã«çšïŒã§å®çŸ©ãããŠãããããã¹ãçæã®å Žåã¯ [`~generation.GenerationMixin`]ïŒPyTorch ã¢ãã«çšïŒã[`~generation.TFGenerationMixin`]ïŒTensorFlow ã¢ãã«çšïŒãããã³ [`~generation.FlaxGenerationMixin`]ïŒFlax/JAX ã¢ãã«çšïŒããããŸãã
## PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
<a id='from_pretrained-torch-dtype'></a>
### 倧èŠæš¡ã¢ãã«ã®èªã¿èŸŒã¿
Transformers 4.20.0 ã§ã¯ã[`~PreTrainedModel.from_pretrained`] ã¡ãœãããåèšèšããã[Accelerate](https://huggingface.co/docs/accelerate/big_modeling) ã䜿çšããŠå€§èŠæš¡ã¢ãã«ãæ±ãããšãå¯èœã«ãªããŸãããããã«ã¯ Accelerate >= 0.9.0 ãš PyTorch >= 1.9.0 ãå¿
èŠã§ãã以åã®æ¹æ³ã§ãã«ã¢ãã«ãäœæãããã®åŸäºååŠç¿ã®éã¿ãèªã¿èŸŒã代ããã«ïŒããã«ã¯ã¡ã¢ãªå
ã«ã¢ãã«ãµã€ãºã® 2 åã®é åãå¿
èŠã§ãã©ã³ãã ã«åæåãããã¢ãã«çšãšéã¿çšã® 2 ã€ãå¿
èŠã§ããïŒãã¢ãã«ã空ã®å€æ®»ãšããŠäœæããäºååŠç¿ã®éã¿ãèªã¿èŸŒãŸãããšãã«ãã©ã¡ãŒã¿ãŒãå®äœåãããªãã·ã§ã³ãè¿œå ãããŸããã
ãã®ãªãã·ã§ã³ã¯ `low_cpu_mem_usage=True` ã§æå¹ã«ã§ããŸããã¢ãã«ã¯ãŸã空ã®éã¿ãæã€ã¡ã¿ããã€ã¹äžã«äœæããããã®åŸç¶æ
èŸæžãå
éšã«èªã¿èŸŒãŸããŸãïŒã·ã£ãŒãããããã§ãã¯ãã€ã³ãã®å Žåãã·ã£ãŒãããšã«èªã¿èŸŒãŸããŸãïŒããã®æ¹æ³ã§äœ¿çšãããæ倧 RAM ã¯ãã¢ãã«ã®å®å
šãªãµã€ãºã ãã§ãã
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```
ããã«ãã¢ãã«ãå®å
šã« RAM ã«åãŸããªãå ŽåïŒçŸæç¹ã§ã¯æšè«ã®ã¿æå¹ïŒãç°ãªãããã€ã¹ã«ã¢ãã«ãçŽæ¥é
眮ã§ããŸãã`device_map="auto"` ã䜿çšãããšãAccelerate ã¯åã¬ã€ã€ãŒãã©ã®ããã€ã¹ã«é
眮ãããã決å®ããæéã®ããã€ã¹ïŒGPUïŒãæ倧éã«æŽ»çšããæ®ãã®éšåã CPUãããã«ã¯ GPU RAM ãäžè¶³ããŠããå Žåã¯ããŒããã©ã€ãã«ãªãããŒãããŸããã¢ãã«ãè€æ°ã®ããã€ã¹ã«åå²ãããŠããŠããéåžžã©ããå®è¡ãããŸãã
`device_map` ãæž¡ãéã`low_cpu_mem_usage` ã¯èªåçã« `True` ã«èšå®ãããããããããæå®ããå¿
èŠã¯ãããŸããã
```py
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
ã¢ãã«ãããã€ã¹éã§ã©ã®ããã«åå²ããããã¯ããã® `hf_device_map` å±æ§ãèŠãããšã§ç¢ºèªã§ããŸã:
```py
t0pp.hf_device_map
```
```python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
```
åããã©ãŒãããã«åŸã£ãŠãç¬èªã®ããã€ã¹ããããäœæããããšãã§ããŸãïŒã¬ã€ã€ãŒåããããã€ã¹ãžã®èŸæžã§ãïŒãã¢ãã«ã®ãã¹ãŠã®ãã©ã¡ãŒã¿ãæå®ãããããã€ã¹ã«ãããããå¿
èŠããããŸããã1 ã€ã®ã¬ã€ã€ãŒãå®å
šã«åãããã€ã¹ã«ããå Žåããã®ã¬ã€ã€ãŒã®ãµãã¢ãžã¥ãŒã«ã®ãã¹ãŠãã©ãã«è¡ããã®è©³çŽ°ã瀺ãå¿
èŠã¯ãããŸãããäŸãã°ã次ã®ããã€ã¹ããã㯠T0pp ã«é©ããŠããŸãïŒGPU ã¡ã¢ãªãããå ŽåïŒ:
```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```
ã¢ãã«ã®ã¡ã¢ãªãžã®åœ±é¿ãæå°éã«æãããã 1 ã€ã®æ¹æ³ã¯ãäœç²ŸåºŠã® dtype (`torch.float16` ãªã©) ã§ã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åãããã以äžã§èª¬æããçŽæ¥éååææ³ã䜿çšããããšã§ãã
### Model Instantiation dtype
Pytorch ã§ã¯ãã¢ãã«ã¯éåžž `torch.float32` 圢åŒã§ã€ã³ã¹ã¿ã³ã¹åãããŸããéã¿ã fp16 ã®ã¢ãã«ãããŒãããããšãããšã2 åã®ã¡ã¢ãªãå¿
èŠã«ãªããããåé¡ã«ãªãå¯èœæ§ããããŸãããã®å¶éãå
æããã«ã¯ã`torch_dtype` åŒæ°ã䜿çšããŠãç®çã® `dtype` ãæ瀺çã«æž¡ãããšãã§ããŸãã
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```
ãŸãã¯ãã¢ãã«ãåžžã«æé©ãªã¡ã¢ãª ãã¿ãŒã³ã§ããŒããããå Žåã¯ãç¹å¥ãªå€ `"auto"` ã䜿çšã§ããŸãã
ãããŠã`dtype` ã¯ã¢ãã«ã®éã¿ããèªåçã«å°åºãããŸãã
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
ã¹ã¯ã©ããããã€ã³ã¹ã¿ã³ã¹åãããã¢ãã«ã«ã¯ãã©ã® `dtype` ã䜿çšããããæ瀺ããããšãã§ããŸãã
```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```
Pytorch ã®èšèšã«ããããã®æ©èœã¯æµ®åå°æ°ç¹ dtype ã§ã®ã¿äœ¿çšã§ããŸãã
## ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
## TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
## TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
## FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
## Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
## Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/image_processor.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Image Processor
ç»åããã»ããµã¯ãããžã§ã³ ã¢ãã«ã®å
¥åç¹åŸŽã®æºåãšãã®åºåã®åŸåŠçãæ
åœããŸããããã«ã¯ããµã€ãºå€æŽãæ£èŠåãPyTorchãTensorFlowãFlaxãNumpy ãã³ãœã«ãžã®å€æãªã©ã®å€æãå«ãŸããŸããããžãããã»ã°ã¡ã³ããŒã·ã§ã³ ãã¹ã¯ã«å€æãããªã©ãã¢ãã«åºæã®åŸåŠçãå«ãŸããå ŽåããããŸãã
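以äžã¯ãç»åããã»ããµãããŒãã㊠1 æã®ç»åãååŠçããæå°éã®äŸã§ã (ãã§ãã¯ãã€ã³ãåãšç»å URL ã¯äžäŸã§ã)ã
```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# ãµã€ãºå€æŽã»æ£èŠåãè¡ããPyTorch ãã³ãœã«ãšããŠè¿ã
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # äŸ: torch.Size([1, 3, 224, 224])
```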
## ImageProcessingMixin
[[autodoc]] image_processing_utils.ImageProcessingMixin
- from_pretrained
- save_pretrained
## BatchFeature
[[autodoc]] BatchFeature
## BaseImageProcessor
[[autodoc]] image_processing_utils.BaseImageProcessor
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/processors.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Processors
Transformers ã©ã€ãã©ãªã§ã¯ãããã»ããµã¯ 2 ã€ã®ç°ãªãæå³ãæã¡ãŸãã
- [Wav2Vec2](../model_doc/wav2vec2) (é³å£°ãšããã¹ã) ã [CLIP](../model_doc/clip) (ããã¹ããšããžã§ã³) ãªã©ã®ãã«ãã¢ãŒãã« ã¢ãã«ã®å
¥åãååŠçãããªããžã§ã¯ã
- å€ãããŒãžã§ã³ã®ã©ã€ãã©ãªã§ GLUE ãŸã㯠SQUAD ã®ããŒã¿ãååŠçããããã«äœ¿çšãããŠãããªããžã§ã¯ã (éæšå¥šã«ãªããŸãã)
## Multi-modal processors
ãã«ãã¢ãŒãã« ã¢ãã«ã§ã¯ãè€æ°ã®ã¢ããªã㣠(ããã¹ããèŠèŠãé³å£°ã®ãã¡ã®ãããã) ã®ããŒã¿ããšã³ã³ãŒã/ãã³ãŒããããªããžã§ã¯ããå¿
èŠã§ãããããæ
åœããã®ãããã»ããµãšåŒã°ãããªããžã§ã¯ãã§ãããã»ããµã¯ãããŒã¯ãã€ã¶ãŒ (ããã¹ã ã¢ããªãã£çš)ãç»åããã»ããµãŒ (èŠèŠçš)ãç¹åŸŽæœåºåš (ãªãŒãã£ãªçš) ãªã©ã2 ã€ä»¥äžã®åŠçãªããžã§ã¯ããã°ã«ãŒãåããŸãã
ãããã®ããã»ããµã¯ãä¿åããã³ããŒãæ©èœãå®è£
ãã次ã®åºæ¬ã¯ã©ã¹ãç¶æ¿ããŸãã
[[autodoc]] ProcessorMixin
## Deprecated processors
ãã¹ãŠã®ããã»ããµã¯ã[`~data.processors.utils.DataProcessor`] ãšããåãã¢ãŒããã¯ãã£ã«åŸã£ãŠããŸããããã»ããµã¯ [`~data.processors.utils.InputExample`] ã®ãªã¹ããè¿ããŸãããããã® [`~data.processors.utils.InputExample`] ã¯ãã¢ãã«ã«ãã£ãŒãããããã« [`~data.processors.utils.InputFeatures`] ã«å€æã§ããŸãã
[[autodoc]] data.processors.utils.DataProcessor
[[autodoc]] data.processors.utils.InputExample
[[autodoc]] data.processors.utils.InputFeatures
## GLUE
[äžè¬èšèªç解è©äŸ¡ (GLUE)](https://gluebenchmark.com/) ã¯ãæ¢åã®å€æ§ãª NLU ã¿ã¹ã¯ã«ãããã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ãããã³ãããŒã¯ã§ããè«æ [GLUE: A multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7) ãšåæã«ãªãªãŒã¹ãããŸããã
ãã®ã©ã€ãã©ãªã¯ãMRPCãMNLIãMNLI (äžäžèŽ)ãCoLAãSST2ãSTSBãQQPãQNLIãRTEãWNLI ã®åã¿ã¹ã¯ã«å¯Ÿå¿ããããã»ããµãå®è£
ããŠããŸãã
ãããã®ããã»ããµã¯æ¬¡ã®ãšããã§ãã
- [`~data.processors.utils.MrpcProcessor`]
- [`~data.processors.utils.MnliProcessor`]
- [`~data.processors.utils.MnliMismatchedProcessor`]
- [`~data.processors.utils.Sst2Processor`]
- [`~data.processors.utils.StsbProcessor`]
- [`~data.processors.utils.QqpProcessor`]
- [`~data.processors.utils.QnliProcessor`]
- [`~data.processors.utils.RteProcessor`]
- [`~data.processors.utils.WnliProcessor`]
ããã«ã次ã®ã¡ãœããã䜿çšããŠãããŒã¿ ãã¡ã€ã«ããå€ãèªã¿èŸŒã¿ã[`~data.processors.utils.InputExample`] ã®ãªã¹ãã«å€æããããšãã§ããŸãã
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
## XNLI
[ã¯ãã¹ãªã³ã¬ã« NLI ã³ãŒãã¹ (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) ã¯ãèšèªãè¶
ããããã¹ãè¡šçŸã®å質ãè©äŸ¡ãããã³ãããŒã¯ã§ããXNLI ã¯ã[*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/) ã«åºã¥ãã¯ã©ãŠããœãŒã¹ã®ããŒã¿ã»ããã§ãïŒããã¹ãã®ãã¢ã«ã¯ãè±èªãªã©ã®é«ãªãœãŒã¹èšèªãšã¹ã¯ããªèªãªã©ã®äœãªãœãŒã¹èšèªã®äž¡æ¹ãå«ã 15 èšèªã§ãããã¹ãå«æã¢ãããŒã·ã§ã³ãã©ãã«ä»ããããŠããŸãã
è«æ [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053) ãšåæã«ãªãªãŒã¹ãããŸããã
ãã®ã©ã€ãã©ãªã¯ãXNLI ããŒã¿ãããŒãããããã»ããµããã¹ãããŸãã
- [`~data.processors.utils.XnliProcessor`]
ãã¹ãã»ããã«ã¯ãŽãŒã«ãã©ãã«ãä»ããŠãããããè©äŸ¡ã¯ãã¹ãã»ããã§è¡ãããŸãã®ã§ãäºæ¿ãã ããã
ãããã®ããã»ããµã䜿çšããäŸã¯ã[run_xnli.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification/run_xnli.py) ã¹ã¯ãªããã«ç€ºãããŠããŸãã
## SQuAD
[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) ã¯ã質åå¿çã«ãããã¢ãã«ã®ããã©ãŒãã³ã¹ãè©äŸ¡ãããã³ãããŒã¯ã§ããv1.1 ãš v2.0 ã® 2 ã€ã®ããŒãžã§ã³ãå©çšå¯èœã§ããæåã®ããŒãžã§ã³ (v1.1) ã¯ãè«æ [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250) ãšãšãã«ãªãªãŒã¹ãããŸããã2 çªç®ã®ããŒãžã§ã³ (v2.0) ã¯ãè«æ [Know What You Don't Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822) ãšåæã«ãªãªãŒã¹ãããŸããã
### Processors
ãããã®ããã»ããµã¯æ¬¡ã®ãšããã§ãã
- [`~data.processors.utils.SquadV1Processor`]
- [`~data.processors.utils.SquadV2Processor`]
ã©ã¡ããæœè±¡ã¯ã©ã¹ [`~data.processors.utils.SquadProcessor`] ãç¶æ¿ããŠããŸãã
[[autodoc]] data.processors.squad.SquadProcessor
- all
ããã«ã次ã®ã¡ãœããã䜿çšããŠãSQuAD ã®äŸããã¢ãã«ã®å
¥åãšããŠäœ¿çšã§ãã [`~data.processors.utils.SquadFeatures`] ã«å€æã§ããŸãã
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
ãããã®ããã»ããµãšåè¿°ã®ã¡ãœããã¯ãããŒã¿ãå«ããã¡ã€ã«ã ãã§ãªãã*tensorflow_datasets* ããã±ãŒãžãšãçµã¿åãããŠäœ¿çšã§ããŸãã以äžã«äŸã瀺ããŸãã
### Example usage
以äžã«ããã»ããµã䜿çšããäŸãšãããŒã¿ ãã¡ã€ã«ã䜿çšããå€ææ¹æ³ã瀺ããŸãã
```python
# Loading a V2 processor
processor = SquadV2Processor()
examples = processor.get_dev_examples(squad_v2_data_dir)
# Loading a V1 processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(squad_v1_data_dir)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
*tensorflow_datasets* ã®äœ¿çšã¯ãããŒã¿ ãã¡ã€ã«ã䜿çšããã®ãšåããããç°¡åã§ãã
```python
# tensorflow_datasets only handle Squad V1.
tfds_examples = tfds.load("squad")
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=args.doc_stride,
max_query_length=max_query_length,
is_training=not evaluate,
)
```
ãããã®ããã»ããµã䜿çšããå¥ã®äŸã¯ã[run_squad.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering/run_squad.py) ã¹ã¯ãªããã«ç€ºãããŠããŸãã
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/main_classes/trainer.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Trainer
[`Trainer`] ã¯ã©ã¹ã¯ãã»ãšãã©ã®æšæºçãªãŠãŒã¹ã±ãŒã¹ã«å¯ŸããŠãPyTorch ã§ã®ãã«æ©èœãªãã¬ãŒãã³ã°ã®ããã® API ãæäŸããŸããããã¯ã[ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples) ã®ã»ãšãã©ã§äœ¿çšãããŠããŸãã
[`Trainer`] ãã€ã³ã¹ã¿ã³ã¹åããåã«ããã¬ãŒãã³ã°äžã«ã«ã¹ã¿ãã€ãºã®ãã¹ãŠã®ãã€ã³ãã«ã¢ã¯ã»ã¹ããããã« [`TrainingArguments`] ãäœæããŸãã
ãã® API ã¯ãè€æ°ã® GPU/TPU ã§ã®åæ£ãã¬ãŒãã³ã°ã[NVIDIA Apex](https://github.com/NVIDIA/apex) ããã³ PyTorch ã®ãã€ãã£ã AMP ã«ããæ··å粟床ããµããŒãããŸãã
[`Trainer`] ã«ã¯ãäžèšã®æ©èœããµããŒãããåºæ¬çãªãã¬ãŒãã³ã° ã«ãŒããå«ãŸããŠããŸããã«ã¹ã¿ã åäœãæ¿å
¥ããã«ã¯ããããããµãã¯ã©ã¹åãã次ã®ã¡ãœããããªãŒããŒã©ã€ãããŸãã
- **get_train_dataloader** -- ãã¬ãŒãã³ã° ããŒã¿ããŒããŒãäœæããŸãã
- **get_eval_dataloader** -- è©äŸ¡çšããŒã¿ããŒããŒãäœæããŸãã
- **get_test_dataloader** -- ãã¹ã ããŒã¿ããŒããŒãäœæããŸãã
- **log** -- ãã¬ãŒãã³ã°ãç£èŠããŠããããŸããŸãªãªããžã§ã¯ãã«é¢ããæ
å ±ããã°ã«èšé²ããŸãã
- **create_optimizer_and_scheduler** -- ãªããã£ãã€ã¶ãšåŠç¿çã¹ã±ãžã¥ãŒã©ãåæåæã«æž¡ãããªãã£ãå Žåã«ã»ããã¢ããããŸãã`create_optimizer` ã¡ãœãããš `create_scheduler` ã¡ãœãããåå¥ã«ãµãã¯ã©ã¹åãŸãã¯ãªãŒããŒã©ã€ãããããšãã§ããããšã«æ³šæããŠãã ããã
- **create_optimizer** -- init ã§æž¡ãããªãã£ãå Žåã«ãªããã£ãã€ã¶ãŒãã»ããã¢ããããŸãã
- **create_scheduler** -- init ã§æž¡ãããªãã£ãå ŽåãåŠç¿çã¹ã±ãžã¥ãŒã©ãèšå®ããŸãã
- **compute_loss** - ãã¬ãŒãã³ã°å
¥åã®ãããã®æ倱ãèšç®ããŸãã
- **training_step** -- ãã¬ãŒãã³ã° ã¹ããããå®è¡ããŸãã
- **prediction_step** -- è©äŸ¡/ãã¹ã ã¹ããããå®è¡ããŸãã
- **evaluate** -- è©äŸ¡ã«ãŒããå®è¡ããã¡ããªã¯ã¹ãè¿ããŸãã
- **predict** -- ãã¹ã ã»ããã®äºæž¬ (ã©ãã«ã䜿çšå¯èœãªå Žåã¯ã¡ããªã¯ã¹ãå«ã) ãè¿ããŸãã
<Tip warning={true}>
[`Trainer`] ã¯ã©ã¹ã¯ ð€ Transformers ã¢ãã«çšã«æé©åãããŠãããä»ã®ã¢ãã«ã§äœ¿çšãããšé©ãã¹ãåäœãããå¯èœæ§ããããŸããç¬èªã®ã¢ãã«ã§äœ¿çšããå Žåã¯ã次ã®ç¹ã確èªããŠãã ããã
- ã¢ãã«ã¯åžžã« [`~utils.ModelOutput`] ã®ã¿ãã«ãŸãã¯ãµãã¯ã©ã¹ãè¿ããŸãã
- `labels` åŒæ°ãæå®ããããã®æ倱ãæåã®å€ (ã¢ãã«ãã¿ãã«ãè¿ãå Žåã¯ã¿ãã«ã®æåã®èŠçŽ ) ãšããŠè¿ãããå Žåãã¢ãã«ã¯æ倱ãèšç®ã§ããŸãã
- ã¢ãã«ã¯è€æ°ã®ã©ãã«åŒæ°ãåãå
¥ããããšãã§ããŸã ([`TrainingArguments`] 㧠`label_names` ã䜿çšããŠããã®ååã [`Trainer`] ã«ç€ºããŸã) ãããããã®ãããã«ã `"label"` ãšããååãä»ããå¿
èŠã¯ãããŸããã
</Tip>
以äžã¯ãå éæ倱ã䜿çšããããã« [`Trainer`] ãã«ã¹ã¿ãã€ãºããæ¹æ³ã®äŸã§ã (äžåè¡¡ãªãã¬ãŒãã³ã° ã»ãããããå Žåã«åœ¹ç«ã¡ãŸã)ã
```python
from torch import nn
from transformers import Trainer
class CustomTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
labels = inputs.pop("labels")
# forward pass
outputs = model(**inputs)
logits = outputs.get("logits")
# compute custom loss (suppose one has 3 labels with different weights)
loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0], device=model.device))
loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
return (loss, outputs) if return_outputs else loss
```
PyTorch [`Trainer`] ã®ãã¬ãŒãã³ã° ã«ãŒãã®åäœãã«ã¹ã¿ãã€ãºãããã 1 ã€ã®æ¹æ³ã¯ããã¬ãŒãã³ã° ã«ãŒãã®ç¶æ
ãæ€æ»ã (é²è¡ç¶æ³ã¬ããŒããTensorBoard ãä»ã® ML ãã©ãããã©ãŒã ã§ã®ãã°èšé²ãªã©)ã決å®ãäžããããšãã§ãã (æ©æåæ¢ãªã©) [ã³ãŒã«ããã¯](callbacks) ã䜿çšããããšã§ãã
## Trainer
[[autodoc]] Trainer
- all
## Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
## TrainingArguments
[[autodoc]] TrainingArguments
- all
## Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all
## Checkpoints
ããã©ã«ãã§ã¯ã[`Trainer`] ã¯ãã¹ãŠã®ãã§ãã¯ãã€ã³ãã[`TrainingArguments`] ã§èšå®ãã `output_dir` ã«ä¿åããŸãããããã¯ãxxx ããã¬ãŒãã³ã°æã®ã¹ãããæ°ãšããŠã`checkpoint-xxx` ãšããååã®ãµããã©ã«ããŒã«ä¿åãããŸãã
ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããã«ã¯ã次ã®ããããã䜿çšã㊠[`Trainer.train`] ãåŒã³åºããŸã (以äžã®äŸãåç
§)ã
- `resume_from_checkpoint=True` ã¯ãææ°ã®ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããŸã
- `resume_from_checkpoint=checkpoint_dir` ã¯ãæå®ãããã£ã¬ã¯ããªå
ã®ç¹å®ã®ãã§ãã¯ãã€ã³ããããã¬ãŒãã³ã°ãåéããŸã
ããã«ã`push_to_hub=True` ã䜿çšãããšãã¢ãã« ããã«ãã§ãã¯ãã€ã³ããç°¡åã«ä¿åã§ããŸããããã©ã«ãã§ã¯ãäžéãã§ãã¯ãã€ã³ãã«ä¿åããããã¹ãŠã®ã¢ãã«ã¯å¥ã®ã³ãããã«ä¿åãããŸãã (ãªããã£ãã€ã¶ãŒã®ç¶æ
ã¯ä¿åãããŸãã)ã[`TrainingArguments`] ã® `hub_strategy` å€ã次ã®ããããã«èšå®ããŠããã®åäœãå€æŽã§ããŸãã
- `"checkpoint"`: ææ°ã®ãã§ãã¯ãã€ã³ãã last-checkpoint ãšããååã®ãµããã©ã«ããŒã«ããã·ã¥ãããŸãã`trainer.train(resume_from_checkpoint="output_dir/last-checkpoint")` ã§ãã¬ãŒãã³ã°ãç°¡åã«åéã§ããŸãã
- `"all_checkpoints"`: ãã¹ãŠã®ãã§ãã¯ãã€ã³ãããåºåãã©ã«ããŒã«çŸããéãã«ããã·ã¥ãããŸã (ãããã£ãŠãæçµãªããžããªã«ã¯ãã©ã«ããŒããšã« 1 ã€ã®ãã§ãã¯ãã€ã³ã ãã©ã«ããŒãåŸãããŸã)
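äŸãã°ã次ã®ããã«åéã§ããŸã (`trainer` ãšåºåãã£ã¬ã¯ããªã¯ä»®å®ã§ã)ã
```python
# ææ°ã®ãã§ãã¯ãã€ã³ãããåé
trainer.train(resume_from_checkpoint=True)

# ç¹å®ã®ãã§ãã¯ãã€ã³ãããåé
trainer.train(resume_from_checkpoint="output_dir/checkpoint-1000")
```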
## Logging
ããã©ã«ãã§ã¯ã[`Trainer`] ã¯ã¡ã€ã³ããã»ã¹ã« `logging.INFO` ã䜿çšããã¬ããªã«ãããå Žåã«ã¯ `logging.WARNING` ã䜿çšããŸãã
ãããã®ããã©ã«ãã¯ã[`TrainingArguments`] ã® 5 ã€ã® `logging` ã¬ãã«ã®ããããã䜿çšããããã«ãªãŒããŒã©ã€ãã§ããŸãã
åŒæ°:
- `log_level` - ã¡ã€ã³ããã»ã¹çš
- `log_level_replica` - ã¬ããªã«çš
ããã«ã[`TrainingArguments`] ã® `log_on_each_node` ã `False` ã«èšå®ãããŠããå Žåãã¡ã€ã³ ããŒãã®ã¿ã
ã¡ã€ã³ ããã»ã¹ã®ãã° ã¬ãã«èšå®ã䜿çšãããšãä»ã®ãã¹ãŠã®ããŒãã¯ã¬ããªã«ã®ãã° ã¬ãã«èšå®ã䜿çšããŸãã
[`Trainer`] ã¯ã[`Trainer.__init__`] ã®äžã§ããŒãããšã« `transformers` ã®ãã° ã¬ãã«ãåå¥ã«èšå®ããããšã«æ³šæããŠãã ããããããã£ãŠã[`Trainer`] ãªããžã§ã¯ããäœæããåã«ä»ã® `transformers` æ©èœãå©çšããå Žåã¯ããã° ã¬ãã«ãããæ©ãèšå®ããããšããå§ãããŸã (次ã®äŸãåç
§)ã
ãããã¢ããªã±ãŒã·ã§ã³ã§äœ¿çšããæ¹æ³ã®äŸã次ã«ç€ºããŸãã
```python
[...]
logger = logging.getLogger(__name__)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
# set the main code and the modules it uses to the same log-level according to the node
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
trainer = Trainer(...)
```
ãããŠãã¡ã€ã³ ããŒããšä»ã®ãã¹ãŠã®ããŒãããéè€ããå¯èœæ§ãé«ãåºåãæãã«ãèŠå以äžã®ã¬ãã«ã®ã¿ã衚瀺ãããå Žåã¯ã次ã®ããã«å®è¡ã§ããŸãã
```bash
my_app.py ... --log_level warning --log_level_replica error
```
ãã«ãããŒãç°å¢ã§ãåããŒãã®ã¡ã€ã³ããã»ã¹ã®ãã°ãç¹°ãè¿ããããªãå Žåã¯ã次ã®ããã«ããŸãã
äžèšã次ã®ããã«å€æŽããŸãã
```bash
my_app.py ... --log_level warning --log_level_replica error --log_on_each_node 0
```
and then only the main process of the first node will log at the "warning" level, and all other processes on the main
node and all processes on other nodes will log at the "error" level.

If you need your application to be as quiet as possible you could do:
```bash
my_app.py ... --log_level error --log_level_replica error --log_on_each_node 0
```
(add `--log_on_each_node 0` if on a multi-node environment)
## Randomness
When resuming from a checkpoint generated by [`Trainer`], all efforts are made to restore the
_python_, _numpy_ and _pytorch_ RNG states to the same states as they were at the moment of saving that checkpoint,
which should make the "stop and resume" style of training as close as possible to non-stop training.

However, due to various default non-deterministic pytorch settings this might not fully work. If you want full
determinism please refer to [Controlling sources of randomness](https://pytorch.org/docs/stable/notes/randomness). As explained in the document, some of those settings
that make things deterministic (e.g. `torch.backends.cudnn.deterministic`) may slow things down, therefore this
can't be done by default, but you can enable those yourself if needed, as in the sketch below.
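A minimal sketch of opting into deterministic behavior before training starts; note this is the standard PyTorch API, not a [`Trainer`]-specific feature:

```python
import torch

# Opt in to (slower) deterministic behavior before any training code runs.
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Some CUDA ops additionally require an environment variable, e.g.:
#   export CUBLAS_WORKSPACE_CONFIG=:4096:8
```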
## Specific GPUs Selection
In this section we discuss how to tell your program which GPUs are to be used and in what order.

When using [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) to use only a subset of your GPUs, you simply specify the number of GPUs to use. For example, if you have 4 GPUs, but you wish to use the first 2 you can do:
```bash
torchrun --nproc_per_node=2 trainer-program.py ...
```
if you have either [`accelerate`](https://github.com/huggingface/accelerate) or [`deepspeed`](https://github.com/microsoft/DeepSpeed) installed you can also accomplish the same by using one of:
```bash
accelerate launch --num_processes 2 trainer-program.py ...
```
```bash
deepspeed --num_gpus 2 trainer-program.py ...
```
You don't need to use the Accelerate or [the Deepspeed integration](deepspeed) features to use these launchers.

Until now you were able to tell the program how many GPUs to use. Now let's discuss how to select specific GPUs and control their order.

The following environment variables help you control which GPUs to use and their order.
**`CUDA_VISIBLE_DEVICES`**
If you have multiple GPUs and you'd like to use only 1 or a few of those GPUs, set the environment variable `CUDA_VISIBLE_DEVICES` to a list of the GPUs to be used.

For example, let's say you have 4 GPUs: 0, 1, 2 and 3. To run only on the physical GPUs 0 and 2, you can do:
```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
```
ãããã£ãŠãpytorch 㯠2 ã€ã® GPU ã®ã¿ãèªèããç©ç GPU 0 ãš 2 ã¯ãããã `cuda:0` ãš `cuda:1` ã«ãããã³ã°ãããŸãã
é åºãå€æŽããããšãã§ããŸãã
```bash
CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ...
```
Here your physical GPUs 0 and 2 are mapped to `cuda:1` and `cuda:0` respectively. A quick Python check of this mapping is sketched after the next example.

The above examples were all for the `DistributedDataParallel` use pattern, but the same method works for [`DataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) as well:
```bash
CUDA_VISIBLE_DEVICES=2,0 python trainer-program.py ...
```
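To sanity-check the mapping from inside Python, one can query torch directly (a small sketch):

```python
import torch

# With CUDA_VISIBLE_DEVICES=2,0 set, torch sees exactly 2 devices
print(torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    # cuda:0 is physical GPU 2 and cuda:1 is physical GPU 0 in this example
    print(f"cuda:{i} ->", torch.cuda.get_device_name(i))
```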
To emulate an environment without GPUs, simply set this environment variable to an empty value like so:
```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
As with any environment variable, you can of course export those instead of adding these to the command line, as in:
```bash
export CUDA_VISIBLE_DEVICES=0,2
torchrun trainer-program.py ...
```
but this approach can be confusing since you may forget you set up the environment variable earlier and not understand why the wrong GPUs are used. Therefore, it's a common practice to set the environment variable just for a specific run on the same command line, as it's shown in most examples of this section.
**`CUDA_DEVICE_ORDER`**
There is an additional environment variable `CUDA_DEVICE_ORDER` that controls how the physical devices are ordered. The two choices are:

1. ordered by PCIe bus IDs (matches `nvidia-smi`'s order) - this is the default:
```bash
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
2. ordered by GPU compute capabilities:
```bash
export CUDA_DEVICE_ORDER=FASTEST_FIRST
```
Most of the time you don't need to care about this environment variable, but it's very helpful if you have a lopsided setup where an old and a new GPU are physically inserted in such a way that the slow older card appears to be first. One way to fix that is to swap the cards. But if you can't swap the cards (e.g., if the cooling of the devices is impacted), then setting `CUDA_DEVICE_ORDER=FASTEST_FIRST` will always put the newer, faster card first. It'll be somewhat confusing though, since `nvidia-smi` will still report them in the PCIe order.

The other solution to swapping the order is to use:
```bash
export CUDA_VISIBLE_DEVICES=1,0
```
In this example we are working with just 2 GPUs, but of course the same would apply to as many GPUs as your computer has.

Also, if you do set this environment variable it's best to set it in your `~/.bashrc` file or some other startup config file and forget about it.
## Trainer Integrations
The [`Trainer`] has been extended to support libraries that may dramatically improve your training time and fit much bigger models.

Currently it supports third party solutions, [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [PyTorch FSDP](https://pytorch.org/docs/stable/fsdp.html), which implement parts of the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054).

This provided support is new and experimental as of this writing. While the support for DeepSpeed and PyTorch FSDP is active and we welcome issues around it, we don't support the FairScale integration anymore since it has been integrated into PyTorch main (see the [PyTorch FSDP integration](#pytorch-fully-sharded-data-parallel)).
<a id='zero-install-notes'></a>
### CUDA Extension Installation Notes
As of this writing, Deepspeed requires compilation of CUDA C++ code before it can be used.

While all installation issues should be dealt with through the corresponding GitHub issues of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), there are a few common issues that one may encounter while building any PyTorch extension that needs to build CUDA extensions.

Therefore, if you encounter a CUDA-related build issue while doing the following:
```bash
pip install deepspeed
```
please read the following notes first.

In these notes we give examples of what to do when `pytorch` has been built with CUDA `10.2`. If your situation is
different remember to adjust the version number to the one you are after.
#### Possible problem #1
While Pytorch comes with its own CUDA toolkit, to build these two projects you must have an identical version of CUDA installed system-wide.

For example, if you installed `pytorch` with `cudatoolkit==10.2` in the Python environment, you also need to have
CUDA `10.2` installed system-wide.

The exact location may vary from system to system, but `/usr/local/cuda-10.2` is the most common location on many
Unix systems. When CUDA is correctly set up and added to the `PATH` environment variable, one can find the
installation location by doing:
```bash
which nvcc
```
If you don't have CUDA installed system-wide, install it first. You will find the instructions by using your favorite
search engine. For example, if you're on Ubuntu you may want to search for: [ubuntu cuda 10.2 install](https://www.google.com/search?q=ubuntu+cuda+10.2+install).
#### Possible problem #2
Another possible common problem is that you may have more than one CUDA toolkit installed system-wide. For example you
may have:
```bash
/usr/local/cuda-10.2
/usr/local/cuda-11.0
```
Now, in this situation you need to make sure that your `PATH` and `LD_LIBRARY_PATH` environment variables contain
the correct paths to the desired CUDA version. Typically, package installers will set these to contain whatever the
last version was installed. If you encounter the problem where the package build fails because it can't find the right
CUDA version despite it being installed system-wide, it means that you need to adjust the 2 aforementioned
environment variables.
First, you may look at their contents:
```bash
echo $PATH
echo $LD_LIBRARY_PATH
```
so you get a picture of what is inside.

It's possible that `LD_LIBRARY_PATH` is empty.

`PATH` lists the locations where executables can be found and `LD_LIBRARY_PATH` is where shared libraries are looked
for. In both cases, earlier entries have priority over the later ones. `:` is used to separate multiple entries.
Now, to tell the build program where to find the specific CUDA toolkit, insert the desired paths so that they are listed first by doing:
```bash
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
```
Note that we aren't overwriting the existing values, but prepending instead.

Of course, adjust the version number and the full path if need be. Check that the directories you assign actually do
exist. The `lib64` sub-directory is where the various CUDA `.so` objects, like `libcudart.so`, reside; it's unlikely that your
system will have it named differently, but if it is, adjust it to reflect your reality.
#### Possible problem #3
Some older CUDA versions may refuse to build with newer compilers. For example, you may have `gcc-9` but CUDA wants
`gcc-7`.

There are various ways to go about it.

If you can install the latest CUDA toolkit it typically should support the newer compiler.

Alternatively, you could install a lower version of the compiler in addition to the one you already have, or you may
already have it but it's not the default one, so the build system can't see it. If you have `gcc-7` installed but the
build system complains it can't find it, the following might do the trick:
```bash
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.2/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.2/bin/g++
```
Here, we are making a symlink to `gcc-7` from `/usr/local/cuda-10.2/bin/gcc` and since
`/usr/local/cuda-10.2/bin/` should be in the `PATH` environment variable (see the previous problem's solution), it
should find `gcc-7` (and `g++7`) and then the build will succeed.

As always, make sure to edit the paths in the example to match your situation.
### PyTorch Fully Sharded Data parallel
To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
We have integrated the latest PyTorch Fully Sharded Data Parallel (FSDP) training feature.
All you need to do is enable it through the config.

**Required PyTorch version for FSDP support**: PyTorch Nightly (or 1.12.0 if you read this after it has been released),
as model saving with FSDP activated is only available with recent fixes.
**Usage**:

- Make sure you have added the distributed launcher
`-m torch.distributed.launch --nproc_per_node=NUMBER_OF_GPUS_YOU_HAVE` if you haven't been using it already.
- **Sharding Strategy**:
  - FULL_SHARD : Shards optimizer states + gradients + model parameters across data parallel workers/GPUs.
    For this, add `--fsdp full_shard` to the command line arguments.
  - SHARD_GRAD_OP : Shards optimizer states + gradients across data parallel workers/GPUs.
    For this, add `--fsdp shard_grad_op` to the command line arguments.
  - NO_SHARD : No sharding. For this, add `--fsdp no_shard` to the command line arguments.
- To offload the parameters and gradients to the CPU,
  add `--fsdp "full_shard offload"` or `--fsdp "shard_grad_op offload"` to the command line arguments.
- To automatically recursively wrap layers with FSDP using `default_auto_wrap_policy`,
  add `--fsdp "full_shard auto_wrap"` or `--fsdp "shard_grad_op auto_wrap"` to the command line arguments.
- To enable both CPU offloading and auto wrapping,
  add `--fsdp "full_shard offload auto_wrap"` or `--fsdp "shard_grad_op offload auto_wrap"` to the command line arguments.
- The remaining FSDP config is passed via `--fsdp_config <path_to_fsdp_config.json>`. It is either the location of an
  FSDP json config file (e.g., `fsdp_config.json`) or an already loaded json file as a `dict` (a minimal sketch is shown after this list).
  - If auto wrapping is enabled, you can either use a transformer based auto wrap policy or a size based auto wrap policy.
    - For the transformer based auto wrap policy, it is recommended to specify `fsdp_transformer_layer_cls_to_wrap` in the config file. If not specified, the default value is `model._no_split_modules` when available.
      This specifies the list of transformer layer class names (case-sensitive) to wrap, e.g., [`BertLayer`], [`GPTJBlock`], [`T5Block`] ....
      This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units.
      Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers.
      The remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit.
      Therefore, use this for transformer based models.
    - For the size based auto wrap policy, please add `fsdp_min_num_params` in the config file.
      It specifies FSDP's minimum number of parameters for auto wrapping.
  - `fsdp_backward_prefetch` can be specified in the config file. It controls when to prefetch the next set of parameters.
    `backward_pre` and `backward_post` are available options.
    For more information refer to `torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch`.
  - `fsdp_forward_prefetch` can be specified in the config file. If `True`, FSDP
    explicitly prefetches the next upcoming all-gather while executing in the forward pass.
  - `limit_all_gathers` can be specified in the config file.
    If `True`, FSDP explicitly synchronizes the CPU thread to prevent too many in-flight all-gathers.
  - `activation_checkpointing` can be specified in the config file.
    If `True`, FSDP activation checkpointing is a technique to reduce memory usage by clearing activations of
    certain layers and recomputing them during a backward pass. Effectively, this trades extra computation time
    for reduced memory usage.
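As a rough sketch, the FSDP options can also be passed programmatically through [`TrainingArguments`]; the wrapped layer class, output directory, and config values below are illustrative assumptions for a BERT-like model:

```python
from transformers import TrainingArguments

# minimal illustrative config; adjust keys/values for your model
fsdp_config = {
    "fsdp_transformer_layer_cls_to_wrap": ["BertLayer"],
    "fsdp_backward_prefetch": "backward_pre",
    "limit_all_gathers": True,
}

training_args = TrainingArguments(
    output_dir="my-model",          # placeholder
    fsdp="full_shard auto_wrap",
    fsdp_config=fsdp_config,        # a dict or a path to an fsdp_config.json file
)
```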
**A few caveats to be aware of**

- It is incompatible with `generate`, and therefore incompatible with `--predict_with_generate`
  in all seq2seq/clm scripts (translation/summarization/clm etc.).
  Please refer to issue [#21667](https://github.com/huggingface/transformers/issues/21667).
### PyTorch/XLA Fully Sharded Data parallel
Good news for TPU users: PyTorch/XLA now supports FSDP.
All of the latest Fully Sharded Data Parallel (FSDP) training features are supported.
For more information, see [Scaling PyTorch models on Cloud TPUs with FSDP](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) and the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp).
All you need to do is enable it through the config.

**Required PyTorch/XLA version for FSDP support**: >=2.0

**Usage**:
`--fsdp "full shard"` ãã`--fsdp_config <path_to_fsdp_config.json>` ã«å ãããã次ã®å€æŽãšãšãã«æž¡ããŸãã
- PyTorch/XLA FSDP ãæå¹ã«ããã«ã¯ã`xla`ã`True`ã«èšå®ããå¿
èŠããããŸãã
- `xla_fsdp_settings` å€ã¯ãXLA FSDP ã©ããã³ã° ãã©ã¡ãŒã¿ãæ ŒçŽããèŸæžã§ãã
ãªãã·ã§ã³ã®å®å
šãªãªã¹ãã«ã€ããŠã¯ã[ãã¡ã](
https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_full_sharded_data_Parallel.py)ã
- `xla_fsdp_grad_ckpt`ã `True`ã®å Žåããã¹ãããã XLA FSDP ã§ã©ãããããåã¬ã€ã€ãŒäžã§åŸé
ãã§ãã¯ãã€ã³ãã䜿çšããŸãã
ãã®èšå®ã¯ãxla ãã©ã°ã true ã«èšå®ãããŠãããèªåã©ããã³ã° ããªã·ãŒãæå®ãããŠããå Žåã«ã®ã¿äœ¿çšã§ããŸãã
`fsdp_min_num_params` ãŸã㯠`fsdp_transformer_layer_cls_to_wrap`ã
- ãã©ã³ã¹ãã©ãŒã㌠ããŒã¹ã®èªåã©ãã ããªã·ãŒãŸãã¯ãµã€ãº ããŒã¹ã®èªåã©ãã ããªã·ãŒã®ããããã䜿çšã§ããŸãã
- ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåãæ§æãã¡ã€ã«ã§ `fsdp_transformer_layer_cls_to_wrap` ãæå®ããããšããå§ãããŸããæå®ããªãå Žåã䜿çšå¯èœãªå Žåãããã©ã«ãå€ã¯ `model._no_split_modules` ã«ãªããŸãã
ããã¯ãã©ãããããã©ã³ã¹ãã©ãŒããŒå±€ã¯ã©ã¹åã®ãªã¹ã (倧æåãšå°æåãåºå¥) ãæå®ããŸã (äŸ: [`BertLayer`]ã[`GPTJBlock`]ã[`T5Block`] ...)ã
éã¿ãå
±æãããµãã¢ãžã¥ãŒã« (åã蟌ã¿å±€ãªã©) ãç°ãªã FSDP ã©ããããããŠãããã«ãªããªãããã«ããå¿
èŠããããããããã¯éèŠã§ãã
ãã®ããªã·ãŒã䜿çšãããšããã«ãããã ã¢ãã³ã·ã§ã³ãšããã«ç¶ãããã€ãã® MLP ã¬ã€ã€ãŒãå«ããããã¯ããšã«ã©ããã³ã°ãçºçããŸãã
å
±æåã蟌ã¿ãå«ãæ®ãã®å±€ã¯ãåãæãå€åŽã® FSDP ãŠãããã«ã©ãããããã®ã䟿å©ã§ãã
ãããã£ãŠããã©ã³ã¹ããŒã¹ã®ã¢ãã«ã«ã¯ããã䜿çšããŠãã ããã
- ãµã€ãºããŒã¹ã®èªåã©ããããªã·ãŒã®å Žåã¯ãèšå®ãã¡ã€ã«ã«`fsdp_min_num_params`ãè¿œå ããŠãã ããã
èªåã©ããã³ã°ã®ããã® FSDP ã®ãã©ã¡ãŒã¿ã®æå°æ°ãæå®ããŸãã
### Using Trainer for accelerated PyTorch Training on Mac
With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training.
This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Apple's Metal Performance Shaders (MPS) as a backend for PyTorch enables this and can be used via the new `"mps"` device.
This will map computational graphs and primitives onto the MPS Graph framework and tuned kernels provided by MPS.
For more information, please refer to the official documents [Introducing Accelerated PyTorch Training on Mac](https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/)
and [MPS BACKEND](https://pytorch.org/docs/stable/notes/mps.html).
<Tip warning={false}>
We strongly recommend installing PyTorch >= 1.13 (the nightly version at the time of writing) on your MacOS machine.
It has major fixes related to model correctness and performance improvements for transformer based models.
Please refer to https://github.com/pytorch/pytorch/issues/82707 for more details.
</Tip>
**Benefits of training and inference using Apple Silicon chips**

1. Enables users to train larger networks or batch sizes locally
2. Reduces data retrieval latency and provides the GPU with direct access to the full memory store due to the unified memory architecture, therefore improving end-to-end performance.
3. Reduces costs associated with cloud-based development or the need for additional local GPUs.

**Pre-requisites**: To install torch with mps support,
please follow this nice medium article [GPU-Acceleration Comes to PyTorch on M1 Macs](https://medium.com/towards-data-science/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1).
**Usage**:

The `mps` device will be used by default if available, similar to the way the `cuda` device is used.
Therefore, no action from the user is required.
For example, you can run the official GLUE text classification task (from the root folder) using Apple Silicon GPU with the below command:
```bash
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path google-bert/bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
**A few caveats to be aware of**

1. Some PyTorch operations are not implemented in mps and will throw an error.
One way to get around that is to set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1`,
which will fall back to CPU for these operations. It still throws a UserWarning, however.
2. Distributed setups `gloo` and `nccl` do not work with the `mps` device.
This means that currently only a single GPU of `mps` device type can be used.

Finally, please remember that 🀗 `Trainer` only integrates the MPS backend, therefore if you
have any problems or questions with regard to MPS backend usage, please
file an issue with [PyTorch GitHub](https://github.com/pytorch/pytorch/issues).
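A quick way to confirm that the MPS device is actually available before launching a training run (a small sketch):

```python
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    print("MPS device will be used")
else:
    device = torch.device("cpu")
    print("MPS not available; falling back to CPU")
```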
## Using Accelerate Launcher with Trainer
Accelerate now powers Trainer. In terms of what users should expect:
- They can keep using the Trainer integrations such as FSDP, DeepSpeed via trainer arguments without any changes on their part.
- They can now use Accelerate Launcher with Trainer (recommended).

Steps to use Accelerate Launcher with Trainer:
1. Make sure 🀗 Accelerate is installed; you can't use the `Trainer` without it. If not, `pip install accelerate`. You may also need to update the version of Accelerate: `pip install accelerate --upgrade`
2. Run `accelerate config` and fill the questionnaire. Below are example accelerate configs:
a. DDP Multi-node Multi-GPU config:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0 #change rank as per the node
main_process_ip: 192.168.20.1
main_process_port: 9898
main_training_function: main
mixed_precision: fp16
num_machines: 2
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
b. FSDP config:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_transformer_layer_cls_to_wrap: BertLayer
  fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
c. DeepSpeed config pointing to a file:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_config_file: /home/user/configs/ds_zero3_config.json
  zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
d. DeepSpeed config using accelerate plugin:
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 0.7
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
3. Run the Trainer script with args other than the ones handled above by the accelerate config or launcher args.
Below is an example to run `run_glue.py` using `accelerate launcher` with the FSDP config from above.
```bash
cd transformers
accelerate launch \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path google-bert/bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
4. You can also directly use the cmd args for `accelerate launch`. The above example would map to:
```bash
cd transformers
accelerate launch --num_processes=2 \
--use_fsdp \
--mixed_precision=bf16 \
--fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP \
--fsdp_transformer_layer_cls_to_wrap="BertLayer" \
--fsdp_sharding_strategy=1 \
--fsdp_state_dict_type=FULL_STATE_DICT \
./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path google-bert/bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 16 \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir
```
For more information, please refer to the 🀗 Accelerate CLI guide: [Launching your 🀗 Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch).

Sections that were moved:
[ <a href="./deepspeed#deepspeed-trainer-integration">DeepSpeed</a><a id="deepspeed"></a>
| <a href="./deepspeed#deepspeed-installation">Installation</a><a id="installation"></a>
| <a href="./deepspeed#deepspeed-multi-gpu">Deployment with multiple GPUs</a><a id="deployment-with-multiple-gpus"></a>
| <a href="./deepspeed#deepspeed-one-gpu">Deployment with one GPU</a><a id="deployment-with-one-gpu"></a>
| <a href="./deepspeed#deepspeed-notebook">Deployment in Notebooks</a><a id="deployment-in-notebooks"></a>
| <a href="./deepspeed#deepspeed-config">Configuration</a><a id="configuration"></a>
| <a href="./deepspeed#deepspeed-config-passing">Passing Configuration</a><a id="passing-configuration"></a>
| <a href="./deepspeed#deepspeed-config-shared">Shared Configuration</a><a id="shared-configuration"></a>
| <a href="./deepspeed#deepspeed-zero">ZeRO</a><a id="zero"></a>
| <a href="./deepspeed#deepspeed-zero2-config">ZeRO-2 Config</a><a id="zero-2-config"></a>
| <a href="./deepspeed#deepspeed-zero3-config">ZeRO-3 Config</a><a id="zero-3-config"></a>
| <a href="./deepspeed#deepspeed-nvme">NVMe Support</a><a id="nvme-support"></a>
| <a href="./deepspeed#deepspeed-zero2-zero3-performance">ZeRO-2 vs ZeRO-3 Performance</a><a id="zero-2-vs-zero-3-performance"></a>
| <a href="./deepspeed#deepspeed-zero2-example">ZeRO-2 Example</a><a id="zero-2-example"></a>
| <a href="./deepspeed#deepspeed-zero3-example">ZeRO-3 Example</a><a id="zero-3-example"></a>
| <a href="./deepspeed#deepspeed-optimizer">Optimizer</a><a id="optimizer"></a>
| <a href="./deepspeed#deepspeed-scheduler">Scheduler</a><a id="scheduler"></a>
| <a href="./deepspeed#deepspeed-fp32">fp32 Precision</a><a id="fp32-precision"></a>
| <a href="./deepspeed#deepspeed-amp">Automatic Mixed Precision</a><a id="automatic-mixed-precision"></a>
| <a href="./deepspeed#deepspeed-bs">Batch Size</a><a id="batch-size"></a>
| <a href="./deepspeed#deepspeed-grad-acc">Gradient Accumulation</a><a id="gradient-accumulation"></a>
| <a href="./deepspeed#deepspeed-grad-clip">Gradient Clipping</a><a id="gradient-clipping"></a>
| <a href="./deepspeed#deepspeed-weight-extraction">Getting The Model Weights Out</a><a id="getting-the-model-weights-out"></a>
]
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction from sequences, e.g., pre-processing audio files to Log-Mel Spectrogram features, feature extraction from images, e.g., cropping image files, but also padding, normalization, and conversion to Numpy, PyTorch, and TensorFlow tensors. A short usage sketch follows.
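As a small sketch, a pretrained feature extractor can be loaded with `from_pretrained` and applied to raw audio; the checkpoint name and the synthetic waveform below are illustrative:

```python
import numpy as np
from transformers import AutoFeatureExtractor

# "facebook/wav2vec2-base" is just an example checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# one second of silent 16 kHz audio, standing in for a real recording
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
print(inputs["input_values"].shape)
```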
## FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
## SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
## BatchFeature
[[autodoc]] BatchFeature
## ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library [🀗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:

1. a significant speed-up, in particular when doing batched tokenization, and
2. additional methods to map between the original string (characters and words) and the token space (e.g.,
   getting the index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]
implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
(downloaded from HuggingFace's AWS S3 repository). They both rely on
[`~tokenization_utils_base.PreTrainedTokenizerBase`], which contains the common methods, and
[`~tokenization_utils_base.SpecialTokensMixin`].

[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main
methods for using all the tokenizers:
- Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and
  encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
  tokenizer for easy access and making sure they are not split during tokenization.
[`BatchEncoding`] holds the output of the
[`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`,
`encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by the
HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition
several advanced alignment methods which can be used to map between the original string (characters and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token). These alignment methods are sketched in the example below.
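As a rough illustration of the alignment methods on a fast tokenizer (the checkpoint name is only an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")  # example checkpoint
encoding = tokenizer("Hello world!")

# which token covers the character at position 6 ("w" in "world")?
print(encoding.char_to_token(6))
# which span of characters does token 2 cover in the original string?
print(encoding.token_to_chars(2))
```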
## PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## PreTrainedTokenizerFast
[`PreTrainedTokenizerFast`] 㯠[tokenizers](https://huggingface.co/docs/tokenizers) ã©ã€ãã©ãªã«äŸåããŸãã ð€ ããŒã¯ãã€ã¶ãŒ ã©ã€ãã©ãªããååŸããããŒã¯ãã€ã¶ãŒã¯ã
ð€ ãã©ã³ã¹ã«éåžžã«ç°¡åã«ããŒããããŸãããããã©ã®ããã«è¡ãããããç解ããã«ã¯ã[ð€ tokenizers ããã® tokenizers ã䜿çšãã](../fast_tokenizers) ããŒãžãåç
§ããŠãã ããã
[[autodoc]] PreTrainedTokenizerFast
- __call__
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## BatchEncoding
[[autodoc]] BatchEncoding
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ALIGN
## Overview

The ALIGN model was proposed in [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with [EfficientNet](efficientnet) as its vision encoder and [BERT](bert) as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN shows that a huge noisy dataset combined with corpus scale can achieve state-of-the-art representations with a simple recipe.
The abstract from the paper is the following:
*Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.*
This model was contributed by [Alara Dirik](https://huggingface.co/adirik).
The original code is not released; this implementation is based on the Kakao Brain implementation based on the original paper.
## Usage example

ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.

[`AlignProcessor`] wraps [`EfficientNetImageProcessor`] and [`BertTokenizer`] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [`AlignProcessor`] and [`AlignModel`].
```python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
```
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.

- A blog post on [ALIGN and the COYO-700M dataset](https://huggingface.co/blog/vit-align).
- A zero-shot image classification [demo](https://huggingface.co/spaces/adirik/ALIGN-zero-shot-image-classification).
- The [model card](https://huggingface.co/kakaobrain/align-base) of the `kakaobrain/align-base` model.

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
## AlignConfig
[[autodoc]] AlignConfig
- from_text_vision_configs
## AlignTextConfig
[[autodoc]] AlignTextConfig
## AlignVisionConfig
[[autodoc]] AlignVisionConfig
## AlignProcessor
[[autodoc]] AlignProcessor
## AlignModel
[[autodoc]] AlignModel
- forward
- get_text_features
- get_image_features
## AlignTextModel
[[autodoc]] AlignTextModel
- forward
## AlignVisionModel
[[autodoc]] AlignVisionModel
- forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLAP
## Overview
The CLAP model was proposed in [Large Scale Contrastive Language-Audio pretraining with
feature fusion and keyword-to-caption augmentation](https://arxiv.org/pdf/2211.06687.pdf) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.

CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet, given an audio, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score.

The abstract from the paper is the following:
*Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in the text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-6**
This model was contributed by [Younes Belkada](https://huggingface.co/ybelkada) and [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/LAION-AI/Clap). A short usage sketch follows.
## ClapConfig
[[autodoc]] ClapConfig
- from_text_audio_configs
## ClapTextConfig
[[autodoc]] ClapTextConfig
## ClapAudioConfig
[[autodoc]] ClapAudioConfig
## ClapFeatureExtractor
[[autodoc]] ClapFeatureExtractor
## ClapProcessor
[[autodoc]] ClapProcessor
## ClapModel
[[autodoc]] ClapModel
- forward
- get_text_features
- get_audio_features
## ClapTextModel
[[autodoc]] ClapTextModel
- forward
## ClapTextModelWithProjection
[[autodoc]] ClapTextModelWithProjection
- forward
## ClapAudioModel
[[autodoc]] ClapAudioModel
- forward
## ClapAudioModelWithProjection
[[autodoc]] ClapAudioModelWithProjection
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLOOM
## Overview
The BLOOM model has been proposed with its various versions through the [BigScience Workshop](https://bigscience.huggingface.co/). BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially similar to GPT3 (an auto-regressive model for next token prediction), but it has been trained on 46 different languages and 13 programming languages.
Several smaller versions of the model have been trained on the same dataset. BLOOM is available in the following versions:
- [bloom-560m](https://huggingface.co/bigscience/bloom-560m)
- [bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
- [bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
- [bloom-3b](https://huggingface.co/bigscience/bloom-3b)
- [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
- [bloom](https://huggingface.co/bigscience/bloom) (176B parameters)
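A minimal text-generation sketch with the smallest checkpoint; the prompt is arbitrary:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
print(generator("Hello, I am a language model and", max_new_tokens=20)[0]["generated_text"])
```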
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-generation"/>
- [`BloomForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).

See also:
- [Causal language modeling task guide](../tasks/language_modeling)
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
⡀ Inference
- A blog on [Optimization story: Bloom inference](https://huggingface.co/blog/bloom-inference-optimization).
- A blog on [Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts).

⚙ Training
- A blog on [The Technology Behind BLOOM Training](https://huggingface.co/blog/bloom-megatron-deepspeed).
## BloomConfig
[[autodoc]] BloomConfig
- all
## BloomTokenizerFast
[[autodoc]] BloomTokenizerFast
- all
<frameworkcontent>
<pt>
## BloomModel
[[autodoc]] BloomModel
- forward
## BloomForCausalLM
[[autodoc]] BloomForCausalLM
- forward
## BloomForSequenceClassification
[[autodoc]] BloomForSequenceClassification
- forward
## BloomForTokenClassification
[[autodoc]] BloomForTokenClassification
- forward
## BloomForQuestionAnswering
[[autodoc]] BloomForQuestionAnswering
- forward
</pt>
<jax>
## FlaxBloomModel
[[autodoc]] FlaxBloomModel
- __call__
## FlaxBloomForCausalLM
[[autodoc]] FlaxBloomForCausalLM
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BEiT
## Overview
The BEiT model was proposed in [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by
Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of
Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class
of an image (as done in the [original ViT paper](https://arxiv.org/abs/2010.11929)), BEiT models are pre-trained to
predict visual tokens from the codebook of OpenAI's [DALL-E model](https://arxiv.org/abs/2102.12092) given masked
patches.

The abstract from the paper is the following:
*We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation
from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image
modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image
patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into
visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training
objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we
directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder.
Experimental results on image classification and semantic segmentation show that our model achieves competitive results
with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K,
significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains
86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).*
## Usage tips
- BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They
  outperform both the [original model (ViT)](vit) as well as [Data-efficient Image Transformers (DeiT)](deit) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as
  fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace
  [`ViTFeatureExtractor`] by [`BeitImageProcessor`] and
  [`ViTForImageClassification`] by [`BeitForImageClassification`]).
- There's also a demo notebook available which showcases how to combine DALL-E's image tokenizer with BEiT for
  performing masked image modeling. You can find it [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BEiT).
- As the BEiT models expect each image to be of the same size (resolution), one can use
  [`BeitImageProcessor`] to resize (or rescale) and normalize images for the model.
- Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
  each checkpoint. For example, `microsoft/beit-base-patch16-224` refers to a base-sized architecture with patch
  resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the [hub](https://huggingface.co/models?search=microsoft/beit).
- The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of
  14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million
  images and 1,000 classes).
- BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the
  relative position bias among the several self-attention layers. During fine-tuning, each layer's relative position
  bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to
  pre-train a model from scratch, one needs to set the `use_relative_position_bias`
  attribute of [`BeitConfig`] to `True` in order to add
  position embeddings.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/beit_architecture.jpg"
alt="drawing" width="600"/>
<small> BEiT pre-training. Taken from the <a href="https://arxiv.org/abs/2106.08254">original paper.</a> </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The JAX/FLAX version of this model was
contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/beit). A short inference sketch follows.
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.

<PipelineTag pipeline="image-classification"/>

- [`BeitForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

**Semantic segmentation**
- [Semantic segmentation task guide](../tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## BEiT specific outputs
[[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling
[[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling
## BeitConfig
[[autodoc]] BeitConfig
## BeitFeatureExtractor
[[autodoc]] BeitFeatureExtractor
- __call__
- post_process_semantic_segmentation
## BeitImageProcessor
[[autodoc]] BeitImageProcessor
- preprocess
- post_process_semantic_segmentation
## BeitModel
[[autodoc]] BeitModel
- forward
## BeitForMaskedImageModeling
[[autodoc]] BeitForMaskedImageModeling
- forward
## BeitForImageClassification
[[autodoc]] BeitForImageClassification
- forward
## BeitForSemanticSegmentation
[[autodoc]] BeitForSemanticSegmentation
- forward
## FlaxBeitModel
[[autodoc]] FlaxBeitModel
- __call__
## FlaxBeitForMaskedImageModeling
[[autodoc]] FlaxBeitForMaskedImageModeling
- __call__
## FlaxBeitForImageClassification
[[autodoc]] FlaxBeitForImageClassification
- __call__
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvBERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=convbert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-convbert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/conv-bert-base">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
The ConvBERT model was proposed in [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.

The abstract from the paper is the following:
*Pre-trained language models like BERT and its variants have recently achieved impressive performance in various
natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers
a large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for
generating the attention map from a global perspective, we observe that some heads only need to learn local dependencies,
which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to
replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the
rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context
learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that
ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and
fewer model parameters. Remarkably, the ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while
using less than 1/4 of the training cost. Code and pre-trained models will be released.*
This model was contributed by [abhishek](https://huggingface.co/abhishek). The original implementation can be found
here: https://github.com/yitu-opensource/ConvBert
## Usage tips
ConvBERT training tips are similar to those of BERT. For usage tips refer to the [BERT documentation](bert). A short usage sketch follows.
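A minimal sketch of extracting hidden states with the published `YituTech/conv-bert-base` checkpoint; the input sentence is arbitrary:

```python
import torch
from transformers import AutoTokenizer, ConvBertModel

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertModel.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```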
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## ConvBertConfig
[[autodoc]] ConvBertConfig
## ConvBertTokenizer
[[autodoc]] ConvBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## ConvBertTokenizerFast
[[autodoc]] ConvBertTokenizerFast
<frameworkcontent>
<pt>
## ConvBertModel
[[autodoc]] ConvBertModel
- forward
## ConvBertForMaskedLM
[[autodoc]] ConvBertForMaskedLM
- forward
## ConvBertForSequenceClassification
[[autodoc]] ConvBertForSequenceClassification
- forward
## ConvBertForMultipleChoice
[[autodoc]] ConvBertForMultipleChoice
- forward
## ConvBertForTokenClassification
[[autodoc]] ConvBertForTokenClassification
- forward
## ConvBertForQuestionAnswering
[[autodoc]] ConvBertForQuestionAnswering
- forward
</pt>
<tf>
## TFConvBertModel
[[autodoc]] TFConvBertModel
- call
## TFConvBertForMaskedLM
[[autodoc]] TFConvBertForMaskedLM
- call
## TFConvBertForSequenceClassification
[[autodoc]] TFConvBertForSequenceClassification
- call
## TFConvBertForMultipleChoice
[[autodoc]] TFConvBertForMultipleChoice
- call
## TFConvBertForTokenClassification
[[autodoc]] TFConvBertForTokenClassification
- call
## TFConvBertForQuestionAnswering
[[autodoc]] TFConvBertForQuestionAnswering
- call
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Deformable DETR
## Overview
The Deformable DETR model was proposed in [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original [DETR](detr) by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference.

The abstract from the paper is the following:
*DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png"
alt="drawing" width="600"/>

<small> Deformable DETR architecture. Taken from the <a href="https://arxiv.org/abs/2010.04159">original paper</a>.</small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/fundamentalvision/Deformable-DETR).
## Usage tips
- Training Deformable DETR is equivalent to training the original [DETR](detr) model. See the [resources](#resources) section below for demo notebooks. A short inference sketch follows.
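A minimal object-detection sketch with the published `SenseTime/deformable-detr` checkpoint; the image URL is the standard COCO example:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a 0.7 confidence threshold
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.7
)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```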
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.

<PipelineTag pipeline="object-detection"/>

- Demo notebooks regarding inference + fine-tuning on a custom dataset for [`DeformableDetrForObjectDetection`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Deformable-DETR).
- See also: [Object detection task guide](../tasks/object_detection).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## DeformableDetrImageProcessor
[[autodoc]] DeformableDetrImageProcessor
- preprocess
- post_process_object_detection
## DeformableDetrFeatureExtractor
[[autodoc]] DeformableDetrFeatureExtractor
- __call__
- post_process_object_detection
## DeformableDetrConfig
[[autodoc]] DeformableDetrConfig
## DeformableDetrModel
[[autodoc]] DeformableDetrModel
- forward
## DeformableDetrForObjectDetection
[[autodoc]] DeformableDetrForObjectDetection
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BARThez
## Overview
The BARThez model was proposed in [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020.

The abstract of the paper:
*Inductive transfer learning, enabled by self-supervised learning, has taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.*
このモデルは [moussakam](https://huggingface.co/moussakam) によっお寄皿されたした。著者のコヌドは [ここ](https://github.com/moussaKam/BARThez) にありたす。
<Tip>
BARThez の実装は、トヌクン化を陀いお BART ず同じです。詳现に぀いおは、[BART ドキュメント](bart) を参照しおください。構成クラスずそのパラメヌタ、および BARThez 固有のトヌクナむザヌに぀いおは以䞋に蚘茉されおいたす。
</Tip>
### Resources
- BARThez ã¯ãBART ãšåæ§ã®æ¹æ³ã§ã·ãŒã±ã³ã¹éã®ã¿ã¹ã¯ã埮調æŽã§ããŸãã以äžã確èªããŠãã ããã
[examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md)ã
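以䞋は、芁玄での掚論の最小限のスケッチです (OrangeSum で埮調敎枈みずされるチェックポむント名は䞀䟋です)。

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# チェックポむント名は䞀䟋
tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez-orangesum-abstract")
model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez-orangesum-abstract")

article = "Citant les préoccupations de ses clients, la société a annoncé ..."  # 芁玄したいフランス語の蚘事
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```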
## BarthezTokenizer
[[autodoc]] BarthezTokenizer
## BarthezTokenizerFast
[[autodoc]] BarthezTokenizerFast
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/convnextv2.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvNeXt V2
## Overview
ConvNeXt V2 ã¢ãã«ã¯ãSanghyun WooãShobhik DebnathãRonghang HuãXinlei ChenãZhuang Liu, In So Kweon, Saining Xie. ã«ãã£ãŠ [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) ã§ææ¡ãããŸããã
ConvNeXt V2 ã¯ãVision Transformers ã®èšèšããã€ã³ã¹ãã¬ãŒã·ã§ã³ãåŸãçŽç²ãªç³ã¿èŸŒã¿ã¢ãã« (ConvNet) ã§ããã[ConvNeXT](convnext) ã®åŸç¶ã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*アヌキテクチャの改善ず衚珟孊習フレヌムワヌクの改善により、芖芚認識の分野は 2020 幎代初頭に急速な近代化ずパフォヌマンスの向䞊を実珟したした。たずえば、ConvNeXt に代衚される最新の ConvNet は、さたざたなシナリオで匷力なパフォヌマンスを実蚌しおいたす。これらのモデルはもずもず ImageNet ラベルを䜿甚した教垫あり孊習甚に蚭蚈されたしたが、マスク オヌト゚ンコヌダヌ (MAE) などの自己教垫あり孊習手法からも朜圚的に恩恵を受けるこずができたす。ただし、これら 2 ぀のアプロヌチを単玔に組み合わせるず、パフォヌマンスが暙準以䞋になるこずがわかりたした。この論文では、完党畳み蟌みマスク オヌト゚ンコヌダ フレヌムワヌクず、チャネル間の機胜競合を匷化するために ConvNeXt アヌキテクチャに远加できる新しい Global Response Normalization (GRN) å±€を提案したす。この自己教垫あり孊習手法ずアヌキテクチャの改善の共同蚭蚈により、ConvNeXt V2 ず呌ばれる新しいモデル ファミリが誕生したした。これにより、ImageNet 分類、COCO 怜出、ADE20K セグメンテヌションなどのさたざたな認識ベンチマヌクにおける玔粋な ConvNet のパフォヌマンスが倧幅に向䞊したす。たた、公開トレヌニング デヌタのみを䜿甚しお、ImageNet でトップ 1 の粟床 76.7% を誇る効率的な 370 䞇パラメヌタの Atto モデルから、最先端の 88.9% の粟床を達成する 650M の Huge モデルたで、さたざたなサむズの事前トレヌニング枈み ConvNeXt V2 モデルを提䟛しおいたす。*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png"
alt="æç»" width="600"/>
<small> ConvNeXt V2 アヌキテクチャ。<a href="https://arxiv.org/abs/2301.00808">元の論文</a>から抜粋。</small>
このモデルは [adirik](https://huggingface.co/adirik) によっお提䟛されたした。元のコヌドは [こちら](https://github.com/facebookresearch/ConvNeXt-V2) にありたす。
## Resources
ConvNeXt V2 の䜿甚を開始するのに圹立぀公匏 Hugging Face およびコミュニティ (🌎 で瀺される) リ゜ヌスのリスト。
<PipelineTag pipeline="image-classification"/>
- [`ConvNextV2ForImageClassification`] は、この [サンプル スクリプト](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) および [ノヌトブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) によっおサポヌトされおいたす。
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
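以䞋は、画像分類での掚論の最小限のスケッチです (チェックポむント名は䞀䟋です)。

```python
import torch
import requests
from PIL import Image

from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# チェックポむント名は䞀䟋
processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# ImageNet-1k の 1,000 クラスのうち、最もスコアの高いクラスを衚瀺
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```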
## ConvNextV2Config
[[autodoc]] ConvNextV2Config
## ConvNextV2Model
[[autodoc]] ConvNextV2Model
- forward
## ConvNextV2ForImageClassification
[[autodoc]] ConvNextV2ForImageClassification
- forward
## TFConvNextV2Model
[[autodoc]] TFConvNextV2Model
- call
## TFConvNextV2ForImageClassification
[[autodoc]] TFConvNextV2ForImageClassification
- call
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/deta.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DETA
## Overview
DETA ã¢ãã«ã¯ã[NMS Strikes Back](https://arxiv.org/abs/2212.06137) 㧠Jeffrey Ouyang-ZhangãJang Hyun ChoãXingyi ZhouãPhilipp KrÀhenbÃŒhl ã«ãã£ãŠææ¡ãããŸããã
DETA (Detection Transformers with Assignment の略) は、1 察 1 の二郚ハンガリアン マッチング損倱を、非最倧抑制 (NMS) を備えた埓来の怜出噚で䜿甚される 1 察倚のラベル割り圓おに眮き換えるこずにより、[Deformable DETR](deformable_detr) を改善したす。これにより、最倧 2.5 mAP の倧幅な向䞊が埗られたす。
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*Detection Transformer (DETR) は、トレヌニング䞭に 1 察 1 の二郚マッチングを䜿甚しおク゚リを䞀意のオブゞェクトに盎接倉換し、゚ンドツヌ゚ンドのオブゞェクト怜出を可胜にしたす。最近、これらのモデルは、玛れもない優雅さで COCO の埓来の怜出噚を䞊回りたした。ただし、モデル アヌキテクチャやトレヌニング スケゞュヌルなど、さたざたな蚭蚈においお埓来の怜出噚ずは異なるため、1 察 1 マッチングの有効性は完党には理解されおいたせん。この研究では、DETR での 1 察 1 のハンガリアン マッチングず、非最倧抑制 (NMS) を備えた埓来の怜出噚での 1 察倚のラベル割り圓おずの間の厳密な比范を行いたす。驚くべきこずに、NMS を䜿甚した 1 察倚の割り圓おは、同じ蚭定の䞋で暙準的な 1 察 1 のマッチングよりも䞀貫しお優れおおり、最倧 2.5 mAP ずいう倧幅な向䞊が芋られたす。埓来の IoU ベヌスのラベル割り圓おを䜿甚しお Deformable-DETR をトレヌニングした圓瀟の怜出噚は、ResNet50 バックボヌンを䜿甚しお 12 ゚ポック (1x スケゞュヌル) 以内に 50.2 COCO mAP を達成し、この蚭定で既存のすべおの埓来の怜出噚たたはトランスベヌスの怜出噚を䞊回りたした。耇数のデヌタセット、スケゞュヌル、アヌキテクチャに関しお、私たちは䞀貫しお、パフォヌマンスの高い怜出トランスフォヌマヌには二郚マッチングが䞍芁であるこずを瀺しおいたす。さらに、怜出トランスの成功は、衚珟力豊かなトランス アヌキテクチャによるものであるず考えおいたす。*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/deta_architecture.jpg"
alt="drawing" width="600"/>
<small> DETA の抂芁。<a href="https://arxiv.org/abs/2212.06137">元の論文</a>から抜粋。</small>
このモデルは、[nielsr](https://huggingface.co/nielsr) によっお提䟛されたした。元のコヌドは [ここ](https://github.com/jozhang97/DETA) にありたす。
## Resources
DETA の䜿甚を開始するのに圹立぀公匏 Hugging Face およびコミュニティ (🌎 で瀺されおいる) リ゜ヌスのリスト。
- DETA ã®ã㢠ããŒãããã¯ã¯ [ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETA) ã«ãããŸãã
- 参照: [オブゞェクト怜出タスク ガむド](../tasks/object_detection)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
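以䞋は、オブゞェクト怜出での掚論の最小限のスケッチです (チェックポむント名ずしきい倀は䞀䟋です)。

```python
import torch
import requests
from PIL import Image

from transformers import AutoImageProcessor, DetaForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# チェックポむント名は䞀䟋
processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# 信頌床でフィルタし、ラベルずスコアを衚瀺する
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(model.config.id2label[label.item()], round(score.item(), 3))
```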
## DetaConfig
[[autodoc]] DetaConfig
## DetaImageProcessor
[[autodoc]] DetaImageProcessor
- preprocess
- post_process_object_detection
## DetaModel
[[autodoc]] DetaModel
- forward
## DetaForObjectDetection
[[autodoc]] DetaForObjectDetection
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/codegen.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CodeGen
## Overview
CodeGen モデルは、[A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) で Erik Nijkamp、Bo Pang、林宏明、Lifu Tu、Huan Wang、Yingbo Zhou、Silvio Savarese、Caiming Xiong によっお提案されたした。
CodeGen ã¯ã[The Pile](https://pile.eleuther.ai/)ãBigQueryãBigPython ã§é 次ãã¬ãŒãã³ã°ãããããã°ã©ã åæçšã®èªå·±ååž°èšèªã¢ãã«ã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*プログラム合成は、䞎えられた問題仕様の解決策ずしおコンピュヌタヌ プログラムを生成するこずを目的ずしおいたす。我々は、倧芏暡な蚀語モデルを介した䌚話型プログラム合成アプロヌチを提案したす。これは、埓来のアプロヌチで盎面した広倧なプログラム空間ずナヌザヌの意図の仕様を怜玢するずいう課題に察凊したす。私たちの新しいアプロヌチでは、仕様ずプログラムを䜜成するプロセスを、ナヌザヌずシステムの間の耇数回の察話ずしお捉えたす。これはプログラム合成をシヌケンス予枬問題ずしお扱い、仕様が自然蚀語で衚珟され、目的のプログラムが条件付きでサンプリングされたす。私たちは、自然蚀語ずプログラミング蚀語のデヌタに基づいお、CodeGen ず呌ばれる倧芏暡な蚀語モデルのファミリヌをトレヌニングしたす。デヌタの監芖が匱く、デヌタ サむズずモデル サむズが拡倧するず、単玔な自己回垰蚀語モデリングから䌚話胜力が生たれたす。䌚話型プログラム合成におけるモデルの動䜜を研究するために、マルチタヌン プログラミング ベンチマヌク (MTPB) を開発したした。このベンチマヌクでは、各問題を解決するには、ナヌザヌずモデル間のマルチタヌン䌚話を介したマルチステップ合成が必芁です。私たちの調査結果は、䌚話機胜の出珟ず、提案されおいる䌚話プログラム合成パラダむムの有効性を瀺しおいたす。さらに、私たちのモデル CodeGen (TPU-v4 でトレヌニングされた最倧 16B パラメヌタヌを含む) は、HumanEval ベンチマヌクで OpenAI の Codex を䞊回りたす。私たちはチェックポむントを含むトレヌニング ラむブラリ JaxFormer をオヌプン ゜ヌスのコントリビュヌションずしお利甚できるようにしおいたす: [この https URL](https://github.com/salesforce/codegen)*。
ãã®ã¢ãã«ã¯ [æ å®æ](https://huggingface.co/rooa) ã«ãã£ãŠå¯çš¿ãããŸããã
元のコヌドは [ここ](https://github.com/salesforce/codegen) にありたす。
## Checkpoint Naming
* CodeGen ã¢ãã« [ãã§ãã¯ãã€ã³ã](https://huggingface.co/models?other=codegen) ã¯ãå¯å€ãµã€ãºã®ããŸããŸãªäºåãã¬ãŒãã³ã° ããŒã¿ã§å©çšã§ããŸãã
* 圢åŒã¯ãSalesforce/codegen-{size}-{data}ãã§ããããã§ã
* `size`: `350M`ã`2B`ã`6B`ã`16B`
* `data`:
* `nl`: ãã€ã«ã§äºåãã¬ãŒãã³ã°æžã¿
* `multi`: `nl` ã§åæåãããè€æ°ã®ããã°ã©ãã³ã°èšèªããŒã¿ã§ããã«äºåãã¬ãŒãã³ã°ãããŸãã
* `mono`: `multi` ã§åæåãããPython ããŒã¿ã§ããã«äºåãã¬ãŒãã³ã°ãããŸãã
* ããšãã°ã`Salesforce/codegen-350M-mono` ã¯ãPileãè€æ°ã®ããã°ã©ãã³ã°èšèªãããã³ Python ã§é 次äºåãã¬ãŒãã³ã°ããã 3 å 5,000 äžã®ãã©ã¡ãŒã¿ãŒã®ãã§ãã¯ãã€ã³ããæäŸããŸãã
## Usage example
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> checkpoint = "Salesforce/codegen-350M-mono"
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> text = "def hello_world():"
>>> completion = model.generate(**tokenizer(text, return_tensors="pt"))
>>> print(tokenizer.decode(completion[0]))
def hello_world():
print("Hello World")
hello_world()
```
## Resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
## CodeGenConfig
[[autodoc]] CodeGenConfig
- all
## CodeGenTokenizer
[[autodoc]] CodeGenTokenizer
- save_vocabulary
## CodeGenTokenizerFast
[[autodoc]] CodeGenTokenizerFast
## CodeGenModel
[[autodoc]] CodeGenModel
- forward
## CodeGenForCausalLM
[[autodoc]] CodeGenForCausalLM
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/cvt.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Convolutional Vision Transformer (CvT)
## Overview
CvT モデルは、Haiping Wu、Bin Xiao、Noel Codella、Mengchen Liu、Xiyang Dai、Lu Yuan、Lei Zhang によっお [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) で提案されたした。畳み蟌みビゞョン トランスフォヌマヌ (CvT) は、ViT に畳み蟌みを導入しお䞡方の蚭蚈の長所を匕き出すこずにより、[ビゞョン トランスフォヌマヌ (ViT)](vit) のパフォヌマンスず効率を向䞊させたす。
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*この論文では、ビゞョン トランスフォヌマヌ (ViT) を改善した、畳み蟌みビゞョン トランスフォヌマヌ (CvT) ず呌ばれる新しいアヌキテクチャを玹介したす。 ViT に畳み蟌みを導入しお䞡方の蚭蚈の長所を匕き出すこずで、パフォヌマンスず効率を向䞊させたす。これは、新しい畳み蟌みトヌクン埋め蟌みを含むトランスフォヌマヌの階局ず、畳み蟌み射圱を利甚する畳み蟌みトランスフォヌマヌ ブロックずいう 2 ぀の䞻芁な倉曎によっお実珟されたす。これらの倉曎により、トランスフォヌマヌの利点 (動的な泚意力、グロヌバルなコンテキスト、より良い䞀般化) を維持しながら、畳み蟌みニュヌラル ネットワヌク (CNN) の望たしい特性が導入されたす。私たちは広範な実隓を実斜するこずで CvT を怜蚌し、このアプロヌチが ImageNet-1k 䞊の他のビゞョン トランスフォヌマヌや ResNet よりも、パラメヌタが少なく、FLOP が䜎く、最先端のパフォヌマンスを実珟するこずを瀺したす。加えお、より倧きなデヌタセット (䟋: ImageNet-22k) で事前トレヌニングし、䞋流のタスクに合わせお埮調敎するず、パフォヌマンスの向䞊が維持されたす。 ImageNet-22k で事前トレヌニングされた圓瀟の CvT-W24 は、ImageNet-1k val set で 87.7\% ずいうトップ 1 の粟床を獲埗しおいたす。最埌に、私たちの結果は、既存のビゞョン トランスフォヌマヌの重芁なコンポヌネントである䜍眮゚ンコヌディングが、このモデルでは完党に削陀できるため、高解像床のビゞョン タスクの蚭蚈が簡玠化されるこずを瀺しおいたす。*
このモデルは [anugunj](https://huggingface.co/anugunj) によっお提䟛されたした。元のコヌドは [ここ](https://github.com/microsoft/CvT) にありたす。
## Usage tips
- CvT ã¢ãã«ã¯éåžžã® Vision Transformer ã§ãããç³ã¿èŸŒã¿ã§ãã¬ãŒãã³ã°ãããŠããŸãã ImageNet-1K ããã³ CIFAR-100 ã§åŸ®èª¿æŽãããšã[ãªãªãžãã« ã¢ãã« (ViT)](vit) ãããåªããããã©ãŒãã³ã¹ãçºæ®ããŸãã
- カスタム デヌタでの埮調敎だけでなく掚論に関するデモ ノヌトブックも [ここ](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) で確認できたす ([`ViTFeatureExtractor`] を [`AutoImageProcessor`] に、[`ViTForImageClassification`] を [`CvtForImageClassification`] に眮き換えるだけで枈みたす)。
- 利甚可胜なチェックポむントは、(1) [ImageNet-22k](http://www.image-net.org/) (1,400 䞇の画像ず 22,000 クラスのコレクション) でのみ事前トレヌニングされたもの、(2) ImageNet-22k でさらに埮調敎されたもの、たたは (3) [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (ILSVRC 2012 ずも呌ばれる、130 䞇の画像ず 1,000 クラスのコレクション) でさらに埮調敎されたもののいずれかです。
## Resources
CvT を始めるのに圹立぀公匏 Hugging Face およびコミュニティ (🌎 で瀺される) リ゜ヌスのリスト。
<PipelineTag pipeline="image-classification"/>
- [`CvtForImageClassification`] は、この [サンプル スクリプト](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) および [ノヌトブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) によっおサポヌトされおいたす。
- 参照: [画像分類タスク ガむド](../tasks/image_classification)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
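以䞋は、画像分類での掚論の最小限のスケッチです (チェックポむント名は䞀䟋です)。

```python
import torch
import requests
from PIL import Image

from transformers import AutoImageProcessor, CvtForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# チェックポむント名は䞀䟋
processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# ImageNet-1k クラスのうち、最もスコアの高いクラスを衚瀺
print(model.config.id2label[logits.argmax(-1).item()])
```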
## CvtConfig
[[autodoc]] CvtConfig
<frameworkcontent>
<pt>
## CvtModel
[[autodoc]] CvtModel
- forward
## CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
</pt>
<tf>
## TFCvtModel
[[autodoc]] TFCvtModel
- call
## TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call
</tf>
</frameworkcontent>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/bert-generation.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BertGeneration
## Overview
BertGeneration モデルは、[`EncoderDecoderModel`] ず組み合わせおシヌケンス間のタスクに利甚できる BERT モデルで、Sascha Rothe、Shashi Narayan、Aliaksei Severyn によっお [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) で提案されたした。
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*倧芏暡なニュヌラル モデルの教垫なし事前トレヌニングは、最近、自然蚀語凊理 (NLP) に革呜をもたらしたした。 NLP 実践者は、公開されたチェックポむントからりォヌムスタヌトするこずで、コンピュヌティング時間を倧幅に節玄しながら、耇数のベンチマヌクで最先端の技術を掚進しおきたした。これたでのずころ、その泚目は䞻に自然蚀語理解タスクに集たっおいたした。この論文では、シヌケンス生成のための事前トレヌニング枈みチェックポむントの有効性を実蚌したす。私たちは、公開されおいる事前トレヌニング枈み BERT、GPT-2、および RoBERTa チェックポむントず互換性のある Transformer ベヌスのシヌケンス間モデルを開発し、これらのチェックポむントで゚ンコヌダずデコヌダを初期化するこずの有甚性に぀いお広範な実蚌研究を実斜したした。私たちのモデルは、機械翻蚳、テキストの芁玄、文の分割、および文の融合に関する新しい最先端の結果をもたらしたす。*
## Usage examples and tips
- モデルを [`EncoderDecoderModel`] ず組み合わせお䜿甚するず、埌続の埮調敎のために 2 ぀の事前トレヌニングされた BERT チェックポむントを掻甚できたす。
```python
>>> # leverage checkpoints for Bert2Bert model...
>>> # use BERT's cls token as BOS token and sep token as EOS token
>>> encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)
>>> # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
>>> decoder = BertGenerationDecoder.from_pretrained(
... "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
>>> bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
>>> # create tokenizer...
>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")
>>> input_ids = tokenizer(
... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
>>> # train...
>>> loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
>>> loss.backward()
```
- äºåãã¬ãŒãã³ã°ããã [`EncoderDecoderModel`] ãã¢ãã« ããã§çŽæ¥å©çšã§ããŸãã
```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> input_ids = tokenizer(
... "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids
>>> outputs = sentence_fuser.generate(input_ids)
>>> print(tokenizer.decode(outputs[0]))
```
チップ

- [`BertGenerationEncoder`] ず [`BertGenerationDecoder`] は、[`EncoderDecoder`] ず組み合わせお䜿甚する必芁がありたす。
- 芁玄、文の分割、文の融合、および翻蚳の堎合、入力に特別なトヌクンは必芁ありたせん。したがっお、入力の末尟に EOS トヌクンを远加しないでください。
このモデルは、[patrickvonplaten](https://huggingface.co/patrickvonplaten) によっお提䟛されたした。元のコヌドは [ここ](https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder) にありたす。
## BertGenerationConfig
[[autodoc]] BertGenerationConfig
## BertGenerationTokenizer
[[autodoc]] BertGenerationTokenizer
- save_vocabulary
## BertGenerationEncoder
[[autodoc]] BertGenerationEncoder
- forward
## BertGenerationDecoder
[[autodoc]] BertGenerationDecoder
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/dialogpt.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DialoGPT
## Overview
DialoGPT は、Yizhe Zhang、Siqi Sun、Michel Galley、Yen-Chun Chen、Chris Brockett、Xiang Gao、Jianfeng Gao、Jingjing Liu、Bill Dolan によっお [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) で提案されたした。これは、Reddit から抜出された 1 億 4,700 䞇件の䌚話のやりずりでトレヌニングされた GPT2 モデルです。
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*私たちは、倧芏暡で調敎可胜なニュヌラル䌚話応答生成モデル DialoGPT (察話生成事前トレヌニング枈み倉換噚) を玹介したす。 2005 幎から 2017 幎にかけお Reddit のコメント チェヌンから抜出された 1 億 4,700 䞇件の䌚話のやりずりを察象にトレヌニングされた DialoGPT は、Hugging Face PyTorch トランスフォヌマヌを拡匵しお、シングルタヌン ダむアログ蚭定における自動評䟡ず人間による評䟡の䞡方で、人間に近いパフォヌマンスを達成したした。䌚話システムで DialoGPT を掻甚するず、匷力なベヌスラむン システムよりも関連性が高く、内容が充実し、コンテキストに䞀貫性のある応答が生成されるこずを瀺したす。むンテリゞェントなオヌプンドメむン察話システムの研究ず開発を促進するために、事前トレヌニングされたモデルずトレヌニング パむプラむンが公開されおいたす。*
元のコヌドは [ここ](https://github.com/microsoft/DialoGPT) にありたす。
## Usage tips
- DialoGPT は絶察䜍眮埋め蟌みを備えたモデルであるため、通垞は入力を巊偎ではなく右偎にパディングするこずをお勧めしたす。
- DialoGPT は、䌚話デヌタの因果蚀語モデリング (CLM) 目暙に基づいおトレヌニングされおいるため、オヌプンドメむン察話システムにおける応答生成に匷力です。
- DialoGPT を䜿甚するず、[DialoGPT's model card](https://huggingface.co/microsoft/DialoGPT-medium) に瀺されおいるように、ナヌザヌはわずか 10 行のコヌドでチャット ボットを䜜成できたす (以䞋のスケッチを参照)。
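以䞋は、モデル カヌドの䟋に沿った最小限のチャットのスケッチです (察話履歎を毎タヌン連結しお文脈ずしお枡したす)。

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for step in range(5):
    # ナヌザヌ入力を゚ンコヌドし、末尟に EOS トヌクンを远加する
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # これたでの察話履歎に新しい入力を連結する
    bot_input_ids = (
        torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
        if chat_history_ids is not None
        else new_user_input_ids
    )
    # 履歎党䜓を文脈ずしお応答を生成する
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("DialoGPT:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```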
トレヌニング

DialoGPT をトレヌニングたたは埮調敎するには、因果蚀語モデリング トレヌニングを䜿甚できたす。公匏論文を匕甚するず: *私たちは OpenAI GPT-2 に埓っお、マルチタヌン察話セッションを長いテキストずしおモデル化し、生成タスクを蚀語モデリングずしおフレヌム化したす。たずダむアログ セッション内のすべおのダむアログ タヌンを長いテキスト x_1,..., x_N (N はシヌケンス長) に連結したす。* 詳现に぀いおは、元の論文を参照しおください。
<Tip>
DialoGPT のアヌキテクチャは GPT2 モデルに基づいおいたす。 API リファレンスず䟋に぀いおは、[GPT2 のドキュメント ペヌゞ](gpt2) を参照しおください。
</Tip>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/convnext.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ConvNeXT
## Overview
ConvNeXT ã¢ãã«ã¯ã[A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 㧠Zhuang LiuãHanzi MaoãChao-Yuan WuãChristoph FeichtenhoferãTrevor DarrellãSaining Xie ã«ãã£ãŠææ¡ãããŸããã
ConvNeXT ã¯ãããžã§ã³ ãã©ã³ã¹ãã©ãŒããŒã®èšèšããã€ã³ã¹ãã¬ãŒã·ã§ã³ãåŸãçŽç²ãªç³ã¿èŸŒã¿ã¢ãã« (ConvNet) ã§ãããããžã§ã³ ãã©ã³ã¹ãã©ãŒããŒãããåªããããã©ãŒãã³ã¹ãçºæ®ãããšäž»åŒµããŠããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*芖芚認識の「狂隒の 20 幎代」は、最先端の画像分類モデルずしお ConvNet にすぐに取っお代わった Vision Transformers (ViT) の導入から始たりたした。䞀方、バニラ ViT は、オブゞェクト怜出やセマンティック セグメンテヌションなどの䞀般的なコンピュヌタヌ ビゞョン タスクに適甚するず困難に盎面したす。階局型トランスフォヌマヌ (Swin Transformers など) は、いく぀かの ConvNet の以前の機胜を再導入し、Transformers を汎甚ビゞョン バックボヌンずしお実甚的に可胜にし、幅広い芖芚タスクで顕著なパフォヌマンスを実蚌したした。ただし、このようなハむブリッド アプロヌチの有効性は、䟝然ずしお、畳み蟌み固有の垰玍バむアスではなく、トランスフォヌマヌの本質的な優䜍性によるずころが倧きいず考えられおいたす。この䜜業では、蚭蚈空間を再怜蚎し、玔粋な ConvNet が達成できる限界をテストしたす。暙準 ResNet をビゞョン Transformer の蚭蚈に向けお埐々に「モダン化」し、その過皋でパフォヌマンスの違いに寄䞎するいく぀かの重芁なコンポヌネントを発芋したす。この調査の結果は、ConvNeXt ず呌ばれる玔粋な ConvNet モデルのファミリヌです。 ConvNeXt は完党に暙準の ConvNet モゞュヌルから構築されおおり、粟床ず拡匵性の点で Transformers ず有利に競合し、87.8% の ImageNet トップ 1 粟床を達成し、暙準 ConvNet のシンプルさず効率を維持しながら、COCO 怜出ず ADE20K セグメンテヌションでは Swin Transformers よりも優れたパフォヌマンスを発揮したす。*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.jpg"
alt="æç»" width="600"/>
<small> ConvNeXT アヌキテクチャ。<a href="https://arxiv.org/abs/2201.03545">元の論文</a>から抜粋。</small>
このモデルは、[nielsr](https://huggingface.co/nielsr) によっお提䟛されたした。 TensorFlow バヌゞョンのモデルは [ariG23498](https://github.com/ariG23498)、[gante](https://github.com/gante)、および [sayakpaul](https://github.com/sayakpaul) (同等の貢献) によっお提䟛されたした。元のコヌドは [こちら](https://github.com/facebookresearch/ConvNeXt) にありたす。
## Resources
ConvNeXT の䜿甚を開始するのに圹立぀公匏 Hugging Face およびコミュニティ (🌎 で瀺される) リ゜ヌスのリスト。
<PipelineTag pipeline="image-classification"/>
- [`ConvNextForImageClassification`] は、この [サンプル スクリプト](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) および [ノヌトブック](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) によっおサポヌトされおいたす。
- 参照: [画像分類タスク ガむド](../tasks/image_classification)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
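以䞋は、[`pipeline`] API を䜿った画像分類の最小限のスケッチです (チェックポむント名は䞀䟋です)。

```python
from transformers import pipeline

# チェックポむント名は䞀䟋
classifier = pipeline("image-classification", model="facebook/convnext-tiny-224")
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(preds[:2])  # スコア䞊䜍 2 クラスを衚瀺
```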
## ConvNextConfig
[[autodoc]] ConvNextConfig
## ConvNextFeatureExtractor
[[autodoc]] ConvNextFeatureExtractor
## ConvNextImageProcessor
[[autodoc]] ConvNextImageProcessor
- preprocess
<frameworkcontent>
<pt>
## ConvNextModel
[[autodoc]] ConvNextModel
- forward
## ConvNextForImageClassification
[[autodoc]] ConvNextForImageClassification
- forward
</pt>
<tf>
## TFConvNextModel
[[autodoc]] TFConvNextModel
- call
## TFConvNextForImageClassification
[[autodoc]] TFConvNextForImageClassification
- call
</tf>
</frameworkcontent> | 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/albert.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ALBERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=albert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-albert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/albert-base-v2">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## æŠèŠ
ALBERTã¢ãã«ã¯ãã[ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942)ããšããè«æã§Zhenzhong LanãMingda ChenãSebastian GoodmanãKevin GimpelãPiyush SharmaãRadu Soricutã«ãã£ãŠææ¡ãããŸãããBERTã®ã¡ã¢ãªæ¶è²»ãæžãããã¬ãŒãã³ã°ãé«éåããããã®ãã©ã¡ãŒã¿åæžæè¡ã2ã€ç€ºããŠããŸãïŒ
- åã蟌ã¿è¡åã2ã€ã®å°ããªè¡åã«åå²ããã
- ã°ã«ãŒãéã§åå²ãããç¹°ãè¿ãå±€ã䜿çšããã
è«æã®èŠæšã¯ä»¥äžã®éãã§ãïŒ
*自然蚀語衚珟の事前孊習時にモデルのサむズを増やすず、䞋流タスクのパフォヌマンスが向䞊するこずがしばしばありたす。しかし、ある時点でさらなるモデルの増倧は、GPU/TPU のメモリ制限、長い蚓緎時間、予期せぬモデルの劣化ずいった問題のために困難になりたす。これらの問題に察凊するために、我々は BERT のメモリ消費を䜎枛し、蚓緎速床を高めるための 2 ぀のパラメヌタ削枛技術を提案したす。包括的な実蚌的蚌拠は、我々の提案方法が元の BERT に比べおはるかによくスケヌルするモデルを生み出すこずを瀺しおいたす。たた、文間の䞀貫性のモデリングに焊点を圓おた自己教垫あり損倱を䜿甚し、耇数の文が含たれる䞋流タスクに䞀貫しお助けずなるこずを瀺したす。その結果、我々の最良のモデルは、BERT-large に比べおパラメヌタが少ないにもかかわらず、GLUE、RACE、SQuAD ベンチマヌクで新たな最先端の結果を確立したす。*
ãã®ã¢ãã«ã¯[lysandre](https://huggingface.co/lysandre)ã«ããæäŸãããŸããããã®ã¢ãã«ã®jaxããŒãžã§ã³ã¯[kamalkraj](https://huggingface.co/kamalkraj)ã«ããæäŸãããŸããããªãªãžãã«ã®ã³ãŒãã¯[ãã¡ã](https://github.com/google-research/ALBERT)ã§èŠãããšãã§ããŸãã
## 䜿çšäžã®ãã³ã
- ALBERT は絶察䜍眮埋め蟌みを䜿甚するモデルなので、通垞、入力を巊偎ではなく右偎にパディングするこずが掚奚されたす。
- ALBERT は繰り返し局を䜿甚するためメモリ䜿甚量は小さくなりたすが、同じ数の繰り返し局を反埩しなければならないため、隠れ局の数が同じであれば BERT のようなアヌキテクチャず同様の蚈算コストがかかりたす。
- 埋め蟌みサむズ E は隠れサむズ H ず異なりたす。これは、埋め蟌みが文脈に䟝存しない1 ぀の埋め蟌みベクトルが 1 ぀のトヌクンを衚すのに察し、隠れ状態は文脈に䟝存する1 ぀の隠れ状態がトヌクン系列を衚すため、H >> E ずするこずがより論理的です。たた、埋め蟌み行列のサむズは V x E ず倧きいですV は語圙サむズ。E < H であれば、パラメヌタは少なくなりたすこの点は、このセクションの埌のスケッチで確認できたす。
- 局はパラメヌタを共有するグルヌプに分割されおいたすメモリ節玄のため。次文予枬NSP: Next Sentence Predictionは文の順序予枬に眮き換えられたす入力では、2 ぀の文 A ず Bそれらは連続しおいるがあり、A に続いお B を䞎えるか、B に続いお A を䞎えたす。モデルはそれらが入れ替わっおいるかどうかを予枬する必芁がありたす。
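以䞋は、埋め蟌みサむズ E ず隠れサむズ H の分離、およびグルヌプ間での局共有を構成で確認する最小限のスケッチです (倀は䞀䟋です)。

```python
from transformers import AlbertConfig, AlbertModel

# E (embedding_size) << H (hidden_size) ずし、局はグルヌプ間でパラメヌタを共有する
config = AlbertConfig(
    vocab_size=30000,
    embedding_size=128,   # E: 文脈に䟝存しない埋め蟌みの次元
    hidden_size=768,      # H: 文脈に䟝存する隠れ状態の次元
    num_hidden_layers=12,
    num_hidden_groups=1,  # 党 12 局が 1 グルヌプでパラメヌタを共有
)
model = AlbertModel(config)
print(sum(p.numel() for p in model.parameters()))  # パラメヌタ総数を確認
```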
## åèè³æ
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質åå¿çã¿ã¹ã¯ã¬ã€ã](../tasks/question_answering)
- [ãã¹ã¯ãããèšèªã¢ãã«ã¿ã¹ã¯ã¬ã€ã](../tasks/masked_language_modeling)
- [å€è¢éžæã¿ã¹ã¯ã¬ã€ã](../tasks/multiple_choice)
## AlbertConfig
[[autodoc]] AlbertConfig
## AlbertTokenizer
[[autodoc]] AlbertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## AlbertTokenizerFast
[[autodoc]] AlbertTokenizerFast
## Albert specific outputs
[[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput
[[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
<frameworkcontent>
<pt>
## AlbertModel
[[autodoc]] AlbertModel
- forward
## AlbertForPreTraining
[[autodoc]] AlbertForPreTraining
- forward
## AlbertForMaskedLM
[[autodoc]] AlbertForMaskedLM
- forward
## AlbertForSequenceClassification
[[autodoc]] AlbertForSequenceClassification
- forward
## AlbertForMultipleChoice
[[autodoc]] AlbertForMultipleChoice
## AlbertForTokenClassification
[[autodoc]] AlbertForTokenClassification
- forward
## AlbertForQuestionAnswering
[[autodoc]] AlbertForQuestionAnswering
- forward
</pt>
<tf>
## TFAlbertModel
[[autodoc]] TFAlbertModel
- call
## TFAlbertForPreTraining
[[autodoc]] TFAlbertForPreTraining
- call
## TFAlbertForMaskedLM
[[autodoc]] TFAlbertForMaskedLM
- call
## TFAlbertForSequenceClassification
[[autodoc]] TFAlbertForSequenceClassification
- call
## TFAlbertForMultipleChoice
[[autodoc]] TFAlbertForMultipleChoice
- call
## TFAlbertForTokenClassification
[[autodoc]] TFAlbertForTokenClassification
- call
## TFAlbertForQuestionAnswering
[[autodoc]] TFAlbertForQuestionAnswering
- call
</tf>
<jax>
## FlaxAlbertModel
[[autodoc]] FlaxAlbertModel
- __call__
## FlaxAlbertForPreTraining
[[autodoc]] FlaxAlbertForPreTraining
- __call__
## FlaxAlbertForMaskedLM
[[autodoc]] FlaxAlbertForMaskedLM
- __call__
## FlaxAlbertForSequenceClassification
[[autodoc]] FlaxAlbertForSequenceClassification
- __call__
## FlaxAlbertForMultipleChoice
[[autodoc]] FlaxAlbertForMultipleChoice
- __call__
## FlaxAlbertForTokenClassification
[[autodoc]] FlaxAlbertForTokenClassification
- __call__
## FlaxAlbertForQuestionAnswering
[[autodoc]] FlaxAlbertForQuestionAnswering
- __call__
</jax>
</frameworkcontent>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/bros.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# BROS
## Overview
BROS ã¢ãã«ã¯ãTeakgyu HonãDonghyun KimãMingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park ã«ãã£ãŠ [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539) ã§ææ¡ãããŸããã
BROS は *BERT Relying On Spatiality* の略です。これは、䞀連のトヌクンずその境界ボックスを入力ずしお受け取り、䞀連の隠れ状態を出力する゚ンコヌダヌ専甚の Transformer モデルです。 BROS は、絶察的な空間情報を䜿甚する代わりに、盞察的な空間情報を゚ンコヌドしたす。
BERT で䜿甚されるトヌクンマスク蚀語モデリング目暙 (TMLM) ず、新しい゚リアマスク蚀語モデリング目暙 (AMLM) の 2 ぀の目暙で事前トレヌニングされおいたす。 TMLM では、トヌクンはランダムにマスクされ、モデルは空間情報ず他のマスクされおいないトヌクンを䜿甚しおマスクされたトヌクンを予枬したす。 AMLM は TMLM の 2D バヌゞョンで、テキスト ブロック (゚リア) をランダムにマスクし、TMLM ず同じ情報で予枬したす。
`BrosForTokenClassification` には、BrosModel の䞊に単玔な線圢局があり、各トヌクンのラベルを予枬したす。

`BrosSpadeEEForTokenClassification` には、BrosModel の䞊に `initial_token_classifier` ず `subsequent_token_classifier` がありたす。 `initial_token_classifier` は各゚ンティティの最初のトヌクンを予枬するために䜿甚され、`subsequent_token_classifier` ぱンティティ内の次のトヌクンを予枬するために䜿甚されたす。 `BrosSpadeELForTokenClassification` には BrosModel の䞊に `entity_linker` があり、`entity_linker` は 2 ぀の゚ンティティ間の関係を予枬するために䜿甚されたす。

`BrosForTokenClassification` ず `BrosSpadeEEForTokenClassification` は基本的に同じゞョブを実行したす。ただし、`BrosForTokenClassification` は入力トヌクンが完党にシリアル化されおいるこずを前提ずしおいたす (トヌクンは 2D 空間に存圚するため、これは非垞に困難な䜜業です)。䞀方、`BrosSpadeEEForTokenClassification` は 1 ぀のトヌクンから次の接続トヌクンを予枬するため、シリアル化゚ラヌの凊理をより柔軟に行うこずができたす。

`BrosSpadeELForTokenClassification` ぱンティティ間のリンク タスクを実行したす。2 ぀の゚ンティティが䜕らかの関係を共有する堎合、(ある゚ンティティの) 1 ぀のトヌクンから (別の゚ンティティの) 別のトヌクンぞの関係を予枬したす。
BROS ã¯ãæ瀺çãªèŠèŠæ©èœã«äŸåããã«ãFUNSDãSROIEãCORDãSciTSR ãªã©ã® Key Information Extraction (KIE) ãã³ãããŒã¯ã§åç以äžã®çµæãéæããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*文曞画像からの重芁情報抜出 (KIE) には、2 次元 (2D) 空間におけるテキストの文脈的および空間的意味論を理解する必芁がありたす。最近の研究の倚くは、文曞画像の芖芚的特城ずテキストおよびそのレむアりトを組み合わせるこずに重点を眮いた事前トレヌニング枈み蚀語モデルを開発するこずで、この課題を解決しようずしおいたす。䞀方、このペヌパヌでは、テキストずレむアりトの効果的な組み合わせずいう基本に立ち返っおこの問題に取り組みたす。具䜓的には、BROS (BERT Relying On Spatiality) ずいう名前の事前トレヌニング枈み蚀語モデルを提案したす。この蚀語モデルは、2D 空間内のテキストの盞察䜍眮を゚ンコヌドし、゚リア マスキング戊略を䜿甚しおラベルのないドキュメントから孊習したす。 2D 空間内のテキストを理解するためのこの最適化されたトレヌニング スキヌムにより、BROS は、芖芚的な特城に䟝存するこずなく、4 ぀の KIE ベンチマヌク (FUNSD、SROIE*、CORD、および SciTSR) で以前の方法ず比范しお同等以䞊のパフォヌマンスを瀺したした。たた、この論文では、KIE タスクにおける 2 ぀の珟実䞖界の課題 ((1) 間違ったテキスト順序による゚ラヌの最小化、および (2) 少数の䞋流䟋からの効率的な孊習) を明らかにし、以前の方法に察する BROS の優䜍性を実蚌したす。*
このモデルは [jinho8345](https://huggingface.co/jinho8345) によっお寄皿されたした。元のコヌドは [ここ](https://github.com/clovaai/bros) にありたす。
## Usage tips and examples
- [`~transformers.BrosModel.forward`] には、`input_ids` ず `bbox` (バりンディング ボックス) が必芁です。各境界ボックスは、(x0, y0, x1, y1) 圢匏 (巊䞊隅、右䞋隅) である必芁がありたす。境界ボックスの取埗は倖郚 OCR システムに䟝存したす。 「x」座暙はドキュメント画像の幅で正芏化する必芁があり、「y」座暙はドキュメント画像の高さで正芏化する必芁がありたす。
```python
def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
# here, bboxes are numpy array
# Normalize bbox -> 0 ~ 1
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
```
- [`~transformers.BrosForTokenClassification.forward`、`~transformers.BrosSpadeEEForTokenClassification.forward`、`~transformers.BrosSpadeELForTokenClassification.forward`] では、損倱蚈算に `input_ids` ず `bbox` だけでなく `box_first_token_mask` も必芁です。これは、各ボックスの先頭以倖のトヌクンを陀倖するためのマスクです。このマスクは、単語から `input_ids` を䜜成するずきに境界ボックスの開始トヌクン むンデックスを保存するこずで取埗できたす。次のコヌドで `box_first_token_mask` を䜜成できたす。
```python
import itertools
from typing import List

import numpy as np


def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512):
box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_)
# encode(tokenize) each word from words (List[str])
input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words]
# get the length of each box
tokens_length_list: List[int] = [len(l) for l in input_ids_list]
box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list)))
box_start_token_indices = box_end_token_indices - np.array(tokens_length_list)
# filter out the indices that are out of max_seq_length
box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1]
if len(box_start_token_indices) > len(box_end_token_indices):
box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)]
# set box_start_token_indices to True
box_first_token_mask[box_start_token_indices] = True
return box_first_token_mask
```
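以䞋は、ダミヌの境界ボックスを䜿った [`BrosModel`] の順䌝播の最小限のスケッチです (実際には OCR から埗た正芏化枈みの境界ボックスを䜿甚したす)。

```python
import torch
from transformers import BrosProcessor, BrosModel

processor = BrosProcessor.from_pretrained("jinho8345/bros-base-uncased")
model = BrosModel.from_pretrained("jinho8345/bros-base-uncased")

encoding = processor("His name is Rocco.", return_tensors="pt")
# ここではダミヌ倀。実際には各トヌクンに察応する正芏化枈み (x0, y0, x1, y1) を枡す
bbox = torch.tensor([[[0.0, 0.0, 1.0, 1.0]] * encoding["input_ids"].shape[-1]])

outputs = model(input_ids=encoding["input_ids"], bbox=bbox)
print(outputs.last_hidden_state.shape)
```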
## Resources
- ã㢠ã¹ã¯ãªãã㯠[ãã¡ã](https://github.com/clovaai/bros) ã«ãããŸãã
## BrosConfig
[[autodoc]] BrosConfig
## BrosProcessor
[[autodoc]] BrosProcessor
- __call__
## BrosModel
[[autodoc]] BrosModel
- forward
## BrosForTokenClassification
[[autodoc]] BrosForTokenClassification
- forward
## BrosSpadeEEForTokenClassification
[[autodoc]] BrosSpadeEEForTokenClassification
- forward
## BrosSpadeELForTokenClassification
[[autodoc]] BrosSpadeELForTokenClassification
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/detr.md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DETR
## Overview
DETR モデルは、[Transformers を䜿甚した゚ンドツヌ゚ンドのオブゞェクト怜出](https://arxiv.org/abs/2005.12872) で、Nicolas Carion、Francisco Massa、Gabriel Synnaeve、Nicolas Usunier、Alexander Kirillov、Sergey Zagoruyko によっお提案されたした。 DETR は、畳み蟌みバックボヌンず、その埌に゚ンドツヌ゚ンドでトレヌニングできる゚ンコヌダヌ/デコヌダヌ Transformer で構成されたす。
物䜓怜出のために、Faster-R-CNN や Mask-R-CNN などのモデルが持぀耇雑さの倚く (領域提案、非最倧抑制手順、アンカヌ生成など) を倧幅に簡玠化したす。さらに、DETR は、デコヌダ出力の䞊にマスク ヘッドを远加するだけで、パノプティック セグメンテヌションを実行できるように自然に拡匵できたす。
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*物䜓怜出を盎接集合予枬問題ずしお芋る新しい方法を玹介したす。私たちのアプロヌチは怜出パむプラむンを合理化し、タスクに関する事前知識を明瀺的に゚ンコヌドする非最倧抑制手順やアンカヌ生成など、手䜜業で蚭蚈された倚くのコンポヌネントの必芁性を効果的に排陀したす。 DEtection TRansformer たたは DETR ず呌ばれる新しいフレヌムワヌクの䞻な芁玠は、二郚マッチングを通じお䞀意の予枬を匷制するセットベヌスのグロヌバル損倱ず、トランスフォヌマヌ ゚ンコヌダヌ/デコヌダヌ アヌキテクチャです。孊習されたオブゞェクト ク゚リの固定された小さなセットが䞎えられるず、DETR は、オブゞェクトずグロヌバル むメヌゞ コンテキストの関係に぀いお掚論し、最終的な予枬セットを䞊列に盎接出力したす。新しいモデルは抂念的にシンプルであり、他の倚くの最新の怜出噚ずは異なり、特殊なラむブラリを必芁ずしたせん。 DETR は、困難な COCO 物䜓怜出デヌタセットにおいお、確立された高床に最適化された Faster R-CNN ベヌスラむンず同等の粟床ず実行時パフォヌマンスを実蚌したす。さらに、DETR は統䞀された方法でパノプティック セグメンテヌションを生成するように簡単に䞀般化でき、競合するベヌスラむンを倧幅に䞊回るパフォヌマンスを瀺したす。*
このモデルは、[nielsr](https://huggingface.co/nielsr) によっお提䟛されたした。元のコヌドは [こちら](https://github.com/facebookresearch/detr) にありたす。
## How DETR works
[`~transformers.DetrForObjectDetection`] ãã©ã®ããã«æ©èœãããã説æãã TLDR ã¯æ¬¡ã®ãšããã§ãã
たず、事前にトレヌニングされた畳み蟌みバックボヌンを通じお画像が送信されたす (論文では、著者らは ResNet-50/ResNet-101 を䜿甚しおいたす)。バッチ次元も远加するず仮定したしょう。぀たり、画像に 3 ぀のカラヌ チャネル (RGB) があるず仮定するず、バックボヌンぞの入力は圢状 `(batch_size, 3, height, width)` のテン゜ルです。 CNN バックボヌンは、通垞は `(batch_size, 2048, height/32, width/32)` の圢状の、新しい䜎解像床の特城マップを出力したす。これは次に、`nn.Conv2d` レむダヌを䜿甚しお、DETR の Transformer の隠れ次元 (デフォルトでは `256`) に䞀臎するように射圱されたす。これで、圢状 `(batch_size, 256, height/32, width/32)` のテン゜ルが完成したした。続いお、特城マップは平坊化および転眮され、圢状 `(batch_size, seq_len, d_model)` = `(batch_size, width/32*height/32, 256)` のテン゜ルが取埗されたす。したがっお、NLP モデルずの違いは、シヌケンスの長さが実際には通垞よりも長くなる䞀方で、`d_model` は小さくなるこずです (NLP では通垞 768 以䞊です)。
次に、これぱンコヌダを介しお送信され、同じ圢状の `encoder_hidden_states` が出力されたす (これは画像の特城ずしお考えるこずができたす)。次に、いわゆる **オブゞェクト ク゚リ**がデコヌダを通じお送信されたす。これは圢状 `(batch_size, num_queries, d_model)` のテン゜ルで、通垞、`num_queries` は 100 に蚭定され、れロで初期化されたす。これらの入力埋め蟌みは孊習された䜍眮゚ンコヌディングであり、䜜成者はこれをオブゞェクト ク゚リず呌び、゚ンコヌダず同様に、各アテンション局の入力に远加されたす。各オブゞェクト ク゚リは、画像内の特定のオブゞェクトを怜玢したす。デコヌダは、耇数のセルフ アテンション局ず゚ンコヌダ デコヌダ アテンション局を通じおこれらの埋め蟌みを曎新し、同じ圢状の `decoder_hidden_states` を出力したす: `(batch_size, num_queries, d_model)`。次に、オブゞェクト怜出のために 2 ぀のヘッドが䞊郚に远加されたす。各オブゞェクト ク゚リをオブゞェクトの 1 ぀たたは「オブゞェクトなし」に分類するための線圢レむダヌず、各ク゚リの境界ボックスを予枬する MLP です。

モデルは **二郚マッチング損倱**を䜿甚しおトレヌニングされたす。぀たり、実際に行うこずは、N = 100 個の各オブゞェクト ク゚リの予枬されたクラスず境界ボックスを、同じ長さ N たでパディングされたグラりンド トゥルヌス アノテヌションず比范するこずです (したがっお、画像にオブゞェクトが 4 ぀しか含たれおいない堎合、96 個の泚釈にはクラスずしお「オブゞェクトなし」、境界ボックスずしお「境界ボックスなし」が含たれるだけになりたす)。 [ハンガリアン マッチング アルゎリズム](https://en.wikipedia.org/wiki/Hungarian_algorithm) は、N 個のク゚リのそれぞれから N 個の泚釈のそれぞれぞの最適な 1 察 1 のマッピングを怜玢するために䜿甚されたす。次に、暙準クロス゚ントロピヌ損倱 (クラスに぀いお)、および L1 損倱ず [generalized IoU loss](https://giou.stanford.edu/) の線圢結合 (境界ボックスに぀いお) が、モデルのパラメヌタヌを最適化するために䜿甚されたす。
DETR は、パノプティック セグメンテヌション (セマンティック セグメンテヌションずむンスタンス セグメンテヌションを統合したもの) を実行できるように自然に拡匵できたす。 [`~transformers.DetrForSegmentation`] は、[`~transformers.DetrForObjectDetection`] の䞊にセグメンテヌション マスク ヘッドを远加したす。マスク ヘッドは、共同でトレヌニングするこずも、2 段階のプロセスでトレヌニングするこずもできたす。埌者では、たず [`~transformers.DetrForObjectDetection`] モデルをトレヌニングしお「もの」(むンスタンス) ず「もの」(朚、道路、空などの背景のもの) の䞡方の呚囲の境界ボックスを怜出し、次にすべおの重みをフリヌズしおマスク ヘッドのみを 25 ゚ポックトレヌニングしたす。実隓的には、これら 2 ぀のアプロヌチは同様の結果をもたらしたす。なお、ボックスの予枬は、ハンガリアン マッチングがボックス間の距離を䜿甚しお蚈算されるため、トレヌニングを可胜にするために必芁です。
## Usage tips
- DETR は、いわゆる **オブゞェクト ク゚リ** を䜿甚しお、画像内のオブゞェクトを怜出したす。ク゚リの数によっお、単䞀の画像内で怜出できるオブゞェクトの最倧数が決たり、デフォルトでは 100 に蚭定されたす ([`~transformers.DetrConfig`] のパラメヌタヌ `num_queries` を参照)。ある皋床の䜙裕があるのは良いこずです (COCO では、著者らは 100 を䜿甚したしたが、COCO むメヌゞ内のオブゞェクトの最倧数は玄 70 です)。
- DETR のデコヌダヌは、ク゚リの埋め蟌みを䞊行しお曎新したす。これは、自己回垰デコヌドを䜿甚する GPT-2 のような蚀語モデルずは異なりたす。したがっお、因果的泚意マスクは䜿甚されたせん。
- DETR は、投圱前に、各セルフ アテンション局ずクロス アテンション局の隠れ状態に䜍眮埋め蟌みを远加したす。画像の䜍眮埋め蟌みに぀いおは、固定正匊波たたは孊習枈みの絶察䜍眮埋め蟌みのどちらかを遞択できたす。デフォルトでは、[`~transformers.DetrConfig`] のパラメヌタ `position_embedding_type` は `"sine"` に蚭定されたす。
- DETR の䜜成者は、トレヌニング䞭、特にデコヌダで補助損倱を䜿甚するず、モデルが各クラスの正しい数のオブゞェクトを出力するのに圹立぀こずに気づきたした。 [`~transformers.DetrConfig`] のパラメヌタ `auxiliary_loss` を `True` に蚭定するず、フィヌドフォワヌド ニュヌラル ネットワヌクずハンガリヌ損倱が各デコヌダ局の埌に远加されたす (FFN はパラメヌタを共有したす)。
- 耇数のノヌドにわたる分散環境でモデルをトレヌニングする堎合は、_modeling_detr.py_ の _DetrLoss_ クラスの _num_boxes_ 倉数を曎新する必芁がありたす。耇数のノヌドでトレヌニングする堎合、[元の実装](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L227-L232) に芋られるように、これはすべおのノヌドにわたるタヌゲット ボックスの平均数に蚭定する必芁がありたす。
- [`~transformers.DetrForObjectDetection`] および [`~transformers.DetrForSegmentation`] は、[timm ラむブラリ](https://github.com/rwightman/pytorch-image-models) で利甚可胜な任意の畳み蟌みバックボヌンで初期化できたす。たずえば、MobileNet バックボヌンを䜿甚した初期化は、[`~transformers.DetrConfig`] の `backbone` 属性を `"tf_mobilenetv3_small_075"` に蚭定し、その構成を䜿甚しおモデルを初期化するこずで実行できたす。
- DETR は、最短蟺が䞀定のピクセル数以䞊、か぀最長蟺が最倧 1333 ピクセル以䞋になるように入力画像のサむズを倉曎したす。トレヌニング時には、最短蟺がランダムに最小 480、最倧 800 ピクセルになるようにスケヌル拡匵が䜿甚されたす。掚論時には、最短蟺が 800 に蚭定されたす。 [`~transformers.DetrImageProcessor`] を䜿甚しお、モデル甚の画像 (およびオプションの COCO 圢匏の泚釈) を準備できたす。このサむズ倉曎により、バッチ内の画像のサむズが異なる堎合がありたす。 DETR は、画像をバッチ内の最倧サむズたでパディングし、どのピクセルが実数でどのピクセルがパディングであるかを瀺すピクセル マスクを䜜成するこずによっお、この問題を解決したす。あるいは、[`~transformers.DetrImageProcessor.pad_and_create_pixel_mask`] を䜿甚しお画像をバッチ凊理するカスタムの `collate_fn` を定矩するこずもできたす。
- 画像のサむズによっお䜿甚されるメモリの量が決たり、したがっお `batch_size` が決たりたす。 GPU あたり 2 のバッチ サむズを䜿甚するこずをお勧めしたす。詳现に぀いおは、[この Github スレッド](https://github.com/facebookresearch/detr/issues/150) を参照しおください。
DETR モデルをむンスタンス化するには 3 ぀の方法がありたす (奜みに応じお)。

オプション 1: モデル党䜓の事前トレヌニングされた重みを䜿甚しお DETR をむンスタンス化する
```py
>>> from transformers import DetrForObjectDetection
>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
```
ãªãã·ã§ã³ 2: Transformer ã«ã€ããŠã¯ã©ã³ãã ã«åæåãããéã¿ã䜿çšã㊠DETR ãã€ã³ã¹ã¿ã³ã¹åããŸãããããã¯ããŒã³ã«ã€ããŠã¯äºåã«ãã¬ãŒãã³ã°ãããéã¿ã䜿çšããŸã
```py
>>> from transformers import DetrConfig, DetrForObjectDetection
>>> config = DetrConfig()
>>> model = DetrForObjectDetection(config)
```
ãªãã·ã§ã³ 3: ããã¯ããŒã³ + ãã©ã³ã¹ãã©ãŒããŒã®ã©ã³ãã ã«åæåãããéã¿ã䜿çšã㊠DETR ãã€ã³ã¹ã¿ã³ã¹åããŸãã
```py
>>> config = DetrConfig(use_pretrained_backbone=False)
>>> model = DetrForObjectDetection(config)
```
| Task | Object detection | Instance segmentation | Panoptic segmentation |
|------|------------------|-----------------------|-----------------------|
| **Description** | 画像内のオブゞェクトの呚囲の境界ボックスずクラス ラベルを予枬する | 画像内のオブゞェクト (぀たりむンスタンス) の呚囲のマスクを予枬する | 画像内のオブゞェクト (むンスタンス) ず「もの」(朚や道路などの背景) の䞡方の呚囲のマスクを予枬する |
| **Model** | [`~transformers.DetrForObjectDetection`] | [`~transformers.DetrForSegmentation`] | [`~transformers.DetrForSegmentation`] |
| **Example dataset** | COCO detection | COCO detection, COCO panoptic | COCO panoptic | |
| **Format of annotations to provide to** [`~transformers.DetrImageProcessor`] | {'image_id': `int`, 'annotations': `List[Dict]`} each Dict being a COCO object annotation | {'image_id': `int`, 'annotations': `List[Dict]`} (in case of COCO detection) or {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} (in case of COCO panoptic) | {'file_name': `str`, 'image_id': `int`, 'segments_info': `List[Dict]`} and masks_path (path to directory containing PNG files of the masks) |
| **Postprocessing** (i.e. converting the output of the model to Pascal VOC format) | [`~transformers.DetrImageProcessor.post_process`] | [`~transformers.DetrImageProcessor.post_process_segmentation`] | [`~transformers.DetrImageProcessor.post_process_segmentation`], [`~transformers.DetrImageProcessor.post_process_panoptic`] |
| **evaluators** | `CocoEvaluator` with `iou_types="bbox"` | `CocoEvaluator` with `iou_types="bbox"` or `"segm"` | `CocoEvaluator` with `iou_tupes="bbox"` or `"segm"`, `PanopticEvaluator` |
぀たり、COCO 怜出たたは COCO パノプティック圢匏でデヌタを準備しおから、[`~transformers.DetrImageProcessor`] を䜿甚しお `pixel_values`、`pixel_mask`、およびオプションの `labels` を䜜成する必芁がありたす。これらを䜿甚しおモデルをトレヌニング (たたは埮調敎) できたす。評䟡するには、たず [`~transformers.DetrImageProcessor`] の埌凊理メ゜ッドの 1 ぀を䜿甚しおモデルの出力を倉換したす。これらは `CocoEvaluator` たたは `PanopticEvaluator` のいずれかに提䟛でき、平均適合率 (mAP) やパノラマ品質 (PQ) などのメトリクスを蚈算できたす。埌者のオブゞェクトは [元のリポゞトリ](https://github.com/facebookresearch/detr) に実装されおいたす。評䟡の詳现に぀いおは、[サンプル ノヌトブック](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR) を参照しおください。
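以䞋は、掚論ず埌凊理の流れを瀺す最小限のスケッチです (しきい倀は䞀䟋です)。

```python
import torch
import requests
from PIL import Image

from transformers import DetrImageProcessor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# 100 個のク゚リ出力を信頌床でフィルタし、Pascal VOC 圢匏 (x_min, y_min, x_max, y_max) に倉換する
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```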
## Resources
DETR の䜿甚を開始するのに圹立぀公匏 Hugging Face およびコミュニティ (🌎 で瀺されおいる) リ゜ヌスのリスト。
<PipelineTag pipeline="object-detection"/>
- ã«ã¹ã¿ã ããŒã¿ã»ããã® [`DetrForObjectDetection`] ãš [`DetrForSegmentation`] ã®åŸ®èª¿æŽã説æãããã¹ãŠã®ãµã³ãã« ããŒãããã¯ã¯ã[ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR) ã§èŠã€ããããšãã§ããŸãã ã
- 参照: [オブゞェクト怜出タスク ガむド](../tasks/object_detection)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## DetrConfig
[[autodoc]] DetrConfig
## DetrImageProcessor
[[autodoc]] DetrImageProcessor
- preprocess
- post_process_object_detection
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
## DetrFeatureExtractor
[[autodoc]] DetrFeatureExtractor
- __call__
- post_process_object_detection
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
## DETR specific outputs
[[autodoc]] models.detr.modeling_detr.DetrModelOutput
[[autodoc]] models.detr.modeling_detr.DetrObjectDetectionOutput
[[autodoc]] models.detr.modeling_detr.DetrSegmentationOutput
## DetrModel
[[autodoc]] DetrModel
- forward
## DetrForObjectDetection
[[autodoc]] DetrForObjectDetection
- forward
## DetrForSegmentation
[[autodoc]] DetrForSegmentation
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/blip.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLIP
## Overview
BLIP モデルは、[BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) で Junnan Li、Dongxu Li、Caiming Xiong、Steven Hoi によっお提案されたした。
BLIP ã¯ã次ã®ãããªããŸããŸãªãã«ãã¢ãŒãã« ã¿ã¹ã¯ãå®è¡ã§ããã¢ãã«ã§ãã
- èŠèŠçãªè³ªåå¿ç
- ç»åãšããã¹ãã®æ€çŽ¢ïŒç»åãšããã¹ãã®ãããã³ã°ïŒ
- ç»åãã£ãã·ã§ã³
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*èŠèŠèšèªäºåãã¬ãŒãã³ã° (VLP) ã«ãããå€ãã®èŠèŠèšèªã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ãåäžããŸããã
ただし、既存の事前トレヌニング枈みモデルのほずんどは、理解ベヌスのタスクたたは生成ベヌスのタスクのいずれかでのみ優れおいたす。さらに、最適ではない監芖゜ヌスである Web から収集されたノむズの倚い画像ずテキストのペアを䜿甚しおデヌタセットをスケヌルアップするこずで、パフォヌマンスの向䞊が倧幅に達成されたした。この論文では、芖芚蚀語の理解ず生成タスクの䞡方に柔軟に移行する新しい VLP フレヌムワヌクである BLIP を提案したす。 BLIP は、キャプションをブヌトストラップするこずで、ノむズの倚い Web デヌタを効果的に利甚したす。キャプショナヌが合成キャプションを生成し、フィルタヌがノむズの倚いキャプションを陀去したす。画像テキスト怜玢 (平均再珟率 +2.7%@1)、画像キャプション䜜成 (CIDEr で +2.8%)、VQA (VQA スコアは +1.6%)。 BLIP は、れロショット方匏でビデオ蚀語タスクに盎接転送した堎合にも、匷力な䞀般化胜力を発揮したす。コヌド、モデル、デヌタセットがリリヌスされおいたす。*
![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif)
ãã®ã¢ãã«ã¯ [ybelkada](https://huggingface.co/ybelkada) ã«ãã£ãŠæäŸãããŸããã
元のコヌドは [ここ](https://github.com/salesforce/BLIP) にありたす。
## Resources
- カスタム デヌタセットで画像キャプション甚に BLIP を埮調敎する方法に぀いおは、[Jupyter ノヌトブック](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) を参照しおください。
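以䞋は、画像キャプション生成での掚論の最小限のスケッチです (チェックポむント名は䞀䟋です)。

```python
import requests
from PIL import Image

from transformers import BlipProcessor, BlipForConditionalGeneration

# チェックポむント名は䞀䟋
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# 条件なしのキャプション生成
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```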
## BlipConfig
[[autodoc]] BlipConfig
- from_text_vision_configs
## BlipTextConfig
[[autodoc]] BlipTextConfig
## BlipVisionConfig
[[autodoc]] BlipVisionConfig
## BlipProcessor
[[autodoc]] BlipProcessor
## BlipImageProcessor
[[autodoc]] BlipImageProcessor
- preprocess
<frameworkcontent>
<pt>
## BlipModel
[[autodoc]] BlipModel
- forward
- get_text_features
- get_image_features
## BlipTextModel
[[autodoc]] BlipTextModel
- forward
## BlipVisionModel
[[autodoc]] BlipVisionModel
- forward
## BlipForConditionalGeneration
[[autodoc]] BlipForConditionalGeneration
- forward
## BlipForImageTextRetrieval
[[autodoc]] BlipForImageTextRetrieval
- forward
## BlipForQuestionAnswering
[[autodoc]] BlipForQuestionAnswering
- forward
</pt>
<tf>
## TFBlipModel
[[autodoc]] TFBlipModel
- call
- get_text_features
- get_image_features
## TFBlipTextModel
[[autodoc]] TFBlipTextModel
- call
## TFBlipVisionModel
[[autodoc]] TFBlipVisionModel
- call
## TFBlipForConditionalGeneration
[[autodoc]] TFBlipForConditionalGeneration
- call
## TFBlipForImageTextRetrieval
[[autodoc]] TFBlipForImageTextRetrieval
- call
## TFBlipForQuestionAnswering
[[autodoc]] TFBlipForQuestionAnswering
- call
</tf>
</frameworkcontent> | 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/conditional_detr.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Conditional DETR
## Overview
æ¡ä»¶ä»ã DETR ã¢ãã«ã¯ã[Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 㧠Depu MengãXiaokang ChenãZejia FanãGang ZengãHouqiang LiãYuhui YuanãLei SunãJingdong Wang ã«ãã£ãŠææ¡ãããŸãããæ¡ä»¶ä»ã DETR ã¯ãé«é DETR ãã¬ãŒãã³ã°ã®ããã®æ¡ä»¶ä»ãã¯ãã¹ã¢ãã³ã·ã§ã³ ã¡ã«ããºã ãæäŸããŸããæ¡ä»¶ä»ã DETR 㯠DETR ããã 6.7 åãã 10 åéãåæããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æè¿éçºããã DETR ã¢ãããŒãã¯ããã©ã³ã¹ãã©ãŒã㌠ãšã³ã³ãŒããŒããã³ãã³ãŒã㌠ã¢ãŒããã¯ãã£ãç©äœæ€åºã«é©çšããææãªããã©ãŒãã³ã¹ãå®çŸããŸãããã®è«æã§ã¯ããã¬ãŒãã³ã°ã®åæãé
ããšããéèŠãªåé¡ãæ±ããé«é DETR ãã¬ãŒãã³ã°ã®ããã®æ¡ä»¶ä»ãã¯ãã¹ã¢ãã³ã·ã§ã³ ã¡ã«ããºã ã玹ä»ããŸããç§ãã¡ã®ã¢ãããŒãã¯ãDETR ã«ãããã¯ãã¹ã¢ãã³ã·ã§ã³ã 4 ã€ã®åè¢ã®äœçœ®ç¹å®ãšããã¯ã¹ã®äºæž¬ã«ãããŠã³ã³ãã³ãã®åã蟌ã¿ã«å€§ããäŸåããŠãããããé«å質ã®ã³ã³ãã³ãã®åã蟌ã¿ã®å¿
èŠæ§ãé«ãŸãããã¬ãŒãã³ã°ã®é£æ床ãé«ããªããšããç¹ã«åæ©ã¥ããããŠããŸããæ¡ä»¶ä»ã DETR ãšåŒã°ããç§ãã¡ã®ã¢ãããŒãã¯ããã³ãŒããŒã®ãã«ãããã ã¯ãã¹ã¢ãã³ã·ã§ã³ã®ããã«ããã³ãŒããŒã®åã蟌ã¿ããæ¡ä»¶ä»ãã®ç©ºéã¯ãšãªãåŠç¿ããŸããå©ç¹ã¯ãæ¡ä»¶ä»ã空éã¯ãšãªãéããŠãåã¯ãã¹ã¢ãã³ã·ã§ã³ ãããããåå¥ã®é å (ããšãã°ã1 ã€ã®ãªããžã§ã¯ãã®ç«¯ãŸãã¯ãªããžã§ã¯ã ããã¯ã¹å
ã®é å) ãå«ããã³ãã«æ³šç®ã§ããããšã§ããããã«ããããªããžã§ã¯ãåé¡ãšããã¯ã¹ååž°ã®ããã®åå¥ã®é åãããŒã«ã©ã€ãºããããã®ç©ºéç¯å²ãçãŸããã³ã³ãã³ãã®åã蟌ã¿ãžã®äŸåãç·©åããããã¬ãŒãã³ã°ã容æã«ãªããŸããå®éšçµæã¯ãæ¡ä»¶ä»ã DETR ãããã¯ããŒã³ R50 ããã³ R101 㧠6.7 åéãåæãããã匷åãªããã¯ããŒã³ DC5-R50 ããã³ DC5-R101 㧠10 åéãåæããããšã瀺ããŠããŸããã³ãŒã㯠https://github.com/Atten4Vis/ConditionalDETR ã§å
¥æã§ããŸãã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg"
alt="æç»" width="600"/>
<small> æ¡ä»¶ä»ã DETR ã¯ãå
ã® DETR ã«æ¯ã¹ãŠã¯ããã«éãåæã瀺ããŸãã <a href="https://arxiv.org/abs/2108.06152">å
ã®è«æ</a>ããåŒçšã</small>
ãã®ã¢ãã«ã¯ [DepuMeng](https://huggingface.co/DepuMeng) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/Atten4Vis/ConditionalDETR) ã«ãããŸãã
## Resources
- [ãªããžã§ã¯ãæ€åºã¿ã¹ã¯ã¬ã€ã](../tasks/object_detection)
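åèãŸã§ã«ã[`ConditionalDetrForObjectDetection`] ãšç»åããã»ããµãŒã«ããç©äœæ€åºæšè«ã®ç°¡åãªã¹ã±ããã以äžã«ç€ºããŸãïŒãã§ãã¯ãã€ã³ã `microsoft/conditional-detr-resnet-50` ãšãããå€ 0.7 ã¯äžäŸã§ãïŒã
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# ããããå€ä»¥äžã®æ€åºçµæã (ã¹ã³ã¢, ã©ãã«, ããã¯ã¹) ã«åŸåŠçãã
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```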
## ConditionalDetrConfig
[[autodoc]] ConditionalDetrConfig
## ConditionalDetrImageProcessor
[[autodoc]] ConditionalDetrImageProcessor
- preprocess
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
## ConditionalDetrFeatureExtractor
[[autodoc]] ConditionalDetrFeatureExtractor
- __call__
- post_process_object_detection
- post_process_instance_segmentation
- post_process_semantic_segmentation
- post_process_panoptic_segmentation
## ConditionalDetrModel
[[autodoc]] ConditionalDetrModel
- forward
## ConditionalDetrForObjectDetection
[[autodoc]] ConditionalDetrForObjectDetection
- forward
## ConditionalDetrForSegmentation
[[autodoc]] ConditionalDetrForSegmentation
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/audio-spectrogram-transformer.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Audio Spectrogram Transformer
## æŠèŠ
Audio Spectrogram Transformerã¢ãã«ã¯ã[AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778)ãšããè«æã§Yuan GongãYu-An ChungãJames Glassã«ãã£ãŠææ¡ãããŸãããããã¯ãé³å£°ãç»åïŒã¹ãã¯ããã°ã©ã ïŒã«å€æããããšã§ãé³å£°ã«[Vision Transformer](vit)ãé©çšããŸãããã®ã¢ãã«ã¯é³å£°åé¡ã«ãããŠæå
端ã®çµæãåŸãŠããŸãã
è«æã®èŠæšã¯ä»¥äžã®éãã§ãïŒ
*éå»10幎éã§ãç³ã¿èŸŒã¿ãã¥ãŒã©ã«ãããã¯ãŒã¯ïŒCNNïŒã¯ãé³å£°ã¹ãã¯ããã°ã©ã ãã察å¿ããã©ãã«ãžã®çŽæ¥çãªãããã³ã°ãåŠç¿ããããšãç®æãããšã³ãããŒãšã³ãã®é³å£°åé¡ã¢ãã«ã®äž»èŠãªæ§æèŠçŽ ãšããŠåºãæ¡çšãããŠããŸãããé·è·é¢ã®ã°ããŒãã«ãªã³ã³ããã¹ããããè¯ãæãããããæè¿ã®åŸåãšããŠãCNNã®äžã«ã»ã«ãã¢ãã³ã·ã§ã³æ©æ§ãè¿œå ããCNN-ã¢ãã³ã·ã§ã³ãã€ããªããã¢ãã«ã圢æããããšããããŸããããããCNNãžã®äŸåãå¿
èŠãã©ããããããŠçŽç²ã«ã¢ãã³ã·ã§ã³ã«åºã¥ããã¥ãŒã©ã«ãããã¯ãŒã¯ã ãã§é³å£°åé¡ã«ãããŠè¯ãããã©ãŒãã³ã¹ãåŸãããšãã§ãããã©ããã¯æããã§ã¯ãããŸãããæ¬è«æã§ã¯ããããã®åãã«çãããããé³å£°åé¡çšãšããŠã¯æåã®ãç³ã¿èŸŒã¿ãªãã§çŽç²ã«ã¢ãã³ã·ã§ã³ããŒã¹ã®ã¢ãã«ã§ããAudio Spectrogram TransformerïŒASTïŒã玹ä»ããŸããæã
ã¯ASTãæ§ã
ãªãªãŒãã£ãªåé¡ãã³ãããŒã¯ã§è©äŸ¡ããAudioSetã§0.485 mAPãESC-50ã§95.6%ã®æ£è§£çãSpeech Commands V2ã§98.1%ã®æ£è§£çãšããæ°ããªæå
端ã®çµæãéæããŸããã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/audio_spectogram_transformer_architecture.png"
alt="drawing" width="600"/>
<small> Audio Spectrogram Transformerã®ã¢ãŒããã¯ãã£ã<a href="https://arxiv.org/abs/2104.01778">å
è«æ</a>ããæç²ã</small>
ãã®ã¢ãã«ã¯[nielsr](https://huggingface.co/nielsr)ããæäŸãããŸããã
ãªãªãžãã«ã®ã³ãŒãã¯[ãã¡ã](https://github.com/YuanGongND/ast)ã§èŠãããšãã§ããŸãã
## 䜿çšäžã®ãã³ã
- ç¬èªã®ããŒã¿ã»ããã§Audio Spectrogram TransformerïŒASTïŒããã¡ã€ã³ãã¥ãŒãã³ã°ããå Žåãå
¥åã®æ£èŠåïŒå
¥åã®å¹³åã0ãæšæºåå·®ã0.5ã«ããããšïŒãè¡ãããšãæšå¥šãããŸãã[`ASTFeatureExtractor`]ã¯ãããåŠçããŸããããã©ã«ãã§ã¯AudioSetã®å¹³åãšæšæºåå·®ã䜿çšããŠããããšã«æ³šæããŠãã ãããèè
ãäžæµã®ããŒã¿ã»ããã®çµ±èšãã©ã®ããã«èšç®ããŠãããã¯ã[`ast/src/get_norm_stats.py`](https://github.com/YuanGongND/ast/blob/master/src/get_norm_stats.py)ã§ç¢ºèªã§ããŸãã
- ASTã«ã¯äœãåŠç¿çãå¿
èŠã§ããèè
ã¯[PSLAè«æ](https://arxiv.org/abs/2102.01243)ã§ææ¡ãããCNNã¢ãã«ã«æ¯ã¹ãŠ10åå°ããåŠç¿çã䜿çšããŠããŸãããŸããçŽ æ©ãåæããã®ã§ãã¿ã¹ã¯ã«é©ããåŠç¿çãšåŠç¿çã¹ã±ãžã¥ãŒã©ãŒãæ¢ãããšããå§ãããŸãã
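åèãŸã§ã«ã[`ASTForAudioClassification`]ã«ããé³å£°åé¡æšè«ã®æå°éã®ã¹ã±ããã以äžã«ç€ºããŸãïŒãã§ãã¯ãã€ã³ã`MIT/ast-finetuned-audioset-10-10-0.4593`ãš16 kHzã®ãµã³ãã«é³å£°ã¯äžäŸã§ãïŒã
```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, ASTForAudioClassification

# 16 kHz ã®ãµã³ãã«é³å£°ïŒäžäŸãšããŠãããŒããŒã¿ã»ããã䜿çšïŒ
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]

feature_extractor = AutoFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

# ç¹åŸŽæœåºåšãæ³¢åœ¢ãæ£èŠåæžã¿ã®ã¡ã« ã¹ãã¯ããã°ã©ã ã«å€æãã
inputs = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_id])
```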
## åèè³æ
Audio Spectrogram Transformerã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒã®Hugging Faceããã³ã³ãã¥ããã£ïŒðã§ç€ºãããŠããïŒã®åèè³æã®äžèŠ§ã§ãã
<PipelineTag pipeline="audio-classification"/>
- ASTãçšããé³å£°åé¡ã®æšè«ã説æããããŒãããã¯ã¯[ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/AST)ã§èŠãããšãã§ããŸãã
- [`ASTForAudioClassification`]ã¯ããã®[äŸç€ºã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)ãš[ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)ã«ãã£ãŠãµããŒããããŠããŸãã
- ãã¡ããåç
§ïŒ[é³å£°åé¡ã¿ã¹ã¯](../tasks/audio_classification)ã
ããã«åèè³æãæåºãããå Žåã¯ããæ°è»œã«Pull RequestãéããŠãã ãããç§ãã¡ã¯ãããã¬ãã¥ãŒããããŸãïŒåèè³æã¯ãæ¢åã®ãã®ãè€è£œããã®ã§ã¯ãªããäœãæ°ããããšã瀺ãããšãçæ³çã§ãã
## ASTConfig
[[autodoc]] ASTConfig
## ASTFeatureExtractor
[[autodoc]] ASTFeatureExtractor
- __call__
## ASTModel
[[autodoc]] ASTModel
- forward
## ASTForAudioClassification
[[autodoc]] ASTForAudioClassification
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/ctrl.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CTRL
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=Salesforce/ctrl">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/tiny-ctrl">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
CTRL ã¢ãã«ã¯ãNitish Shirish Keskar*ãBryan McCann*ãLav R. VarshneyãCaiming Xiong, Richard Socher ã«ãã£ãŠ [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) ã§ææ¡ãããŸããã
ããã¯ãæåã®ããŒã¯ã³ãå¶åŸ¡ã³ãŒã (ãªã³ã¯ãæžç±ãWikipedia ãªã©) ãšããŠäºçŽãããçŽ 140 GB ã®ããã¹ã ããŒã¿ãããªãéåžžã«å€§èŠæš¡ãªã³ãŒãã¹äžã§ãèšèªã¢ããªã³ã°ã«ããäºåãã¬ãŒãã³ã°ãããå æç (äžæ¹å) ãã©ã³ã¹ãã©ãŒããŒã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*倧èŠæš¡ãªèšèªã¢ãã«ã¯ææãªããã¹ãçæèœåã瀺ããŠããŸããããŠãŒã¶ãŒãçæãããããã¹ãã®ç¹å®ã®åŽé¢ãç°¡åã«å¶åŸ¡ããããšã¯ã§ããŸãããç§ãã¡ã¯ãã¹ã¿ã€ã«ãã³ã³ãã³ããã¿ã¹ã¯åºæã®åäœãå¶åŸ¡ããå¶åŸ¡ã³ãŒããæ¡ä»¶ãšããŠèšç·Žãããã16 å 3,000 äžãã©ã¡ãŒã¿ã®æ¡ä»¶ä»ããã©ã³ã¹ãã©ãŒããŒèšèªã¢ãã« CTRL ããªãªãŒã¹ããŸããå¶åŸ¡ã³ãŒãã¯çã®ããã¹ããšèªç¶ã«å
±çããæ§é ãã掟çãããã®ã§ãæåž«ãªãåŠç¿ã®å©ç¹ãç¶æããªããããã¹ãçæãããæ瀺çã«å¶åŸ¡ããããšãå¯èœã«ããŸãããããã®ã³ãŒãã䜿çšãããšãããã·ãŒã±ã³ã¹ãäžãããããšãã«ãã¬ãŒãã³ã° ããŒã¿ã®ã©ã®éšåãæãå¯èœæ§ãé«ããã CTRL ã§äºæž¬ããããšãã§ããŸããããã¯ãã¢ãã«ããŒã¹ã®ãœãŒã¹åž°å±ãéããŠå€§éã®ããŒã¿ãåæããããã®æœåšçãªæ¹æ³ãæäŸããŸãã*
ãã®ã¢ãã«ã¯ã[keskarnitishr](https://huggingface.co/keskarnitishr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/salesforce/ctrl) ã«ãããŸãã
## Usage tips
- CTRL ã¯å¶åŸ¡ã³ãŒããå©çšããŠããã¹ããçæããŸããçæã¯ç¹å®ã®åèªãæããŸãã¯ãªã³ã¯ããå§ããããšã§ãäžè²«ããããã¹ããçæããŸãã詳ãã㯠[å
ã®å®è£
](https://github.com/salesforce/ctrl) ãåç
§ããŠãã ããã
- CTRL ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- CTRL ã¯å æèšèªã¢ããªã³ã° (CLM) ã®ç®çã§ãã¬ãŒãã³ã°ãããŠãããããã·ãŒã±ã³ã¹å
ã®æ¬¡ã®ããŒã¯ã³ã®äºæž¬ã«åŒ·åã§ãããã®æ©èœãå©çšãããšãCTRL ã¯æ§æçã«äžè²«ããããã¹ããçæã§ããŸããããã¯ *run_generation.py* ãµã³ãã« ã¹ã¯ãªããã§ç¢ºèªã§ããŸãã
- PyTorch ã¢ãã«ã¯ã以åã«èšç®ãããããŒãšå€ã®ã¢ãã³ã·ã§ã³ ãã¢ã§ãã `past_key_values` ãå
¥åãšããŠåãåãããšãã§ããŸããTensorFlow ã¢ãã«ã¯ `past` ãå
¥åãšããŠåãå
¥ããŸãã`past_key_values` å€ã䜿çšãããšãããã¹ãçæã®ã³ã³ããã¹ãã«ãããŠãã¢ãã«ãäºåã«èšç®ãããå€ãåèšç®ããã«æžã¿ãŸãããã®åŒæ°ã®äœ¿çšæ³ã®è©³çŽ°ã«ã€ããŠã¯ã[`forward`](model_doc/ctrl#transformers.CTRLModel.forward) ã¡ãœãããåç
§ããŠãã ããã
## Resources
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
## CTRLConfig
[[autodoc]] CTRLConfig
## CTRLTokenizer
[[autodoc]] CTRLTokenizer
- save_vocabulary
<frameworkcontent>
<pt>
## CTRLModel
[[autodoc]] CTRLModel
- forward
## CTRLLMHeadModel
[[autodoc]] CTRLLMHeadModel
- forward
## CTRLForSequenceClassification
[[autodoc]] CTRLForSequenceClassification
- forward
</pt>
<tf>
## TFCTRLModel
[[autodoc]] TFCTRLModel
- call
## TFCTRLLMHeadModel
[[autodoc]] TFCTRLLMHeadModel
- call
## TFCTRLForSequenceClassification
[[autodoc]] TFCTRLForSequenceClassification
- call
</tf>
</frameworkcontent>
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/bart.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BART
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=bart">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-bart-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/bart-large-mnli">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
**å
責äºé
:** äœãå¥åŠãªãã®ãèŠã€ããå Žåã¯ã[Github åé¡](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) ãæåºãã@patrickvonplaten ã«å²ãåœãŠãŠãã ããã
## Overview
BART ã¢ãã«ã¯ã[BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) 㧠Mike LewisãYinhan LiuãNaman GoyalãMarjan GhazvininejadãAbdelrahman MohamedãOmer LevyãVes StoyanovãLuke Zettlemoyer ã«ãã£ãŠ 2019 幎 10 æ 29 æ¥ã«ææ¡ãããŸããã
èŠçŽã«ãããšã
- BART ã¯ãåæ¹åãšã³ã³ãŒã (BERT ãªã©) ãšå·Šããå³ãžã®ãã³ãŒã (GPT ãªã©) ãåãããæšæºç㪠seq2seq/æ©æ¢°ç¿»èš³ã¢ãŒããã¯ãã£ã䜿çšããŸãã
- äºåãã¬ãŒãã³ã° ã¿ã¹ã¯ã«ã¯ãå
ã®æã®é åºãã©ã³ãã ã«ã·ã£ããã«ããæäœãšãããã¹ãã®ç¯å²ãåäžã®ãã¹ã¯ ããŒã¯ã³ã«çœ®ãæããæ°ããç©ŽåããïŒin-fillingïŒã¹ããŒã ãå«ãŸããŸãã
- BART ã¯ãããã¹ãçæçšã«åŸ®èª¿æŽããå Žåã«ç¹ã«å¹æçã§ãããç解ã¿ã¹ã¯ã«ãé©ããŠããŸããåçã®ãã¬ãŒãã³ã° ãªãœãŒã¹ã§ GLUE ããã³ SQuAD ã«ããã RoBERTa ã®ããã©ãŒãã³ã¹ã«å¹æµããããŸããŸãªæœè±¡çãªå¯Ÿè©±ã質åå¿çãèŠçŽã¿ã¹ã¯ã«ãããŠãROUGE ã§æ倧 6 ãã€ã³ãã®åäžãšãšãã«æå
端ã®çµæãéæããŸãã
ãããïŒ
- BART ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- ãšã³ã³ãŒããŒãšãã³ãŒããŒãåããã·ãŒã±ã³ã¹ããŒã·ãŒã±ã³ã¹ ã¢ãã«ã§ãããšã³ã³ãŒããŒã«ã¯ç ŽæããããŒãžã§ã³ã®ããŒã¯ã³ãäŸçµŠããããã³ãŒããŒã«ã¯å
ã®ããŒã¯ã³ãäŸçµŠãããŸãïŒãã ããéåžžã® Transformer ãã³ãŒããŒãšåæ§ã«ãå°æ¥ã®åèªãé ãããã®ãã¹ã¯ããããŸãïŒããšã³ã³ãŒããŒã®äºåãã¬ãŒãã³ã° ã¿ã¹ã¯ã§ã¯ã以äžã®å€æã®çµã¿åãããé©çšãããŸãã
  * ã©ã³ãã ãªããŒã¯ã³ããã¹ã¯ããŸã (BERT ãšåæ§)
  * ã©ã³ãã ãªããŒã¯ã³ãåé€ããŸã
  * k åã®ããŒã¯ã³ã®ã¹ãã³ã 1 ã€ã®ãã¹ã¯ ããŒã¯ã³ã§ãã¹ã¯ããŸã (é·ã 0 ã®ã¹ãã³ã¯ãã¹ã¯ ããŒã¯ã³ã®æ¿å
¥ã«ãªããŸã)
  * æã䞊ã¹æ¿ããŸã
  * ããã¥ã¡ã³ããå転ããŠç¹å®ã®ããŒã¯ã³ããå§ãŸãããã«ããŸã
ãã®ã¢ãã«ã¯ [sshleifer](https://huggingface.co/sshleifer) ã«ãã£ãŠæäŸãããŸãããèè
ã®ã³ãŒã㯠[ãã](https://github.com/pytorch/fairseq/tree/master/examples/bart) ã«ãããŸãã
### Examples
- ã·ãŒã±ã³ã¹éã¿ã¹ã¯çšã® BART ããã³ãã®ä»ã®ã¢ãã«ã埮調æŽããããã®äŸãšã¹ã¯ãªããã¯ã次ã®å Žæã«ãããŸãã
[examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization/README.md)ã
- Hugging Face `datasets` ãªããžã§ã¯ãã䜿çšã㊠[`BartForConditionalGeneration`] ããã¬ãŒãã³ã°ããæ¹æ³ã®äŸã¯ããã® [ãã©ãŒã©ã ãã£ã¹ã«ãã·ã§ã³](https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904) ã§èŠã€ããããšãã§ããŸãã
- [æœåºããããã§ãã¯ãã€ã³ã](https://huggingface.co/models?search=distilbart) ã¯ããã® [è«æ](https://arxiv.org/abs/2010.13002) ã§èª¬æãããŠããŸãã
## Implementation Notes
- Bart ã¯ã·ãŒã±ã³ã¹ã®åé¡ã« `token_type_ids` ã䜿çšããŸãããé©åã«åå²ããã«ã¯ [`BartTokenizer`] ãŸã㯠[`~BartTokenizer.encode`] ã䜿çšããŠãã ããã
- [`BartModel`] ã®ãã©ã¯ãŒã ãã¹ã¯ã`decoder_input_ids` ãæž¡ãããªãã£ãå Žåããããèªåçã«äœæããŸããããã¯ä»ã®äžéšã®ã¢ããªã³ã° API ãšã¯ç°ãªããŸãããã®æ©èœã®äžè¬çãªäœ¿çšäŸã¯ãã¹ã¯ã®å¡ãã€ã¶ãã§ãã
- ã¢ãã«ã®äºæž¬ã¯ã`forced_bos_token_id=0` ã®å Žåã«å
ã®å®è£
ãšåäžã«ãªãããã«æå³ãããŠããŸãããã ãããã㯠[`fairseq.encode`] ã«æž¡ãæååãã¹ããŒã¹ã§å§ãŸãå Žåã«ã®ã¿æ©èœããŸãã
- [`~generation.GenerationMixin.generate`] ã¯ãèŠçŽãªã©ã®æ¡ä»¶ä»ãçæã¿ã¹ã¯ã«äœ¿çšããå¿
èŠããããŸãã詳现ã¯ãã® docstring ã®äŸãåç
§ããŠãã ããã
- *facebook/bart-large-cnn* ã®éã¿ãããŒãããã¢ãã«ã«ã¯ `mask_token_id` ããªãããããã¹ã¯ãåããã¿ã¹ã¯ãå®è¡ã§ããŸããã
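åèãŸã§ã«ãèŠçŽçšã«åŸ®èª¿æŽããã *facebook/bart-large-cnn* ãã§ãã¯ãã€ã³ã㧠[`~generation.GenerationMixin.generate`] ã䜿çšããæ¡ä»¶ä»ãçæïŒèŠçŽïŒã®ç°¡åãªã¹ã±ããã以äžã«ç€ºããŸãïŒå
¥åããã¹ããšçæãã©ã¡ãŒã¿ã¯äžäŸã§ãïŒã
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

ARTICLE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires."
)
inputs = tokenizer([ARTICLE], max_length=1024, truncation=True, return_tensors="pt")

# èŠçŽãªã©ã®æ¡ä»¶ä»ãçæã«ã¯ generate ã䜿çšãã
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```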
## Mask Filling
`facebook/bart-base` ããã³ `facebook/bart-large` ãã§ãã¯ãã€ã³ãã䜿çšããŠããã«ãããŒã¯ã³ ãã¹ã¯ãåããããšãã§ããŸãã
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
```
## Resources
BART ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="summarization"/>
- [åæ£ãã¬ãŒãã³ã°: ð€ Transformers ãš Amazon SageMaker ã䜿çšããèŠçŽã®ããã® BART/T5 ã®ãã¬ãŒãã³ã°](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq) ã«é¢ããããã°æçš¿ã
- [blurr ã䜿çšã㊠fastai ã§èŠçŽããããã« BART ã埮調æŽãã](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) æ¹æ³ã«é¢ããããŒãããã¯ã ð ð
- [ãã¬ãŒã㌠ã¯ã©ã¹ã䜿çšã㊠2 ã€ã®èšèªã§èŠçŽããããã« BART ã埮調æŽãã](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) æ¹æ³ã«é¢ããããŒãããã¯ã ð
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [`FlaxBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization) ã§ãµããŒããããŠããŸãã
- [èŠçŽ](https://huggingface.co/course/chapter7/5?fw=pt#summarization) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
<PipelineTag pipeline="fill-mask"/>
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [`FlaxBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) ããã³ [ããŒãããã¯]( https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb)ã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã°](https://huggingface.co/course/chapter7/3?fw=pt) ð€ é¡ãã° ã³ãŒã¹ã®ç« ã
- [ãã¹ã¯ãããèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_lang_modeling)
<PipelineTag pipeline="translation"/>
- [ãã³ãã£ãŒèªããè±èªãžã®ç¿»èš³ã« Seq2SeqTrainer ã䜿çšã㊠mBART ã埮調æŽããæ¹æ³ã«é¢ããããŒã](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)ã ð
- [`BartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFBartForConditionalGeneration`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
以äžãåç
§ããŠãã ããã
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [æœåºããããã§ãã¯ãã€ã³ã](https://huggingface.co/models?search=distilbart) ã¯ããã® [è«æ](https://arxiv.org/abs/2010.13002) ã§èª¬æãããŠããŸãã
## BartConfig
[[autodoc]] BartConfig
- all
## BartTokenizer
[[autodoc]] BartTokenizer
- all
## BartTokenizerFast
[[autodoc]] BartTokenizerFast
- all
## BartModel
[[autodoc]] BartModel
- forward
## BartForConditionalGeneration
[[autodoc]] BartForConditionalGeneration
- forward
## BartForSequenceClassification
[[autodoc]] BartForSequenceClassification
- forward
## BartForQuestionAnswering
[[autodoc]] BartForQuestionAnswering
- forward
## BartForCausalLM
[[autodoc]] BartForCausalLM
- forward
## TFBartModel
[[autodoc]] TFBartModel
- call
## TFBartForConditionalGeneration
[[autodoc]] TFBartForConditionalGeneration
- call
## TFBartForSequenceClassification
[[autodoc]] TFBartForSequenceClassification
- call
## FlaxBartModel
[[autodoc]] FlaxBartModel
- __call__
- encode
- decode
## FlaxBartForConditionalGeneration
[[autodoc]] FlaxBartForConditionalGeneration
- __call__
- encode
- decode
## FlaxBartForSequenceClassification
[[autodoc]] FlaxBartForSequenceClassification
- __call__
- encode
- decode
## FlaxBartForQuestionAnswering
[[autodoc]] FlaxBartForQuestionAnswering
- __call__
- encode
- decode
## FlaxBartForCausalLM
[[autodoc]] FlaxBartForCausalLM
- __call__
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/blip-2.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BLIP-2
## Overview
BLIP-2 ã¢ãã«ã¯ãJunnan LiãDongxu LiãSilvio SavareseãSteven Hoi ã«ãã£ãŠ [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) ã§ææ¡ãããŸãããBLIP-2 ã¯ããããªãŒãºãããäºåãã¬ãŒãã³ã°æžã¿ç»åãšã³ã³ãŒããŒãšå€§èŠæš¡èšèªã¢ãã« (LLM) ã®éã«é
眮ãã軜éã® 12 å±€ Transformer ãšã³ã³ãŒããŒããã¬ãŒãã³ã°ããããšã§äž¡è
ã掻çšããããŸããŸãªèŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸããæã泚ç®ãã¹ãç¹ã¯ãBLIP-2 ããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã 54 åã® 1 ã«æããªãããŒãã·ã§ãã VQAv2 ã«ãããŠã800 åãã©ã¡ãŒã¿ ã¢ãã«ã§ãã [Flamingo](https://arxiv.org/abs/2204.14198) ã 8.7% äžåã£ãŠããããšã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*倧èŠæš¡ã¢ãã«ã®ãšã³ãããŒãšã³ãã®ãã¬ãŒãã³ã°ã«ãããèŠèŠãšèšèªã®äºåãã¬ãŒãã³ã°ã®ã³ã¹ãã¯ãŸããŸãæ³å€ãªãã®ã«ãªã£ãŠããŠããŸãããã®è«æã§ã¯ãåžè²©ã®åçµæžã¿äºåãã¬ãŒãã³ã°ç»åãšã³ã³ãŒããšåçããã倧èŠæš¡èšèªã¢ãã«ããèŠèŠèšèªã®äºåãã¬ãŒãã³ã°ãããŒãã¹ãã©ãããããæ±çšçã§å¹ççãªäºåãã¬ãŒãã³ã°æŠç¥ã§ãã BLIP-2 ãææ¡ããŸããBLIP-2 ã¯ã2 段éã§äºåãã¬ãŒãã³ã°ããã軜éã® Querying Transformer ã§ã¢ããªãã£ã®ã®ã£ãããæ©æž¡ãããŸããæåã®ã¹ããŒãžã§ã¯ãããªãŒãºãããç»åãšã³ã³ãŒããŒããåŠç¿ããèŠèŠèšèªè¡šçŸãããŒãã¹ãã©ããããŸãã第 2 段éã§ã¯ãåçµãããèšèªã¢ãã«ããèŠèŠããèšèªãžã®çæåŠç¿ãããŒãã¹ãã©ããããŸããBLIP-2 ã¯ãæ¢åã®æ¹æ³ããããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã倧å¹
ã«å°ãªãã«ãããããããããŸããŸãªèŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸããããšãã°ãç§ãã¡ã®ã¢ãã«ã¯ããã¬ãŒãã³ã°å¯èœãªãã©ã¡ãŒã¿ãŒã 54 åã® 1 å°ãªããŒãã·ã§ãã VQAv2 ã§ãFlamingo80B ã 8.7% äžåã£ãŠããŸãããŸããèªç¶èšèªã®åœä»€ã«åŸãããšãã§ããããŒãã·ã§ããç»åããããã¹ããžã®çæãšããã¢ãã«ã®æ°ããæ©èœãå®èšŒããŸã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
<small> BLIP-2 ã¢ãŒããã¯ãã£ã<a href="https://arxiv.org/abs/2301.12597">å
ã®è«æ</a>ããæç²ã</small>
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/salesforce/LAVIS/tree/5ee63d688ba4cebff63acee04adaef2dee9af207) ã«ãããŸãã
## Usage tips
- BLIP-2 ã¯ãç»åãšãªãã·ã§ã³ã®ããã¹ã ããã³ãããæå®ããŠæ¡ä»¶ä»ãããã¹ããçæããããã«äœ¿çšã§ããŸããæšè«æã«ã¯ã [`generate`] ã¡ãœããã䜿çšããããšããå§ãããŸãã
- [`Blip2Processor`] ã䜿çšããŠã¢ãã«çšã®ç»åãæºåããäºæž¬ãããããŒã¯ã³ ID ããã³ãŒãããŠããã¹ãã«æ»ãããšãã§ããŸãã
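åèãŸã§ã«ãç»åãšãªãã·ã§ã³ã®ããã¹ã ããã³ããã§æ¡ä»¶ä»ãçæãè¡ãæå°éã®ã¹ã±ããã以äžã«ç€ºããŸãïŒãã§ãã¯ãã€ã³ã `Salesforce/blip2-opt-2.7b` ãšããã³ããã¯äžäŸã§ããã¢ãã«ã¯å€§ãããããå®éã«ã¯ `torch_dtype=torch.float16` ã GPU ã®äœ¿çšãæ€èšããŠãã ããïŒã
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# ç»åã®ã¿ïŒãã£ãã·ã§ã³çæïŒãŸãã¯ããã³ããä»ãïŒVQAïŒã§æ¡ä»¶ä»ãã§ãã
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```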
## Resources
BLIP-2 ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
- ç»åãã£ãã·ã§ã³ãããžã¥ã¢ã«è³ªåå¿ç (VQA)ãããã³ãã£ããã®ãããªäŒè©±ã®ããã® BLIP-2 ã®ã㢠ããŒãããã¯ã¯ã[ãã¡ã](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/BLIP-2) ã«ãããŸãã
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## Blip2Config
[[autodoc]] Blip2Config
- from_vision_qformer_text_configs
## Blip2VisionConfig
[[autodoc]] Blip2VisionConfig
## Blip2QFormerConfig
[[autodoc]] Blip2QFormerConfig
## Blip2Processor
[[autodoc]] Blip2Processor
## Blip2VisionModel
[[autodoc]] Blip2VisionModel
- forward
## Blip2QFormerModel
[[autodoc]] Blip2QFormerModel
- forward
## Blip2Model
[[autodoc]] Blip2Model
- forward
- get_text_features
- get_image_features
- get_qformer_features
## Blip2ForConditionalGeneration
[[autodoc]] Blip2ForConditionalGeneration
- forward
- generate | 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/blenderbot.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Blenderbot
**å
責äºé
:** äœãå¥åŠãªãã®ãèŠã€ããå Žåã¯ã[Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) ã§å ±åããŠãã ããã
## Overview
Blender ãã£ããããã ã¢ãã«ã¯ãStephen RollerãEmily DinanãNaman GoyalãDa JuãMary WilliamsonãYinhan LiuãJing XuãMyle OttãKurt ShusterãEric M. SmithãY-Lan BoureauãJason Weston ã«ãã£ãŠ 2020 幎 4 æ 30 æ¥ã« [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) ã§ææ¡ãããŸããã
è«æã®èŠæšã¯æ¬¡ã®ãšããã§ãã
*ãªãŒãã³ãã¡ã€ã³ã®ãã£ãããããã®æ§ç¯ã¯ãæ©æ¢°åŠç¿ç 究ã«ãšã£ãŠé£ããåéã§ããããŸã§ã®ç 究ã§ã¯ããã¥ãŒã©ã« ã¢ãã«ããã©ã¡ãŒã¿ãŒæ°ãšãã¬ãŒãã³ã°å¯Ÿè±¡ããŒã¿ã®ãµã€ãºã§ã¹ã±ãŒãªã³ã°ãããšçµæãåäžããããšã瀺ãããŠããŸãããæã
ã¯ãé«æ§èœã®ãã£ãããããã«ã¯ä»ã®èŠçŽ ãéèŠã§ããããšã瀺ããŸããè¯ãäŒè©±ã«ã¯ãäŒè©±ã®å°é家ãã·ãŒã ã¬ã¹ã«èåãããå€ãã®ã¹ãã«ãå¿
èŠã§ããã€ãŸããé
åçãªè©±ã®ãã€ã³ããæäŸããŠçžæã®è©±ãèããäžè²«ãããã«ãœããç¶æããªãããç¥èãå
±æãææ
ã移å
¥ããåæ§ãé©åã«è¡šçŸããããšã§ããé©åãªãã¬ãŒãã³ã° ããŒã¿ãšçææŠç¥ã®éžæãäžããããå Žåã倧èŠæš¡ã¢ãã«ããããã®ã¹ãã«ãåŠç¿ã§ããããšã瀺ããŸãã90Mã2.7Bã9.4B ãã©ã¡ãŒã¿ãŒ ã¢ãã«ã§ãããã®ã¬ã·ãã®ããªã¢ã³ããæ§ç¯ããã¢ãã«ãšã³ãŒããå
¬éããŸãã人éã«ããè©äŸ¡ã§ã¯ãé
åãšäººéæ§ã®æž¬å®ãšãã芳ç¹ãããåœç€Ÿã®æè¯ã®ã¢ãã«ããã«ãã¿ãŒã³å¯Ÿè©±ã«ãããŠæ¢åã®ã¢ãããŒããããåªããŠããããšã瀺ãããŸããã次ã«ãåœç€Ÿã¢ãã«ã®å€±æäºäŸãåæããããšã«ãã£ãŠããã®ç 究ã®éçã«ã€ããŠè«ããŸãã*
ãããïŒ
- Blenderbot ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
ãã®ã¢ãã«ã¯ [sshleifer](https://huggingface.co/sshleifer) ã«ãã£ãŠæäŸãããŸãããèè
ã®ã³ãŒã㯠[ãã](https://github.com/facebookresearch/ParlAI) ã«ãããŸãã
## Implementation Notes
- Blenderbot ã¯ãæšæºã® [seq2seq ã¢ãã« ãã©ã³ã¹ãã©ãŒããŒ](https://arxiv.org/pdf/1706.03762.pdf) ããŒã¹ã®ã¢ãŒããã¯ãã£ã䜿çšããŸãã
- å©çšå¯èœãªãã§ãã¯ãã€ã³ãã¯ã[ã¢ãã« ãã](https://huggingface.co/models?search=blenderbot) ã§èŠã€ããããšãã§ããŸãã
- ãã㯠*ããã©ã«ã* ã® Blenderbot ã¢ãã« ã¯ã©ã¹ã§ãããã ãã`facebook/blenderbot_small_90M` ã®ãããªå°ããªãã§ãã¯ãã€ã³ãã¯ã¢ãŒããã¯ãã£ãç°ãªãããã[BlenderbotSmall](blenderbot-small) ãšäžç·ã«äœ¿çšããå¿
èŠããããŸãã
## Usage
ã¢ãã«ã®äœ¿çšäŸã次ã«ç€ºããŸãã
```python
>>> from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
>>> mname = "facebook/blenderbot-400M-distill"
>>> model = BlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = BlenderbotTokenizer.from_pretrained(mname)
>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([UTTERANCE], return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(reply_ids))
["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"]
```
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
## BlenderbotConfig
[[autodoc]] BlenderbotConfig
## BlenderbotTokenizer
[[autodoc]] BlenderbotTokenizer
- build_inputs_with_special_tokens
## BlenderbotTokenizerFast
[[autodoc]] BlenderbotTokenizerFast
- build_inputs_with_special_tokens
## BlenderbotModel
*forward* ããã³ *generate* ã®åŒæ°ã«ã€ããŠã¯ã`transformers.BartModel` ãåç
§ããŠãã ããã
[[autodoc]] BlenderbotModel
- forward
## BlenderbotForConditionalGeneration
*forward* ãš *generate* ã®åŒæ°ã«ã€ããŠã¯ã[`~transformers.BartForConditionalGeneration`] ãåç
§ããŠãã ããã
[[autodoc]] BlenderbotForConditionalGeneration
- forward
## BlenderbotForCausalLM
[[autodoc]] BlenderbotForCausalLM
- forward
## TFBlenderbotModel
[[autodoc]] TFBlenderbotModel
- call
## TFBlenderbotForConditionalGeneration
[[autodoc]] TFBlenderbotForConditionalGeneration
- call
## FlaxBlenderbotModel
[[autodoc]] FlaxBlenderbotModel
- __call__
- encode
- decode
## FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotForConditionalGeneration
- __call__
- encode
- decode
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/deplot.md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DePlot
## Overview
DePlot ã¯ãFangyu LiuãJulian Martin AisenschlosãFrancesco PiccinnoãSyrine KricheneãChenxi PangãKenton LeeãMandar JoshiãWenhu ChenãNigel CollierãYasemin Altun ã®è«æ [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) ã§ææ¡ãããŸããã
è«æã®èŠçŽã«ã¯æ¬¡ã®ããã«èšèŒãããŠããŸãã
*ãã£ãŒãããããããªã©ã®èŠèŠèšèªã¯äººéã®äžçã«éåšããŠããŸããããããããã£ãŒããç解ããã«ã¯ã匷åãªæšè«ã¹ãã«ãå¿
èŠã§ããåŸæ¥ã®æå
端 (SOTA) ã¢ãã«ã«ã¯å°ãªããšãæ°äžã®ãã¬ãŒãã³ã° ãµã³ãã«ãå¿
èŠã§ããããã®æšè«èœåã¯ãç¹ã«äººéãäœæããè€éãªã¯ãšãªã§ã¯äŸç¶ãšããŠå€§å¹
ã«å¶éãããŠããŸãããã®è«æã§ã¯ãèŠèŠèšèªæšè«ã«å¯Ÿããæåã®ã¯ã³ã·ã§ãã ãœãªã¥ãŒã·ã§ã³ã玹ä»ããŸããç§ãã¡ã¯ãèŠèŠèšèªæšè«ã®èª²é¡ã 2 ã€ã®ã¹ãããã«å解ããŸãã(1) ããããããããã¹ããžã®ç¿»èš³ãšã(2) 翻蚳ãããããã¹ãã«å¯Ÿããæšè«ã§ãããã®æ¹æ³ã®éµãšãªãã®ã¯ããããããŸãã¯ãã£ãŒãã®ç»åãç·åœ¢åãããããŒãã«ã«å€æãããDePlot ãšããååã®ã¢ããªãã£å€æã¢ãžã¥ãŒã«ã§ãããã®åŸãDePlot ã®åºåãçŽæ¥äœ¿çšããŠãäºåãã¬ãŒãã³ã°æžã¿ã®å€§èŠæš¡èšèªã¢ãã« (LLM) ãããã³ããããLLM ã®å°æ°ã·ã§ããæšè«æ©èœãå©çšã§ããŸããDePlot ãååŸããã«ã¯ãçµ±äžãããã¿ã¹ã¯åœ¢åŒãšã¡ããªã¯ã¹ã確ç«ããããšã§ããããããããŒãã«ãžã®ã¿ã¹ã¯ãæšæºåãããã®ã¿ã¹ã¯ã§ DePlot ããšã³ãããŒãšã³ãã§ãã¬ãŒãã³ã°ããŸããDePlot ã¯ããã©ã°ã¢ã³ããã¬ã€æ¹åŒã§ LLM ãšãšãã«æ¢è£œã§äœ¿çšã§ããŸãã28,000 ãè¶
ããããŒã¿ ãã€ã³ãã§åŸ®èª¿æŽããã SOTA ã¢ãã«ãšæ¯èŒããŠãã¯ã³ã·ã§ãã ããã³ããã®ã¿ã䜿çšãã DePlot+LLM ã¯ããã£ãŒã QA ã¿ã¹ã¯ããã®äººãäœæããã¯ãšãªã«é¢ããŠã埮調æŽããã SOTA ãã 24.0% ã®æ¹åãéæããŸããã*
DePlot ã¯ã`Pix2Struct` ã¢ãŒããã¯ãã£ã䜿çšããŠãã¬ãŒãã³ã°ãããã¢ãã«ã§ãã`Pix2Struct` ã®è©³çŽ°ã«ã€ããŠã¯ã[Pix2Struct ããã¥ã¡ã³ã](https://huggingface.co/docs/transformers/main/en/model_doc/pix2struct) ãåç
§ããŠãã ããã
DePlot ã¯ã`Pix2Struct` ã¢ãŒããã¯ãã£ã® Visual Question Answering ãµãã»ããã§ããå
¥åããã質åãç»åäžã«ã¬ã³ããªã³ã°ããçããäºæž¬ããŸãã
## Usage example
çŸåšãDePlot ã§äœ¿çšã§ãããã§ãã¯ãã€ã³ã㯠1 ã€ã§ãã
- `google/deplot`: ChartQA ããŒã¿ã»ããã§åŸ®èª¿æŽããã DePlot
```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
## Fine-tuning
DePlot ã埮調æŽããã«ã¯ãpix2struct [埮調æŽããŒãããã¯](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_pix2struct.ipynb) ãåç
§ããŠãã ããã`Pix2Struct` ã¢ãã«ã®å ŽåãAdafactor ãšã³ãµã€ã³åŠç¿çã¹ã±ãžã¥ãŒã©ã䜿çšããŠã¢ãã«ã埮調æŽãããšãåæãé«éåãããããšãããããŸããã
```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```
<Tip>
DePlot ã¯ã`Pix2Struct` ã¢ãŒããã¯ãã£ã䜿çšããŠãã¬ãŒãã³ã°ãããã¢ãã«ã§ããAPI ãªãã¡ã¬ã³ã¹ã«ã€ããŠã¯ã[`Pix2Struct` ããã¥ã¡ã³ã](pix2struct) ãåç
§ããŠãã ããã
</Tip> | 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/decision_transformer.md | <!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Decision Transformer
## Overview
Decision Transformer ã¢ãã«ã¯ãLili ChenãKevin LuãAravind RajeswaranãKimin LeeãAditya GroverãMichael LaskinãPieter AbbeelãAravind SrinivasãIgor Mordatch ã«ãã£ãŠ [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*匷ååŠç¿ïŒRLïŒãã·ãŒã±ã³ã¹ã¢ããªã³ã°åé¡ãšããŠæœè±¡åãããã¬ãŒã ã¯ãŒã¯ã玹ä»ããŸããããã«ãããGPT-x ã BERT ãªã©ã®èšèªã¢ããªã³ã°ã«ãããTransformer ã¢ãŒããã¯ãã£ã®ã·ã³ãã«ããšã¹ã±ãŒã©ããªãã£ãããã³é¢é£ããé²æ©ã掻çšã§ããããã«ãªããŸããç¹ã«ãRL ã®åé¡ãæ¡ä»¶ä»ãã·ãŒã±ã³ã¹ ã¢ããªã³ã°ãšããŠæ±ããDecision Transformer ãšããã¢ãŒããã¯ãã£ã玹ä»ããŸããäŸ¡å€é¢æ°ãé©åãããããããªã·ãŒåŸé
ãèšç®ãããã以åã® RL ã¢ãããŒããšã¯ç°ãªããDecision Transformer ã¯ãå æçã«ãã¹ã¯ãããå€æåš (Transformer) ãå©çšããŠæé©ãªã¢ã¯ã·ã§ã³ãåºåããã ãã§ããæãŸãããªã¿ãŒã³ (å ±é
¬)ãéå»ã®ç¶æ
ãã¢ã¯ã·ã§ã³ã«åºã¥ããŠèªå·±ååž°ã¢ãã«ãæ¡ä»¶ä»ãããããšã«ãããDecision Transformer ã¢ãã«ã¯ãæãŸãããªã¿ãŒã³ãéæããå°æ¥ã®ã¢ã¯ã·ã§ã³ãçæã§ããŸãããã®ã·ã³ãã«ãã«ãé¢ããããDecision Transformer ã¯ãAtariãOpenAI GymãKey-to-Door ã¿ã¹ã¯ã«ãããŠãæå
端ã®ã¢ãã«ããªãŒã®ãªãã©ã€ã³ RL ããŒã¹ã©ã€ã³ã®ããã©ãŒãã³ã¹ãšåçããŸãã¯ãããè¶
ããŸãã*
ãã®ããŒãžã§ã³ã®ã¢ãã«ã¯ãç¶æ
ããã¯ãã«ã§ããã¿ã¹ã¯çšã§ãã
ãã®ã¢ãã«ã¯ã[edbeeching](https://huggingface.co/edbeeching) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/kzl/decision-transformer) ã«ãããŸãã
## DecisionTransformerConfig
[[autodoc]] DecisionTransformerConfig
## DecisionTransformerGPT2Model
[[autodoc]] DecisionTransformerGPT2Model
- forward
## DecisionTransformerModel
[[autodoc]] DecisionTransformerModel
- forward
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/clvp.md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLVP
## Overview
CLVP (Contrastive Language-Voice Pretrained Transformer) ã¢ãã«ã¯ãJames Betker ã«ãã£ãŠ [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*è¿å¹Žãç»åçæã®åéã¯èªå·±ååž°å€æåšãš DDPM ã®å¿çšã«ãã£ãŠé©åœãèµ·ãããŠããŸãããããã®ã¢ãããŒãã¯ãç»åçæã®ããã»ã¹ã段éçãªç¢ºççããã»ã¹ãšããŠã¢ãã«åãã倧éã®ã³ã³ãã¥ãŒãã£ã³ã°ãšããŒã¿ã掻çšããŠç»åã®ååžãåŠç¿ããŸããããã©ãŒãã³ã¹ãåäžããããã®æ¹æ³è«ã¯ãç»åã«éå®ãããå¿
èŠã¯ãããŸããããã®è«æã§ã¯ãç»åçæãã¡ã€ã³ã®é²æ©ãé³å£°åæã«é©çšããæ¹æ³ã«ã€ããŠèª¬æããŸãããã®çµæãè¡šçŸåè±ããªãã«ãé³å£°ããã¹ãèªã¿äžãã·ã¹ãã ã§ãã TorToise ãèªçããŸããã*
ãã®ã¢ãã«ã¯ [Susnato Dhar](https://huggingface.co/susnato) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/neonbjb/tortoise-tts) ã«ãããŸãã
## Usage tips
1. CLVP 㯠Tortoise TTS ã¢ãã«ã®äžå¯æ¬ ãªéšåã§ãã
2. CLVP ã䜿çšããŠãçæãããããŸããŸãªé³å£°åè£ãæäŸãããããã¹ããšæ¯èŒããããšãã§ããæè¯ã®é³å£°ããŒã¯ã³ãæ¡æ£ã¢ãã«ã«è»¢éãããŸãã
3. Tortoise ã®äœ¿çšã«ã¯ã[`ClvpModelForConditionalGeneration.generate()`] ã¡ãœããã®äœ¿çšã匷ããå§ãããŸãã
4. 16 kHz ãæåŸ
ããä»ã®ãªãŒãã£ãª ã¢ãã«ãšã¯å¯Ÿç
§çã«ãCLVP ã¢ãã«ã¯ãªãŒãã£ãªã 22.05 kHz ã§ãµã³ããªã³ã°ãããŠããããšãæåŸ
ããç¹ã«æ³šæããŠãã ããã
## Brief Explanation:
- [`ClvpTokenizer`] ã¯ããã¹ãå
¥åãããŒã¯ã³åãã[`ClvpFeatureExtractor`] ã¯ç®çã®ãªãŒãã£ãªãããã° ã¡ã« ã¹ãã¯ããã°ã©ã ãæœåºããŸãã
- [`ClvpConditioningEncoder`] ã¯ããããã®ããã¹ã ããŒã¯ã³ãšãªãŒãã£ãªè¡šçŸãååŸããããã¹ããšãªãŒãã£ãªã«åºã¥ããŠæ¡ä»¶ä»ããããåã蟌ã¿ã«å€æããŸãã
- [`ClvpForCausalLM`] ã¯ããããã®åã蟌ã¿ã䜿çšããŠè€æ°ã®é³å£°åè£ãçæããŸãã
- åé³å£°åè£ã¯é³å£°ãšã³ã³ãŒã ([`ClvpEncoder`]) ãééããŠãã¯ãã«è¡šçŸã«å€æãããããã¹ã ãšã³ã³ãŒã ([`ClvpEncoder`]) ã¯ããã¹ã ããŒã¯ã³ãåãæœåšç©ºéã«å€æããŸãã
- æåŸã«ãåé³å£°ãã¯ãã«ãããã¹ã ãã¯ãã«ãšæ¯èŒããŠãã©ã®é³å£°ãã¯ãã«ãããã¹ã ãã¯ãã«ã«æãé¡äŒŒããŠãããã確èªããŸãã
- [`ClvpModelForConditionalGeneration.generate()`] ã¯ãäžèšã®ãã¹ãŠã®ããžãã¯ã 1 ã€ã®ã¡ãœããã«å§çž®ããŸãã
äŸ ïŒ
```python
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration
>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library).
>>> text = "This is an example text."
>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]
>>> # Define processor and model.
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")
>>> # Generate processor output and model output.
>>> processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt")
>>> generated_output = model.generate(**processor_output)
```
## ClvpConfig
[[autodoc]] ClvpConfig
- from_sub_model_configs
## ClvpEncoderConfig
[[autodoc]] ClvpEncoderConfig
## ClvpDecoderConfig
[[autodoc]] ClvpDecoderConfig
## ClvpTokenizer
[[autodoc]] ClvpTokenizer
- save_vocabulary
## ClvpFeatureExtractor
[[autodoc]] ClvpFeatureExtractor
- __call__
## ClvpProcessor
[[autodoc]] ClvpProcessor
- __call__
- decode
- batch_decode
## ClvpModelForConditionalGeneration
[[autodoc]] ClvpModelForConditionalGeneration
- forward
- generate
- get_text_features
- get_speech_features
## ClvpForCausalLM
[[autodoc]] ClvpForCausalLM
## ClvpModel
[[autodoc]] ClvpModel
## ClvpEncoder
[[autodoc]] ClvpEncoder
## ClvpDecoder
[[autodoc]] ClvpDecoder
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/byt5.md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ByT5
## Overview
ByT5 ã¢ãã«ã¯ãLinting XueãAditya BaruaãNoah ConstantãRami Al-RfouãSharan NarangãMihir KaleãAdam RobertsãColin Raffel ã«ãã£ãŠ [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æãåºã䜿çšãããŠããäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã¯ãåèªãŸãã¯ãµãã¯ãŒãåäœã«å¯Ÿå¿ããããŒã¯ã³ã®ã·ãŒã±ã³ã¹äžã§åäœããŸããããã¹ããããŒã¯ã³ã®ã·ãŒã±ã³ã¹ãšããŠãšã³ã³ãŒãããã«ã¯ããŒã¯ãã€ã¶ãŒãå¿
èŠã§ãããããŒã¯ãã€ã¶ãŒã¯éåžžãã¢ãã«ãšã¯ç¬ç«ããæ¢çç©ãšããŠäœæãããŸãã代ããã«çã®ããã¹ã (ãã€ããŸãã¯æå) ãçŽæ¥æäœããããŒã¯ã³ããªãŒ ã¢ãã«ã«ã¯å€ãã®å©ç¹ããããŸãããããã¯ãã©ã®èšèªã®ããã¹ãã§ãããã«åŠçã§ãããã€ãºã«å¯ŸããŠããå
ç¢ã§ãããè€éã§ãšã©ãŒãçºçããããããã¹ãååŠçãã€ãã©ã€ã³ãåé€ããããšã§æè¡çè² åµãæå°éã«æããŸãããã€ããæåã®ã·ãŒã±ã³ã¹ã¯ããŒã¯ã³ã®ã·ãŒã±ã³ã¹ããé·ããããããŒã¯ã³ããªãŒ ã¢ãã«ã«é¢ããéå»ã®ç 究ã§ã¯ãçã®ããã¹ããçŽæ¥æäœããã³ã¹ããååŽããããã«èšèšãããæ°ããã¢ãã« ã¢ãŒããã¯ãã£ãå°å
¥ãããããšããããããŸããããã®è«æã§ã¯ãæšæºç㪠Transformer ã¢ãŒããã¯ãã£ãæå°éã®å€æŽã§ãã€ãã·ãŒã±ã³ã¹ã®åŠçã«äœ¿çšã§ããããšã瀺ããŸãããã©ã¡ãŒã¿æ°ããã¬ãŒãã³ã° FLOPãæšè«é床ã®èŠ³ç¹ãããã¬ãŒããªãã泚ææ·±ãç¹åŸŽä»ãããã€ãã¬ãã«ã®ã¢ãã«ãããŒã¯ã³ã¬ãã«ã®å¯Ÿå¿ã¢ãã«ãšç«¶åã§ããããšã瀺ããŸãããŸãããã€ãã¬ãã«ã®ã¢ãã«ã¯ãã€ãºã«å¯ŸããŠå€§å¹
ã«å
ç¢ã§ãããã¹ãã«ãšçºé³ã«ææãªã¿ã¹ã¯ã§åªããããã©ãŒãã³ã¹ãçºæ®ããããšã瀺ããŸããç§ãã¡ã®è²¢ç®ã®äžç°ãšããŠãT5 ã¢ãŒããã¯ãã£ã«åºã¥ãäºåãã¬ãŒãã³ã°æžã¿ã®ãã€ãã¬ãã« Transformer ã¢ãã«ã®æ°ããã»ãããšãå®éšã§äœ¿çšãããã¹ãŠã®ã³ãŒããšããŒã¿ããªãªãŒã¹ããŸãã*
ãã®ã¢ãã«ã¯ã[patrickvonplaten](https://huggingface.co/patrickvonplaten) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã](https://github.com/google-research/byt5) ã«ãããŸãã
<Tip>
ByT5 ã®ã¢ãŒããã¯ãã£ã¯ T5v1.1 ã¢ãã«ã«åºã¥ããŠããŸããAPI ãªãã¡ã¬ã³ã¹ã«ã€ããŠã¯ã[T5v1.1 ã®ããã¥ã¡ã³ã ããŒãž](t5v1.1) ãåç
§ããŠãã ããã䞡è
ã¯ãã¢ãã«ã®å
¥åãæºåããæ¹æ³ãç°ãªãã ãã§ãã以äžã®ã³ãŒãäŸãåç
§ããŠãã ããã
</Tip>
ByT5 ã¯æåž«ãªãã§äºåãã¬ãŒãã³ã°ãããŠãããããåäžã¿ã¹ã¯ã®åŸ®èª¿æŽäžã«ã¿ã¹ã¯ ãã¬ãã£ãã¯ã¹ã䜿çšããŠãå®è³ªçãªå©ç¹ã¯ãããŸããããã«ãã¿ã¹ã¯ã®åŸ®èª¿æŽãè¡ãå Žåã¯ããã¬ãã£ãã¯ã¹ã䜿çšããå¿
èŠããããŸãã
## Usage Examples
ByT5 ã¯çã® UTF-8 ãã€ãã§åäœãããããããŒã¯ãã€ã¶ãŒãªãã§äœ¿çšã§ããŸãã
```python
>>> from transformers import T5ForConditionalGeneration
>>> import torch
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> num_special_tokens = 3
>>> # Model has 3 special tokens which take up the input ids 0,1,2 of ByT5.
>>> # => Need to shift utf-8 character encodings by 3 before passing ids to model.
>>> input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
>>> labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens
>>> loss = model(input_ids, labels=labels).loss
>>> loss.item()
2.66
```
ãã ãããããæšè«ãšãã¬ãŒãã³ã°ã®å Žåã¯ãããŒã¯ãã€ã¶ãŒã䜿çšããããšããå§ãããŸãã
```python
>>> from transformers import T5ForConditionalGeneration, AutoTokenizer
>>> model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
>>> model_inputs = tokenizer(
... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt"
... )
>>> labels_dict = tokenizer(
... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt"
... )
>>> labels = labels_dict.input_ids
>>> loss = model(**model_inputs, labels=labels).loss
>>> loss.item()
17.9
```
[T5](t5) ãšåæ§ã«ãByT5 ã¯ã¹ãã³ãã¹ã¯ãã€ãºé€å»ã¿ã¹ã¯ã§ãã¬ãŒãã³ã°ãããŸããããã ããã¢ãã«ã¯æåã«çŽæ¥äœçšãããããäºåãã¬ãŒãã³ã°ã¿ã¹ã¯ã¯å°ãç°ãªããŸããå
¥åæ `"The dog chases a ball in the park."` ã®ããã€ãã®æåãç ŽæãããŠãByT5 ã«ããããäºæž¬ãããŠã¿ãŸãããã
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
>>> input_ids_prompt = "The dog chases a ball in the park."
>>> input_ids = tokenizer(input_ids_prompt).input_ids
>>> # Note that we cannot add "{extra_id_...}" to the string directly
>>> # as the Byte tokenizer would incorrectly merge the tokens
>>> # For ByT5, we need to work directly on the character level
>>> # Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead
>>> # uses final utf character ids.
>>> # UTF-8 is represented by 8 bits and ByT5 has 3 special tokens.
>>> # => There are 2**8+2 = 259 input ids and mask tokens count down from index 258.
>>> # => mask to "The dog [258]a ball [257]park."
>>> input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
>>> input_ids
tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]])
>>> # ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`.
>>> output_ids = model.generate(input_ids, max_length=100)[0].tolist()
>>> output_ids
[0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49]
>>> # ^- Note how 258 descends to 257, 256, 255
>>> # Now we need to split on the sentinel tokens, let's write a short loop for this
>>> output_ids_list = []
>>> start_token = 0
>>> sentinel_token = 258
>>> while sentinel_token in output_ids:
... split_idx = output_ids.index(sentinel_token)
... output_ids_list.append(output_ids[start_token:split_idx])
... start_token = split_idx
... sentinel_token -= 1
>>> output_ids_list.append(output_ids[start_token:])
>>> output_string = tokenizer.batch_decode(output_ids_list)
>>> output_string
['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.']
```
## ByT5Tokenizer
[[autodoc]] ByT5Tokenizer
詳现ã«ã€ããŠã¯ã[`ByT5Tokenizer`] ãåç
§ããŠãã ããã
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/auto.md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Auto Classes
å€ãã®å Žåã`from_pretrained()`ã¡ãœããã«äžããããäºååŠç¿æžã¿ã¢ãã«ã®ååããã¹ããã䜿çšãããã¢ãŒããã¯ãã£ãæšæž¬ããããšãã§ããŸããèªåã¯ã©ã¹ã¯ãã®ä»äºãããªãã«ä»£ãã£ãŠè¡ãããã«ããã«ãããŸãã®ã§ãäºååŠç¿æžã¿ã®éã¿/èšå®/èªåœãžã®åå/ãã¹ãäžãããšèªåçã«é¢é£ããã¢ãã«ãååŸã§ããŸãã
[`AutoConfig`]ã[`AutoModel`]ã[`AutoTokenizer`]ã®ãããããã€ã³ã¹ã¿ã³ã¹åãããšãé¢é£ããã¢ãŒããã¯ãã£ã®ã¯ã©ã¹ãçŽæ¥äœæãããŸããäŸãã°ã
```python
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
```
ããã¯[`BertModel`]ã®ã€ã³ã¹ã¿ã³ã¹ã§ããã¢ãã«ãäœæããŸãã
åã¿ã¹ã¯ããšããããŠåããã¯ãšã³ãïŒPyTorchãTensorFlowããŸãã¯FlaxïŒããšã«`AutoModel`ã®ã¯ã©ã¹ãååšããŸãã
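ããšãã°ãããã¹ãåé¡çšã«åŸ®èª¿æŽããããã§ãã¯ãã€ã³ããèªåã¯ã©ã¹ã§ããŒãããŠæšè«ããäŸã¯æ¬¡ã®ãšããã§ãïŒãã§ãã¯ãã€ã³ãåãšå
¥åæã¯äžäŸã§ãïŒã
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# ãã§ãã¯ãã€ã³ãã®èšå®ããé©åãªã¢ãã«ã¯ã©ã¹ãèªåçã«éžæããã
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("I love this library!", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```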
## èªåã¯ã©ã¹ã®æ¡åŒµ
ããããã®èªåã¯ã©ã¹ã«ã¯ãã«ã¹ã¿ã ã¯ã©ã¹ã§æ¡åŒµããããã®ã¡ãœããããããŸããäŸãã°ã`NewModel`ãšããã¢ãã«ã®ã«ã¹ã¿ã ã¯ã©ã¹ãå®çŸ©ããå Žåã`NewModelConfig`ã確ä¿ããŠããã°ãã®ããã«ããŠèªåã¯ã©ã¹ã«è¿œå ããããšãã§ããŸãïŒ
```python
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
```
ãã®åŸãéåžžã©ããauto classesã䜿çšããããšãã§ããããã«ãªããŸãïŒ
<Tip warning={true}>
ããªãã®`NewModelConfig`ã[`~transformers.PretrainedConfig`]ã®ãµãã¯ã©ã¹ã§ããå Žåããã®`model_type`å±æ§ãã³ã³ãã£ã°ãç»é²ãããšãã«äœ¿çšããããŒïŒããã§ã¯`"new-model"`ïŒãšåãã«èšå®ãããŠããããšã確èªããŠãã ããã
åæ§ã«ãããªãã®`NewModel`ã[`PreTrainedModel`]ã®ãµãã¯ã©ã¹ã§ããå Žåããã®`config_class`å±æ§ãã¢ãã«ãç»é²ããéã«äœ¿çšããã¯ã©ã¹ïŒããã§ã¯`NewModelConfig`ïŒãšåãã«èšå®ãããŠããããšã確èªããŠãã ããã
</Tip>
## AutoConfig
[[autodoc]] AutoConfig
## AutoTokenizer
[[autodoc]] AutoTokenizer
## AutoFeatureExtractor
[[autodoc]] AutoFeatureExtractor
## AutoImageProcessor
[[autodoc]] AutoImageProcessor
## AutoProcessor
[[autodoc]] AutoProcessor
## Generic model classes
以äžã®èªåã¯ã©ã¹ã¯ãç¹å®ã®ããããæããªãããŒã¹ã¢ãã«ã¯ã©ã¹ãã€ã³ã¹ã¿ã³ã¹åããããã«å©çšå¯èœã§ãã
### AutoModel
[[autodoc]] AutoModel
### TFAutoModel
[[autodoc]] TFAutoModel
### FlaxAutoModel
[[autodoc]] FlaxAutoModel
## Generic pretraining classes
以äžã®èªåã¯ã©ã¹ã¯ãäºååŠç¿ããããæã€ã¢ãã«ãã€ã³ã¹ã¿ã³ã¹åããããã«å©çšå¯èœã§ãã
### AutoModelForPreTraining
[[autodoc]] AutoModelForPreTraining
### TFAutoModelForPreTraining
[[autodoc]] TFAutoModelForPreTraining
### FlaxAutoModelForPreTraining
[[autodoc]] FlaxAutoModelForPreTraining
## Natural Language Processing
以äžã®èªåã¯ã©ã¹ã¯ã次ã®èªç¶èšèªåŠçã¿ã¹ã¯ã«å©çšå¯èœã§ãã
### AutoModelForCausalLM
[[autodoc]] AutoModelForCausalLM
### TFAutoModelForCausalLM
[[autodoc]] TFAutoModelForCausalLM
### FlaxAutoModelForCausalLM
[[autodoc]] FlaxAutoModelForCausalLM
### AutoModelForMaskedLM
[[autodoc]] AutoModelForMaskedLM
### TFAutoModelForMaskedLM
[[autodoc]] TFAutoModelForMaskedLM
### FlaxAutoModelForMaskedLM
[[autodoc]] FlaxAutoModelForMaskedLM
### AutoModelForMaskGeneration
[[autodoc]] AutoModelForMaskGeneration
### TFAutoModelForMaskGeneration
[[autodoc]] TFAutoModelForMaskGeneration
### AutoModelForSeq2SeqLM
[[autodoc]] AutoModelForSeq2SeqLM
### TFAutoModelForSeq2SeqLM
[[autodoc]] TFAutoModelForSeq2SeqLM
### FlaxAutoModelForSeq2SeqLM
[[autodoc]] FlaxAutoModelForSeq2SeqLM
### AutoModelForSequenceClassification
[[autodoc]] AutoModelForSequenceClassification
### TFAutoModelForSequenceClassification
[[autodoc]] TFAutoModelForSequenceClassification
### FlaxAutoModelForSequenceClassification
[[autodoc]] FlaxAutoModelForSequenceClassification
### AutoModelForMultipleChoice
[[autodoc]] AutoModelForMultipleChoice
### TFAutoModelForMultipleChoice
[[autodoc]] TFAutoModelForMultipleChoice
### FlaxAutoModelForMultipleChoice
[[autodoc]] FlaxAutoModelForMultipleChoice
### AutoModelForNextSentencePrediction
[[autodoc]] AutoModelForNextSentencePrediction
### TFAutoModelForNextSentencePrediction
[[autodoc]] TFAutoModelForNextSentencePrediction
### FlaxAutoModelForNextSentencePrediction
[[autodoc]] FlaxAutoModelForNextSentencePrediction
### AutoModelForTokenClassification
[[autodoc]] AutoModelForTokenClassification
### TFAutoModelForTokenClassification
[[autodoc]] TFAutoModelForTokenClassification
### FlaxAutoModelForTokenClassification
[[autodoc]] FlaxAutoModelForTokenClassification
### AutoModelForQuestionAnswering
[[autodoc]] AutoModelForQuestionAnswering
### TFAutoModelForQuestionAnswering
[[autodoc]] TFAutoModelForQuestionAnswering
### FlaxAutoModelForQuestionAnswering
[[autodoc]] FlaxAutoModelForQuestionAnswering
### AutoModelForTextEncoding
[[autodoc]] AutoModelForTextEncoding
### TFAutoModelForTextEncoding
[[autodoc]] TFAutoModelForTextEncoding
## Computer vision
以äžã®èªåã¯ã©ã¹ã¯ã次ã®ã³ã³ãã¥ãŒã¿ãŒããžã§ã³ã¿ã¹ã¯ã«å©çšå¯èœã§ãã
### AutoModelForDepthEstimation
[[autodoc]] AutoModelForDepthEstimation
### AutoModelForImageClassification
[[autodoc]] AutoModelForImageClassification
### TFAutoModelForImageClassification
[[autodoc]] TFAutoModelForImageClassification
### FlaxAutoModelForImageClassification
[[autodoc]] FlaxAutoModelForImageClassification
### AutoModelForVideoClassification
[[autodoc]] AutoModelForVideoClassification
### AutoModelForMaskedImageModeling
[[autodoc]] AutoModelForMaskedImageModeling
### TFAutoModelForMaskedImageModeling
[[autodoc]] TFAutoModelForMaskedImageModeling
### AutoModelForObjectDetection
[[autodoc]] AutoModelForObjectDetection
### AutoModelForImageSegmentation
[[autodoc]] AutoModelForImageSegmentation
### AutoModelForImageToImage
[[autodoc]] AutoModelForImageToImage
### AutoModelForSemanticSegmentation
[[autodoc]] AutoModelForSemanticSegmentation
### TFAutoModelForSemanticSegmentation
[[autodoc]] TFAutoModelForSemanticSegmentation
### AutoModelForInstanceSegmentation
[[autodoc]] AutoModelForInstanceSegmentation
### AutoModelForUniversalSegmentation
[[autodoc]] AutoModelForUniversalSegmentation
### AutoModelForZeroShotImageClassification
[[autodoc]] AutoModelForZeroShotImageClassification
### TFAutoModelForZeroShotImageClassification
[[autodoc]] TFAutoModelForZeroShotImageClassification
### AutoModelForZeroShotObjectDetection
[[autodoc]] AutoModelForZeroShotObjectDetection
## Audio
以äžã®èªåã¯ã©ã¹ã¯ã次ã®é³å£°ã¿ã¹ã¯ã«å©çšå¯èœã§ãã
### AutoModelForAudioClassification
[[autodoc]] AutoModelForAudioClassification
### TFAutoModelForAudioClassification
[[autodoc]] TFAutoModelForAudioClassification
### AutoModelForAudioFrameClassification
[[autodoc]] AutoModelForAudioFrameClassification
### AutoModelForCTC
[[autodoc]] AutoModelForCTC
### AutoModelForSpeechSeq2Seq
[[autodoc]] AutoModelForSpeechSeq2Seq
### TFAutoModelForSpeechSeq2Seq
[[autodoc]] TFAutoModelForSpeechSeq2Seq
### FlaxAutoModelForSpeechSeq2Seq
[[autodoc]] FlaxAutoModelForSpeechSeq2Seq
### AutoModelForAudioXVector
[[autodoc]] AutoModelForAudioXVector
### AutoModelForTextToSpectrogram
[[autodoc]] AutoModelForTextToSpectrogram
### AutoModelForTextToWaveform
[[autodoc]] AutoModelForTextToWaveform
## Multimodal
以äžã®èªåã¯ã©ã¹ã¯ã次ã®ãã«ãã¢ãŒãã«ã¿ã¹ã¯ã«å©çšå¯èœã§ãã
### AutoModelForTableQuestionAnswering
[[autodoc]] AutoModelForTableQuestionAnswering
### TFAutoModelForTableQuestionAnswering
[[autodoc]] TFAutoModelForTableQuestionAnswering
### AutoModelForDocumentQuestionAnswering
[[autodoc]] AutoModelForDocumentQuestionAnswering
### TFAutoModelForDocumentQuestionAnswering
[[autodoc]] TFAutoModelForDocumentQuestionAnswering
### AutoModelForVisualQuestionAnswering
[[autodoc]] AutoModelForVisualQuestionAnswering
### AutoModelForVision2Seq
[[autodoc]] AutoModelForVision2Seq
### TFAutoModelForVision2Seq
[[autodoc]] TFAutoModelForVision2Seq
### FlaxAutoModelForVision2Seq
[[autodoc]] FlaxAutoModelForVision2Seq
| 0 |
mavonic_private_repos/transformers/docs/source/ja | mavonic_private_repos/transformers/docs/source/ja/model_doc/big_bird.md | <!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BigBird
## Overview
BigBird ã¢ãã«ã¯ãManzil ZaheerãGuru GuruganeshãAvinava DubeyãJoshua AinslieãChris AlbertiãSantiago OntanonãPhilip PhamãAnirudh RavulaãQifan WangãLi YangãAmr Ahmed ã«ãã£ãŠ [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) ã§ææ¡ãããŸãããBigBird ã¯ããŸã°ã泚æã«åºã¥ããã©ã³ã¹ãã©ãŒããŒã§ãBERT ãªã©ã® Transformer ããŒã¹ã®ã¢ãã«ãã¯ããã«é·ãã·ãŒã±ã³ã¹ãžãšæ¡åŒµããŸãããŸã°ã泚æã«å ããŠãBigBird ã¯å
¥åã·ãŒã±ã³ã¹ã«ã°ããŒãã« ã¢ãã³ã·ã§ã³ãšã©ã³ãã  ã¢ãã³ã·ã§ã³ãé©çšããŸããçè«çã«ã¯ããŸã°ãã»ã°ããŒãã«ã»ã©ã³ãã ãªæ³šæãçµã¿åããããšå®å
šãªæ³šæã«è¿äŒŒããããšã瀺ãããŠããäžæ¹ã§ãé·ãã·ãŒã±ã³ã¹ã§ã¯èšç®å¹çã倧å¹
ã«åäžããŸããé·ãã³ã³ããã¹ããåŠçã§ããèœåã®çµæãšããŠãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã«ãããŠãBERT ãŸã㯠RoBERTa ãšæ¯èŒããŠããã©ãŒãã³ã¹ãåäžãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BERT ãªã©ã®ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ã¢ãã«ã¯ãNLP ã§æãæåãã深局åŠç¿ã¢ãã«ã® 1 ã€ã§ãã
æ®å¿µãªããããããã®äžæ žçãªå¶éã® 1 ã€ã¯ãå®å
šãªæ³šæã¡ã«ããºã ã«ãããã·ãŒã±ã³ã¹é·ã«å¯Ÿããäºæ¬¡ã®äŸåæ§ïŒäž»ã«ã¡ã¢ãªã«é¢ããïŒã§ãã
ããã解決ããããã«ãBigBird ã¯ããã®äºæ¬¡ã®äŸåé¢ä¿ãç·åœ¢ã«åæžãããŸã°ããªæ³šæã¡ã«ããºã ãææ¡ããŸãã
BigBird ã¯ã·ãŒã±ã³ã¹é¢æ°ã®æ±çšè¿äŒŒåšã§ããããã¥ãŒãªã³ã°å®å
šã§ããããããäºæ¬¡ã®å®å
šæ³šæã¢ãã«ã®ãããã®ç¹æ§ãä¿æãããŸãã
ãã®éçšã§ãç§ãã¡ã®çè«åæã«ãããO(1) åã®ã°ããŒãã« ããŒã¯ã³ïŒCLS ãªã©ïŒãã¹ããŒã¹æ³šæã¡ã«ããºã ã®äžéšãšããŠ
ã·ãŒã±ã³ã¹å
šäœã«æ³šæãåããããšã®å©ç¹ã®äžéšãæããã«ãªããŸããææ¡ãããã¹ããŒã¹ ã¢ãã³ã·ã§ã³ã¯ã
åæ§ã®ããŒããŠã§ã¢ã䜿çšããŠã以åå¯èœã ã£ããã®ã® 8 åã®é·ãã®ã·ãŒã±ã³ã¹ãåŠçã§ããŸããããé·ãã³ã³ããã¹ããåŠçã§ããããã«ãªã£ãçµæã
BigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžãããŸããããã«ã
ã²ããã¯ã¹ ããŒã¿ãžã®æ°ããã¢ããªã±ãŒã·ã§ã³ãææ¡ããŸãã*
ãããïŒ

- BigBird ã®æ³šæãã©ã®ããã«æ©èœãããã«ã€ããŠã®è©³çŽ°ãªèª¬æã¯ã[ãã®ããã°æçš¿](https://huggingface.co/blog/big-bird) ãåç
§ããŠãã ããã
- BigBird ã«ã¯ã**original_full** ãš **block_sparse** ã® 2 ã€ã®å®è£
ãä»å±ããŠããŸããã·ãŒã±ã³ã¹é·ã 1024 æªæºã®å Žåã**block_sparse** ã䜿çšããŠãå©ç¹ããªãããã**original_full** ã®äœ¿çšããå§ãããŸãïŒä»¥äžã®ã³ãŒãäŸãåç
§ïŒã
- ã³ãŒãã¯çŸåšã3 ãããã¯ãš 2 ã°ããŒãã« ãããã¯ã®ãŠã£ã³ã㊠ãµã€ãºã䜿çšããŠããŸãã
- ã·ãŒã±ã³ã¹ã®é·ãã¯ããã㯠ãµã€ãºã§å²ãåããå¿
èŠããããŸãã
- çŸåšã®å®è£
ã§ã¯ **ITC** ã®ã¿ããµããŒããããŠããŸãã
- çŸåšã®å®è£
ã§ã¯ **num_random_blocks = 0** ã¯ãµããŒããããŠããŸããã
- BigBird ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
ãã®ã¢ãã«ã¯ [vasudevgupta](https://huggingface.co/vasudevgupta) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/bigbird) ã«ãããŸãã
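以äžã¯ãã¢ãã³ã·ã§ã³ã®å®è£
ã `attention_type` ã§åãæ¿ããæ¹æ³ã瀺ãæå°éã®ã¹ã±ããã§ãïŒãã§ãã¯ãã€ã³ãå `google/bigbird-roberta-base` ã¯äžäŸã§ãïŒã

```python
from transformers import BigBirdModel

# ããã©ã«ãã§ã¯ block_sparse ã¢ãã³ã·ã§ã³ã§ããŒããããŸã
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")

# ã·ãŒã±ã³ã¹é·ã 1024 æªæºã®å Žå㯠original_full (å®å
šãªã¢ãã³ã·ã§ã³) ãæšå¥šãããŸã
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")
```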
## ããã¥ã¡ã³ã ãªãœãŒã¹
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [ãã¹ã¯ãããèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## BigBirdConfig
[[autodoc]] BigBirdConfig
## BigBirdTokenizer
[[autodoc]] BigBirdTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## BigBirdTokenizerFast
[[autodoc]] BigBirdTokenizerFast
## BigBird specific outputs
[[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
<frameworkcontent>
<pt>
## BigBirdModel
[[autodoc]] BigBirdModel
- forward
## BigBirdForPreTraining
[[autodoc]] BigBirdForPreTraining
- forward
## BigBirdForCausalLM
[[autodoc]] BigBirdForCausalLM
- forward
## BigBirdForMaskedLM
[[autodoc]] BigBirdForMaskedLM
- forward
## BigBirdForSequenceClassification
[[autodoc]] BigBirdForSequenceClassification
- forward
## BigBirdForMultipleChoice
[[autodoc]] BigBirdForMultipleChoice
- forward
## BigBirdForTokenClassification
[[autodoc]] BigBirdForTokenClassification
- forward
## BigBirdForQuestionAnswering
[[autodoc]] BigBirdForQuestionAnswering
- forward
</pt>
<jax>
## FlaxBigBirdModel
[[autodoc]] FlaxBigBirdModel
- __call__
## FlaxBigBirdForPreTraining
[[autodoc]] FlaxBigBirdForPreTraining
- __call__
## FlaxBigBirdForCausalLM
[[autodoc]] FlaxBigBirdForCausalLM
- __call__
## FlaxBigBirdForMaskedLM
[[autodoc]] FlaxBigBirdForMaskedLM
- __call__
## FlaxBigBirdForSequenceClassification
[[autodoc]] FlaxBigBirdForSequenceClassification
- __call__
## FlaxBigBirdForMultipleChoice
[[autodoc]] FlaxBigBirdForMultipleChoice
- __call__
## FlaxBigBirdForTokenClassification
[[autodoc]] FlaxBigBirdForTokenClassification
- __call__
## FlaxBigBirdForQuestionAnswering
[[autodoc]] FlaxBigBirdForQuestionAnswering
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BioGPT
## Overview
BioGPT ã¢ãã«ã¯ãRenqian LuoãLiai SunãYingce XiaãTao QinãSheng ZhangãHoifung PoonãTie-Yan Liu ã«ãã£ãŠ [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) ã§ææ¡ãããŸãããBioGPT ã¯ãçç©å»åŠããã¹ãã®çæãšãã€ãã³ã°ã®ããã®ããã¡ã€ã³åºæã®çæçäºåãã¬ãŒãã³ã°æžã¿ Transformer èšèªã¢ãã«ã§ããBioGPT 㯠Transformer èšèªã¢ãã«ã®ããã¯ããŒã³ã«åŸãã1,500 äžä»¶ã® PubMed æé²ãçšããŠãŒãããäºåãã¬ãŒãã³ã°ãããŠããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã¯ãäžè¬çãªèªç¶èšèªé åã§ã®å€§ããªæåã«è§ŠçºãããŠãçç©å»åŠé åã§ãŸããŸã泚ç®ãéããŠããŸããäžè¬èšèªãã¡ã€ã³ã®äºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã® 2 ã€ã®äž»ãªãã©ã³ããã€ãŸã BERT (ããã³ãã®ããªã¢ã³ã) ãš GPT (ããã³ãã®ããªã¢ã³ã) ã®ãã¡ã1 ã€ç®ã¯ BioBERT ã PubMedBERT ãªã©ã®çç©å»åŠãã¡ã€ã³ã§åºãç 究ãããŠããŸãããããã¯ããŸããŸãªäžæµã®çç©å»åŠçã¿ã¹ã¯ã§å€§ããªæåãåããŠããŸãããçæèœåã®æ¬ åŠã«ããå¿çšç¯å²ãå¶éãããŠããŸãããã®è«æã§ã¯ã倧èŠæš¡ãªçç©å»åŠæç®ã§äºåãã¬ãŒãã³ã°ããããã¡ã€ã³åºæã®çæ Transformer èšèªã¢ãã«ã§ãã BioGPT ãææ¡ããŸããç§ãã¡ã¯ 6 ã€ã®çç©å»åŠçèªç¶èšèªåŠçã¿ã¹ã¯ã§ BioGPT ãè©äŸ¡ããã»ãšãã©ã®ã¿ã¹ã¯ã§ç§ãã¡ã®ã¢ãã«ã以åã®ã¢ãã«ãããåªããŠããããšãå®èšŒããŸãããç¹ã«ãBC5CDRãKD-DTIãDDI ã®ãšã³ãããŒãšã³ãé¢ä¿æœåºã¿ã¹ã¯ã§ã¯ãããã 44.98%ã38.42%ã40.76% ã® F1 ã¹ã³ã¢ãç²åŸããPubMedQA ã§ã¯ 78.2% ã®ç²ŸåºŠãç²åŸããæ°èšé²ãæš¹ç«ããŸãããããã¹ãçæã«é¢ããç§ãã¡ã®ã±ãŒã¹ã¹ã¿ãã£ã¯ãçç©å»åŠæç®ã«ããã BioGPT ã®å©ç¹ãããã«å®èšŒããçç©å»åŠçšèªã®æµæ¢ãªèª¬æãçæããŸãã*
## Usage tips
- BioGPT ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
- BioGPT ã¯å æèšèªã¢ããªã³ã° (CLM) ç®çã§ãã¬ãŒãã³ã°ãããŠãããããã·ãŒã±ã³ã¹å
ã®æ¬¡ã®ããŒã¯ã³ãäºæž¬ããã®ã«åŒ·åã§ããrun_generation.py ãµã³ãã« ã¹ã¯ãªããã§ç¢ºèªã§ããããã«ããã®æ©èœãå©çšãããš BioGPT ã¯æ§æçã«äžè²«ããããã¹ããçæã§ããŸãïŒä»¥äžã®çæäŸãåç
§ïŒã
- ã¢ãã«ã¯ã以åã«èšç®ãããããŒãšå€ã®ã¢ãã³ã·ã§ã³ ãã¢ã§ãã `past_key_values`ïŒPyTorch ã®å ŽåïŒãå
¥åãšããŠåãåãããšãã§ããŸãããã® past_key_valuesïŒãŸã㯠pastïŒã®å€ã䜿çšãããšãã¢ãã«ãããã¹ãçæã®ã³ã³ããã¹ãã§äºåã«èšç®ãããå€ãåèšç®ããã«æžã¿ãŸããPyTorch ã§ã®äœ¿çšæ³ã®è©³çŽ°ã«ã€ããŠã¯ãBioGptForCausalLM.forward() ã¡ãœããã® past_key_values åŒæ°ãåç
§ããŠãã ããã
ãã®ã¢ãã«ã¯ [kamalkraj](https://huggingface.co/kamalkraj) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/microsoft/BioGPT) ã«ãããŸãã
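以äžã¯ãBioGPT ã§ããã¹ããçæããæå°éã®ã¹ã±ããã§ãïŒHub äžã® `microsoft/biogpt` ãã§ãã¯ãã€ã³ãã䜿çšã§ãããšä»®å®ããŠããŸãïŒã

```python
import torch
from transformers import BioGptForCausalLM, BioGptTokenizer, set_seed

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("COVID-19 is", return_tensors="pt")
set_seed(42)
with torch.no_grad():
    # ããŒã æ€çŽ¢ã§æ¬¡ã®ããŒã¯ã³ãé æ¬¡äºæž¬ããŸã (å
éšã§ past_key_values ãåå©çšãããŸã)
    outputs = model.generate(**inputs, max_length=50, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```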
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
## BioGptConfig
[[autodoc]] BioGptConfig
## BioGptTokenizer
[[autodoc]] BioGptTokenizer
- save_vocabulary
## BioGptModel
[[autodoc]] BioGptModel
- forward
## BioGptForCausalLM
[[autodoc]] BioGptForCausalLM
- forward
## BioGptForTokenClassification
[[autodoc]] BioGptForTokenClassification
- forward
## BioGptForSequenceClassification
[[autodoc]] BioGptForSequenceClassification
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CANINE
## Overview
CANINE ã¢ãã«ã¯ãJonathan H. ClarkãDan GarretteãIulia TurcãJohn Wieting ã«ãã£ãŠ [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation](https://arxiv.org/abs/2103.06874) ã§ææ¡ãããŸãããããã¯ããã€ã ã㢠ãšã³ã³ãŒãã£ã³ã° (BPE)ãWordPieceãSentencePiece ãªã©ã®
æ瀺çãªããŒã¯ã³åã¹ãããã䜿çšããã« Transformer ããã¬ãŒãã³ã°ããæåã®è«æã® 1 ã€ã§ãã代ããã«ãã¢ãã«ã¯ Unicode æåã¬ãã«ã§çŽæ¥ãã¬ãŒãã³ã°ãããŸãã
æåã¬ãã«ã§ã®ãã¬ãŒãã³ã°ã§ã¯å¿
ç¶çã«ã·ãŒã±ã³ã¹ãé·ããªããŸãããCANINE ã¯ããã£ãŒã Transformer ãšã³ã³ãŒããé©çšããåã«
ããŠã³ãµã³ããªã³ã°æŠç¥ãçšããããšã§ããããå¹ççãªæ¹æ³ã§è§£æ±ºããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ãã€ãã©ã€ã³åããã NLP ã·ã¹ãã ã¯ããšã³ãããŒãšã³ãã®ãã¥ãŒã©ã« ã¢ããªã³ã°ã«å€§éšåãåã£ãŠä»£ããããŸããããäžè¬çã«äœ¿çšãããŠããã»ãŒãã¹ãŠã®ã¢ãã«ã¯
äŸç¶ãšããŠæ瀺çãªããŒã¯ã³åæé ãå¿
èŠãšããŸããããŒã¿ããå°åºããããµãã¯ãŒãèªåœã«åºã¥ãæè¿ã®ããŒã¯ã³åã¢ãããŒãã¯ã
æåã§äœæãããããŒã¯ãã€ã¶ãŒããã¯è匱ã§ã¯ãããŸãããããããã®æè¡ã¯ãã¹ãŠã®èšèªã«çããé©ããŠããããã§ã¯ãªãã
åºå®èªåœã®äœ¿çšã¯ã¢ãã«ã®é©å¿èœåãå¶éããå¯èœæ§ããããŸãããã®è«æã§ã¯ãCANINE ã玹ä»ããŸããããã¯ã
æ瀺çãªããŒã¯ã³åãèªåœã䜿çšããã«æåã·ãŒã±ã³ã¹ãçŽæ¥æäœãããã¥ãŒã©ã« ãšã³ã³ãŒããŒãšã
æåã«çŽæ¥äœçšããããããªãã·ã§ã³ã§ãµãã¯ãŒãããœããªèªå°ãã€ã¢ã¹ãšããŠäœ¿çšããäºåãã¬ãŒãã³ã°æŠç¥ãçµã¿åãããŸãã
ãã现ãããªå
¥åãå¹æçãã€å¹ççã«äœ¿çšããããã«ãCANINE ã¯ãå
¥åã·ãŒã±ã³ã¹é·ãåæžããããŠã³ãµã³ããªã³ã°ãšã
ã³ã³ããã¹ãããšã³ã³ãŒããããã£ãŒã Transformer ã¹ã¿ãã¯ãçµã¿åãããŸããCANINE ã¯ãã¢ãã« ãã©ã¡ãŒã¿ã 28% å°ãªãã«ããããããã
å°é£ãªå€èšèªãã³ãããŒã¯ã§ãã TyDi QA ã«ãããŠãåçã® mBERT ã¢ãã«ã 2.8 F1 äžåããŸãã*
ãã®ã¢ãã«ã¯ [nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/language/tree/master/language/canine) ã«ãããŸãã
## Usage tips
- CANINE ã¯å
éšã§å°ãªããšã 3 ã€ã® Transformer ãšã³ã³ãŒããŒã䜿çšããŸãïŒ2 ã€ã®ãæµ
ãããšã³ã³ãŒã㌠(åäžã¬ã€ã€ãŒã®ã¿ã§æ§æ)
ãš 1 ã€ã®ããã£ãŒãããšã³ã³ãŒã㌠(éåžžã® BERT ãšã³ã³ãŒããŒ) ã§ãããŸãããæµ
ãããšã³ã³ãŒããŒãããŒã«ã« ã¢ãã³ã·ã§ã³ã䜿çšããŠ
æååã蟌ã¿ãã³ã³ããã¹ãåããŸãã次ã«ãããŠã³ãµã³ããªã³ã°ã®åŸãããã£ãŒãããšã³ã³ãŒããŒãé©çšãããŸããæåŸã«ã
ã¢ãããµã³ããªã³ã°ã®åŸããæµ
ãããšã³ã³ãŒããŒã䜿çšããŠæçµçãªæååã蟌ã¿ãäœæãããŸããã¢ãããµã³ããªã³ã°ãš
ããŠã³ãµã³ããªã³ã°ã®è©³çŽ°ã«ã€ããŠã¯è«æã«èšèŒãããŠããŸãã
- CANINE ã¯ãããã©ã«ã㧠2048 æåã®æ倧ã·ãŒã±ã³ã¹é·ã䜿çšããŸãã[`CanineTokenizer`] ã䜿çšããŠ
ã¢ãã«çšã®ããã¹ããæºåã§ããŸãã
- ç¹å¥ãª [CLS] ããŒã¯ã³ (äºåå®çŸ©ããã Unicode ã³ãŒã ãã€ã³ããå²ãåœãŠãããŠããŸã) ã®æçµçãªé ãç¶æ
ã®äžã«
ç·åœ¢å±€ãé
眮ããããšã§åé¡ãè¡ãããšãã§ããŸãããã ããããŒã¯ã³åé¡ã¿ã¹ã¯ã®å Žåã¯ãããŠã³ãµã³ããªã³ã°ããã
ããŒã¯ã³ã·ãŒã±ã³ã¹ããå
ã®æåã·ãŒã±ã³ã¹ã®é·ã (2048) ãšäžèŽããããã«å床ã¢ãããµã³ããªã³ã°ããå¿
èŠããããŸãã
詳现ã«ã€ããŠã¯ãè«æãåç
§ããŠãã ããã

ã¢ãã«ã®ãã§ãã¯ãã€ã³ã:

- [google/canine-c](https://huggingface.co/google/canine-c): èªå·±ååž°çãªæåæ倱ã§äºåãã¬ãŒãã³ã°æžã¿ã
12 ã¬ã€ã€ãŒãé ãå±€ãµã€ãº 768ã12 ãããã1 å 2,100 äžãã©ã¡ãŒã¿ãŒ (ãµã€ãºçŽ 500 MB)ã
- [google/canine-s](https://huggingface.co/google/canine-s): ãµãã¯ãŒãæ倱ã§äºåãã¬ãŒãã³ã°æžã¿ã12 ã¬ã€ã€ãŒã
é ãå±€ãµã€ãº 768ã12 ãããã1 å 2,100 äžãã©ã¡ãŒã¿ãŒ (ãµã€ãºçŽ 500 MB)ã
## Usage example
CANINE ã¯çã®æåã§åäœããããã**ããŒã¯ãã€ã¶ãŒãªã**ã§äœ¿çšã§ããŸãã
```python
>>> from transformers import CanineModel
>>> import torch
>>> model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss
>>> text = "hello world"
>>> # use Python's built-in ord() function to turn each character into its unicode code point id
>>> input_ids = torch.tensor([[ord(char) for char in text]])
>>> outputs = model(input_ids) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
ãã ããããæšè«ããã³ãã¬ãŒãã³ã°ã®å Žåã¯ãããŒã¯ãã€ã¶ãŒã䜿çšããŠãã¹ãŠã®ã·ãŒã±ã³ã¹ãåãé·ãã«ããã£ã³ã°/åãè©°ããããšããå§ãããŸã:
```python
>>> from transformers import CanineTokenizer, CanineModel
>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
>>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
>>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")
>>> outputs = model(**encoding) # forward pass
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state
```
## Resources
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## CanineConfig
[[autodoc]] CanineConfig
## CanineTokenizer
[[autodoc]] CanineTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
## CANINE specific outputs
[[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling
## CanineModel
[[autodoc]] CanineModel
- forward
## CanineForSequenceClassification
[[autodoc]] CanineForSequenceClassification
- forward
## CanineForMultipleChoice
[[autodoc]] CanineForMultipleChoice
- forward
## CanineForTokenClassification
[[autodoc]] CanineForTokenClassification
- forward
## CanineForQuestionAnswering
[[autodoc]] CanineForQuestionAnswering
- forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Bark
## Overview
Bark ã¯ã[suno-ai/bark](https://github.com/suno-ai/bark) 㧠Suno AI ã«ãã£ãŠææ¡ããããã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ããã¹ãèªã¿äžãã¢ãã«ã§ãã
Bark 㯠4 ã€ã®äž»èŠãªã¢ãã«ã§æ§æãããŠããŸãã
- [`BarkSemanticModel`] (ãããã¹ããã¢ãã«ãšãåŒã°ãã): ããŒã¯ã³åãããããã¹ããå
¥åãšããŠåãåããããã¹ãã®æå³ãæããã»ãã³ãã£ã㯠ããã¹ã ããŒã¯ã³ãäºæž¬ããå æçèªå·±ååž°å€æã¢ãã«ã
- [`BarkCoarseModel`] (ãç²ãé³é¿ãã¢ãã«ãšãåŒã°ãã): [`BarkSemanticModel`] ã®çµæãå
¥åãšããŠåãåãå æçèªå·±ååž°å€æåšã§ãEnCodec ã«å¿
èŠãªæåã® 2 ã€ã®ãªãŒãã£ãª ã³ãŒãããã¯ãäºæž¬ããããšãç®çãšããŠããŸãã
- [`BarkFineModel`] (ã埮现é³é¿ãã¢ãã«): ãã¡ãã¯éå æçãªãŒããšã³ã³ãŒã㌠ãã©ã³ã¹ãã©ãŒããŒã§ã以åã®ã³ãŒãããã¯åã蟌ã¿ã®åèšã«åºã¥ããŠæåŸã®ã³ãŒãããã¯ãç¹°ãè¿ãäºæž¬ããŸãã
- [`EncodecModel`] ããã¹ãŠã®ã³ãŒããã㯠ãã£ãã«ãäºæž¬ããåŸãBark ã¯ããã䜿çšããŠåºåãªãŒãã£ãªé
åããã³ãŒãããŸãã
æåã® 3 ã€ã®ã¢ãžã¥ãŒã«ã¯ãããããç¹å®ã®äºåå®çŸ©ãããé³å£°ã«åŸã£ãŠåºåãµãŠã³ãã調æŽããããã®æ¡ä»¶ä»ãã¹ããŒã«ãŒåã蟌ã¿ããµããŒãã§ããããšã«æ³šæããŠãã ããã
### Optimizing Bark
Bark ã¯ãã³ãŒããæ°è¡è¿œå ããã ãã§æé©åã§ãã**ã¡ã¢ãª ãããããªã³ãã倧å¹
ã«åæž**ããã**æšè«ãé«éå**ãããŸãã
#### Using half-precision
ã¢ãã«ãå粟床ã§ããŒãããã ãã§ãæšè«ãé«éåããã¡ã¢ãªäœ¿çšéã 50% åæžã§ããŸãã
```python
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
```
#### Using ð€ Better Transformer
Better Transformer ã¯ãå
éšã§ã«ãŒãã«èåãå®è¡ãã ð€ Optimum ã®æ©èœã§ããããã©ãŒãã³ã¹ãäœäžãããããšãªããé床ã 20% ïœ 30% åäžãããããšãã§ããŸããã¢ãã«ã ð€ Better Transformer ã«ãšã¯ã¹ããŒãããã®ã«å¿
èŠãªã³ãŒã㯠1 è¡ã ãã§ãã
```python
model = model.to_bettertransformer()
```
ãã®æ©èœã䜿çšããåã« ð€ Optimum ãã€ã³ã¹ããŒã«ããå¿
èŠãããããšã«æ³šæããŠãã ããã[ã€ã³ã¹ããŒã«æ¹æ³ã¯ãã¡ã](https://huggingface.co/docs/optimum/installation)
#### Using CPU offload
åè¿°ããããã«ãBark 㯠4 ã€ã®ãµãã¢ãã«ã§æ§æãããŠããããªãŒãã£ãªçæäžã«é çªã«åŒã³åºãããŸããèšãæããã°ã1 ã€ã®ãµãã¢ãã«ã䜿çšãããŠããéãä»ã®ãµãã¢ãã«ã¯ã¢ã€ãã«ç¶æ
ã«ãªããŸãã
CUDA ããã€ã¹ã䜿çšããŠããå Žåãã¡ã¢ãª ãããããªã³ãã® 80% åæžã«ããæ©æµãåããç°¡åãªè§£æ±ºçã¯ãã¢ã€ãã«ç¶æ
ã® GPU ã®ãµãã¢ãã«ããªãããŒãããããšã§ãããã®æäœã¯ CPU ãªãããŒããšåŒã°ããŸãã 1è¡ã®ã³ãŒãã§äœ¿çšã§ããŸãã
```python
model.enable_cpu_offload()
```
ãã®æ©èœã䜿çšããåã«ãð€ Accelerate ãã€ã³ã¹ããŒã«ããå¿
èŠãããããšã«æ³šæããŠãã ããã[ã€ã³ã¹ããŒã«æ¹æ³ã¯ãã¡ã](https://huggingface.co/docs/accelerate/basic_tutorials/install)
#### Combining optimization techniques
æé©åææ³ãçµã¿åãããŠãCPU ãªãããŒããå粟床ãð€ Better Transformer ããã¹ãŠäžåºŠã«äœ¿çšã§ããŸãã
```python
from transformers import BarkModel
import torch
# BetterTransformer.transform ã䜿çšããã«ã¯ ð€ Optimum ã®ã€ã³ããŒããå¿
èŠã§ã
from optimum.bettertransformer import BetterTransformer
device = "cuda" if torch.cuda.is_available() else "cpu"
# load in fp16
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
# convert to bettertransformer
model = BetterTransformer.transform(model, keep_original_model=False)
# enable CPU offload
model.enable_cpu_offload()
```
æšè«æé©åææ³ã®è©³çŽ°ã«ã€ããŠã¯ã[ãã¡ã](https://huggingface.co/docs/transformers/perf_infer_gpu_one) ãã芧ãã ããã
### Tips
Suno ã¯ãå€ãã®èšèªã§é³å£°ããªã»ããã®ã©ã€ãã©ãªã [ãã¡ã](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c) ã§æäŸããŠããŸãã
ãããã®ããªã»ããã¯ãHub ã® [ãã¡ã](https://huggingface.co/suno/bark-small/tree/main/speaker_embeddings) ãŸã㯠[ãã¡ã](https://huggingface.co/suno/bark/tree/main/speaker_embeddings) ã«ãã¢ããããŒããããŠããŸãã
```python
>>> from transformers import AutoProcessor, BarkModel
>>> processor = AutoProcessor.from_pretrained("suno/bark")
>>> model = BarkModel.from_pretrained("suno/bark")
>>> voice_preset = "v2/en_speaker_6"
>>> inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
Bark ã¯ãéåžžã«ãªã¢ã«ãª **å€èšèª** é³å£°ã ãã§ãªããé³æ¥œãèæ¯ãã€ãºãåçŽãªå¹æé³ãªã©ã®ä»ã®é³å£°ãçæã§ããŸãã
```python
>>> # Multilingual speech - simplified Chinese
>>> inputs = processor("æ人çïŒæäŒè¯Žäžæ")
>>> # Multilingual speech - French - let's use a voice_preset as well
>>> inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
>>> # Bark can also generate music. You can help it out by adding music notes around your lyrics.
>>> inputs = processor("⪠Hello, my dog is cute âª")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
ãã®ã¢ãã«ã¯ãç¬ããããæ¯ãæ³£ããªã©ã®**éèšèªã³ãã¥ãã±ãŒã·ã§ã³**ãçæããããšãã§ããŸãã
```python
>>> # Adding non-speech cues to the input text
>>> inputs = processor("Hello uh ... [clears throat], my dog is cute [laughter]")
>>> audio_array = model.generate(**inputs)
>>> audio_array = audio_array.cpu().numpy().squeeze()
```
ãªãŒãã£ãªãä¿åããã«ã¯ãã¢ãã«èšå®ãš scipy ãŠãŒãã£ãªãã£ãããµã³ãã« ã¬ãŒããååŸããã ãã§ãã
```python
>>> from scipy.io.wavfile import write as write_wav
>>> # save audio to disk, but first take the sample rate from the model config
>>> sample_rate = model.generation_config.sample_rate
>>> write_wav("bark_generation.wav", sample_rate, audio_array)
```
ãã®ã¢ãã«ã¯ã[Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe) ããã³ [Sanchit Gandhi (sanchit-gandhi)](https://github.com/sanchit-gandhi) ã«ãã£ãŠæäŸãããŸããã
å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/suno-ai/bark) ã«ãããŸãã
## BarkConfig
[[autodoc]] BarkConfig
- all
## BarkProcessor
[[autodoc]] BarkProcessor
- all
- __call__
## BarkModel
[[autodoc]] BarkModel
- generate
- enable_cpu_offload
## BarkSemanticModel
[[autodoc]] BarkSemanticModel
- forward
## BarkCoarseModel
[[autodoc]] BarkCoarseModel
- forward
## BarkFineModel
[[autodoc]] BarkFineModel
- forward
## BarkCausalModel
[[autodoc]] BarkCausalModel
- forward
## BarkCoarseConfig
[[autodoc]] BarkCoarseConfig
- all
## BarkFineConfig
[[autodoc]] BarkFineConfig
- all
## BarkSemanticConfig
[[autodoc]] BarkSemanticConfig
- all
<!--Copyright 2023 The Intel Labs Team Authors, The Microsoft Research Team Authors and HuggingFace Inc. team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BridgeTower
## Overview
BridgeTower ã¢ãã«ã¯ãXiao XuãChenfei WuãShachar RosenmanãVasudev LalãWanxiang CheãNan Duan ã«ãã£ãŠ [BridgeTower: Building Bridges Between Encoders in Vision-Language Representative Learning](https://arxiv.org/abs/2206.08657) ã§ææ¡ãããŸããããã®ã¢ãã«ã®ç®æšã¯ã
åãŠãã¢ãŒãã« ãšã³ã³ãŒããšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®åå±€ã®éã«ããªããžãæ§ç¯ããããšã§ãã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®åå±€ã§ã®å
æ¬çãã€è©³çŽ°ãªçžäºäœçšãå¯èœã«ããè¿œå ã®ããã©ãŒãã³ã¹ã³ã¹ããšèšç®ã³ã¹ããã»ãŒç¡èŠã§ããçšåºŠã«æããªãããããŸããŸãªäžæµã¿ã¹ã¯ã§åªããããã©ãŒãã³ã¹ãå®çŸããããšã§ãã
ãã®è«æ㯠[AAAI'23](https://aaai.org/Conferences/AAAI-23/) äŒè°ã«æ¡æãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*TWO-TOWER ã¢ãŒããã¯ãã£ãåããããžã§ã³èšèª (VL) ã¢ãã«ã¯ãè¿å¹Žã®èŠèŠèšèªè¡šçŸåŠç¿ã®äž»æµãšãªã£ãŠããŸãã
çŸåšã® VL ã¢ãã«ã¯ã軜éã®ãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã䜿çšããŠããã£ãŒã ã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã§äž¡æ¹ã®ã¢ããªãã£ãåæã«æœåºã»äœçœ®åãã»èåããããšãåŠç¿ãããã
äºåã«ãã¬ãŒãã³ã°ããããã£ãŒããªãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã®æçµå±€ã®ãŠãã¢ãŒãã«è¡šçŸãäžäœã®ã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã«äŸçµŠããŸãã
ã©ã¡ãã®ã¢ãããŒããèŠèŠèšèªè¡šçŸã®åŠç¿ãå¶éããã¢ãã«ã®ããã©ãŒãã³ã¹ãå¶éããå¯èœæ§ããããŸãããã®è«æã§ã¯ããŠãã¢ãŒãã« ãšã³ã³ãŒãã®æäžäœå±€ãšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒãã®åå±€ã®éã®æ¥ç¶ãæ§ç¯ããè€æ°ã®ããªããžå±€ãå°å
¥ãã BRIDGETOWER ãææ¡ããŸãã
ããã«ãããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒå
ã§ãäºåãã¬ãŒãã³ã°æžã¿ãŠãã¢ãŒãã« ãšã³ã³ãŒããŒã®ããŸããŸãªã»ãã³ãã£ã㯠ã¬ãã«ã®èŠèŠè¡šçŸãšããã¹ãè¡šçŸã®éã§ã®ãå¹æçãªããã ã¢ããã®ã¯ãã¹ã¢ãŒãã«èª¿æŽãšèåãå¯èœã«ãªããŸããBRIDGETOWER ã¯ããã 400 äžæã®ç»åã§äºåãã¬ãŒãã³ã°ãããã ãã§ãããŸããŸãªäžæµã®èŠèŠèšèªã¿ã¹ã¯ã§æå
端ã®ããã©ãŒãã³ã¹ãå®çŸããŸãã
ç¹ã«ãVQAv2 ãã¹ãæšæºã»ããã§ã¯ãBRIDGETOWER 㯠78.73% ã®ç²ŸåºŠãéæããåãäºåãã¬ãŒãã³ã° ããŒã¿ãšã»ãŒç¡èŠã§ããè¿œå ãã©ã¡ãŒã¿ããã³èšç®ã³ã¹ãã§ã以åã®æå
端ã¢ãã« METER ã 1.09% äžåããŸããã
ããã«ã¢ãã«ãã¹ã±ãŒãªã³ã°ãããšãBRIDGETOWER 㯠81.15% ã®ç²ŸåºŠãéæããæ¡éãã«å€§ããªããŒã¿ã»ããã§äºåãã¬ãŒãã³ã°ãããã¢ãã«ãããäžåããŸããã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/bridgetower_architecture%20.jpg"
alt="drawing" width="600"/>

<small> ããªããžã¿ã¯ãŒ ã¢ãŒããã¯ãã£ã<a href="https://arxiv.org/abs/2206.08657">å
ã®è«æããæç²ã</a> </small>
ãã®ã¢ãã«ã¯ã[Anahita Bhiwandiwalla](https://huggingface.co/anahita-b)ã[Tiep Le](https://huggingface.co/Tile)ã[Shaoyen Tseng](https://huggingface.co/shaoyent) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/microsoft/BridgeTower) ã«ãããŸãã
## Usage tips and examples
BridgeTower ã¯ãããžã¥ã¢ã« ãšã³ã³ãŒããŒãããã¹ã ãšã³ã³ãŒããŒãããã³è€æ°ã®è»œéããªããž ã¬ã€ã€ãŒãåããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã§æ§æãããŸãã
ãã®ã¢ãããŒãã®ç®æšã¯ãåãŠãã¢ãŒãã« ãšã³ã³ãŒããŒãšã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã®éã«ããªããžãæ§ç¯ããã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒã®åå±€ã§å
æ¬çãã€è©³çŽ°ãªçžäºäœçšãå¯èœã«ããããšã§ãã
ååãšããŠãææ¡ãããã¢ãŒããã¯ãã£ã§ã¯ä»»æã®ããžã¥ã¢ã« ãšã³ã³ãŒããŒãããã¹ã ãšã³ã³ãŒããŒãã¯ãã¹ã¢ãŒãã« ãšã³ã³ãŒããŒãé©çšã§ããŸãã

[`BridgeTowerProcessor`] ã¯ã[`RobertaTokenizer`] ãš [`BridgeTowerImageProcessor`] ãåäžã®ã€ã³ã¹ã¿ã³ã¹ã«ã©ããããããã¹ãã®ãšã³ã³ãŒããšç»åã®æºåã®äž¡æ¹ãè¡ããŸãã

次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForContrastiveLearning`] ã䜿çšããŠå¯Ÿç
§åŠç¿ãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
>>> import requests
>>> from PIL import Image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
>>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
>>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs
```
次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForImageAndTextRetrieval`] ã䜿çšããŠç»åããã¹ãã®ååŸãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
>>> import requests
>>> from PIL import Image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> # forward pass
>>> scores = dict()
>>> for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs.logits[0, 1].item()
```
次ã®äŸã¯ã[`BridgeTowerProcessor`] ãš [`BridgeTowerForMaskedLM`] ã䜿çšããŠãã¹ã¯ãããèšèªã¢ããªã³ã°ãå®è¡ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000360943.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
>>> text = "a <mask> looking out of the window"
>>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
>>> # prepare inputs
>>> encoding = processor(image, text, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**encoding)
>>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
>>> print(results)
.a cat looking out of the window.
```
ãããïŒ

- BridgeTower ã®ãã®å®è£
ã§ã¯ã[`RobertaTokenizer`] ã䜿çšããŠããã¹ãåã蟌ã¿ãçæããOpenAI ã® CLIP/ViT ã¢ãã«ã䜿çšããŠèŠèŠçåã蟌ã¿ãèšç®ããŸãã
- äºåãã¬ãŒãã³ã°ããã [bridgeTower-base](https://huggingface.co/BridgeTower/bridgetower-base) ããã³ [bridgetower ãã¹ã¯ãããèšèªã¢ããªã³ã°ããã³ç»åããã¹ããããã³ã°](https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm) ã®ãã§ãã¯ãã€ã³ãããªãªãŒã¹ãããŠããŸãã
- ç»åæ€çŽ¢ããã³ãã®ä»ã®äžæµã¿ã¹ã¯ã«ããã BridgeTower ã®ããã©ãŒãã³ã¹ã«ã€ããŠã¯ã[è¡š 5](https://arxiv.org/pdf/2206.08657.pdf) ãåç
§ããŠãã ããã
- ãã®ã¢ãã«ã® PyTorch ããŒãžã§ã³ã¯ãtorch 1.10 以éã§ã®ã¿äœ¿çšã§ããŸãã
## BridgeTowerConfig
[[autodoc]] BridgeTowerConfig
## BridgeTowerTextConfig
[[autodoc]] BridgeTowerTextConfig
## BridgeTowerVisionConfig
[[autodoc]] BridgeTowerVisionConfig
## BridgeTowerImageProcessor
[[autodoc]] BridgeTowerImageProcessor
- preprocess
## BridgeTowerProcessor
[[autodoc]] BridgeTowerProcessor
- __call__
## BridgeTowerModel
[[autodoc]] BridgeTowerModel
- forward
## BridgeTowerForContrastiveLearning
[[autodoc]] BridgeTowerForContrastiveLearning
- forward
## BridgeTowerForMaskedLM
[[autodoc]] BridgeTowerForMaskedLM
- forward
## BridgeTowerForImageAndTextRetrieval
[[autodoc]] BridgeTowerForImageAndTextRetrieval
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CPM
## Overview
CPM ã¢ãã«ã¯ãZhengyan ZhangãXu HanãHao ZhouãPei KeãYuxian GuãDeming YeãYujia QinãYusheng SuãHaozhe JiãJian GuanãFanchao QiãXiaozi WangãYanan ZhengãGuoyang ZengãHuanqi CaoãShengqi ChenãDaixuan LiãZhenbo SunãZhiyuan LiuãMinlie HuangãWentao HanãJie TangãJuanzi LiãXiaoyan ZhuãMaosong Sun ã«ãã£ãŠ [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ãããèšèªã¢ãã« (PLM) ã¯ãããŸããŸãªäžæµã® NLP ã¿ã¹ã¯ã«æçã§ããããšã蚌æãããŠããŸããæè¿ã§ã¯ã
1,750 åã®ãã©ã¡ãŒã¿ãš 570GB ã®åŠç¿ããŒã¿ãåãã GPT-3 ããå°æ°ã·ã§ããïŒãŒãã·ã§ããã§ãïŒåŠç¿ã®èœåã«ãã
倧ããªæ³šç®ãéããŸããããã ããGPT-3 ãé©çšããŠäžåœèªã® NLP ã¿ã¹ã¯ã«å¯ŸåŠããããšã¯äŸç¶ãšããŠå°é£ã§ãã
GPT-3 ã®åŠç¿ã³ãŒãã¹ã¯äž»ã«è±èªã§ããããã©ã¡ãŒã¿ãŒãå
¬éãããŠããªãããã§ãããã®æè¡ã¬ããŒãã§ã¯ã
倧èŠæš¡ãªäžåœèªãã¬ãŒãã³ã° ããŒã¿ã«å¯Ÿããçæçäºåãã¬ãŒãã³ã°ãåããäžåœèªäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã« (CPM) ããªãªãŒã¹ããŸãã
ç§ãã¡ã®ç¥ãéãã26 åã®ãã©ã¡ãŒã¿ãš 100GB ã®äžåœèªãã¬ãŒãã³ã° ããŒã¿ãåãã CPM ã¯ãäºåãã¬ãŒãã³ã°ãããäžåœèª
èšèªã¢ãã«ãšããŠã¯æ倧ã®ãã®ã§ãããäŒè©±ããšãã»ã€ã®äœæãã¯ããŒãŒãã¹ããèšèªç解ãªã©å€ãã® NLP ã¿ã¹ã¯ã«åœ¹ç«ã¡ãŸãã
åºç¯ãªå®éšã«ãããCPM ãå°æ°ã·ã§ããïŒãŒãã·ã§ããã§ãïŒåŠç¿ã®èšå®ã«ãããŠãå€ãã® NLP ã¿ã¹ã¯ã§åªãã
ããã©ãŒãã³ã¹ãéæã§ããããšãå®èšŒãããŠããŸãã*
ãã®ã¢ãã«ã¯ [canwenxu](https://huggingface.co/canwenxu) ã«ãã£ãŠæäŸãããŸããããªãªãžãã«ã®å®è£
ã¯ãã¡ãã«ãããŸã: https://github.com/TsinghuaAI/CPM-Generate
<Tip>

CPM ã®ã¢ãŒããã¯ãã£ã¯ãããŒã¯ã³åæ¹æ³ãé€ã㊠GPT-2 ãšåãã§ããAPI ãªãã¡ã¬ã³ã¹ã®è©³çŽ°ã«ã€ããŠã¯ã[GPT-2 ã®ããã¥ã¡ã³ã](openai-community/gpt2) ãåç
§ããŠãã ããã

</Tip>
## CpmTokenizer
[[autodoc]] CpmTokenizer
## CpmTokenizerFast
[[autodoc]] CpmTokenizerFast
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Autoformer
## æŠèŠ
Autoformerã¢ãã«ã¯ãã[Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008)ããšããè«æã§Haixu WuãJiehui XuãJianmin WangãMingsheng Longã«ãã£ãŠææ¡ãããŸããã
ãã®ã¢ãã«ã¯ãäºæž¬ããã»ã¹äžã«ãã¬ã³ããšå£ç¯æ§æåãé次çã«å解ã§ãã深局å解ã¢ãŒããã¯ãã£ãšããŠTransformerãå¢åŒ·ããŸãã
è«æã®èŠæšã¯ä»¥äžã®éãã§ãïŒ
*äŸãã°ç°åžžæ°è±¡ã®æ©æèŠåãé·æçãªãšãã«ã®ãŒæ¶è²»èšç»ãšãã£ãå®å¿çšã«ãããŠãäºæž¬æéã延é·ããããšã¯éèŠãªèŠæ±ã§ããæ¬è«æã§ã¯ãæç³»åã®é·æäºæž¬åé¡ãç 究ããŸãã以åã® Transformer ããŒã¹ã®ã¢ãã«ã¯ãé·è·é¢ã®äŸåé¢ä¿ãçºèŠããããã«ããŸããŸãªã»ã«ãã¢ãã³ã·ã§ã³æ©æ§ãæ¡çšããŠããŸããããããé·ææªæ¥ã®è€éãªæéçãã¿ãŒã³ã«ãã£ãŠãã¢ãã«ãä¿¡é Œã§ããäŸåé¢ä¿ãèŠã€ããããšã¯åŠšããããŸãããŸããTransformer ã¯é·ãç³»åã§ã®å¹çã®ããã«ãã€ã³ãåäœã®ã¹ããŒã¹ãªã»ã«ãã¢ãã³ã·ã§ã³ãæ¡çšããå¿
èŠããããæ
å ±å©çšã®ããã«ããã¯ãšãªããŸããTransformer ãè¶
ããŠãç§ãã¡ã¯èªå·±çžé¢æ©æ§ãæã€æ°ããå解ã¢ãŒããã¯ãã£ãšã㊠Autoformer ãèšèšããŸãããç³»åå解ãäºååŠçãšããŠè¡ãæ
£è¡ãç Žãããããæ·±å±€ã¢ãã«ã®åºæ¬çãªå
éšãããã¯ãšããŠé©æ°ããŸãããã®èšèšã«ãããAutoformer ã¯è€éãªæç³»åã«å¯ŸããŠé²è¡çãªå解èœåãåããŸããããã«ã確çéçšçè«ã«çæ³ãåŸãŠãç³»åã®åšææ§ã«åºã¥ãèªå·±çžé¢æ©æ§ãèšèšãããµãç³»åã¬ãã«ã§ã®äŸåé¢ä¿ã®çºèŠãšè¡šçŸã®éçŽãè¡ããŸããèªå·±çžé¢ã¯ãå¹çãšç²ŸåºŠã®äž¡æ¹ã§ã»ã«ãã¢ãã³ã·ã§ã³ãäžåããŸããé·æäºæž¬ã«ãããŠãAutoformer ã¯ããšãã«ã®ãŒã亀éãçµæžãæ°è±¡ãçŸç
ã® 5 ã€ã®å®çšçãªå¿çšãã«ããŒãã 6 ã€ã®ãã³ãããŒã¯ã§ 38% ã®çžå¯Ÿçãªæ¹åãéããæå
端ã®ç²ŸåºŠãéæããŸãã*
ãã®ã¢ãã«ã¯[elisim](https://huggingface.co/elisim)ãš[kashif](https://huggingface.co/kashif)ããæäŸãããŸããã
ãªãªãžãã«ã®ã³ãŒãã¯[ãã¡ã](https://github.com/thuml/Autoformer)ã§èŠãããšãã§ããŸãã
## åèè³æ
Autoformer ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ã®åèè³æã®äžèŠ§ã§ããããã«åèè³æãæåºãããå Žåã¯ããæ°è»œã« Pull Request ãéããŠãã ãããç§ãã¡ã¯ãããã¬ãã¥ãŒããããŸãïŒåèè³æã¯ãæ¢åã®ãã®ãè€è£œããã®ã§ã¯ãªããäœãæ°ããããšã瀺ãããšãçæ³çã§ãã

- HuggingFace ããã°ã® Autoformer ã«é¢ããããã°èšäºããã§ãã¯ããŠãã ãã: [ã¯ããTransformers ã¯æç³»åäºæž¬ã«å¹æçã§ã (+ Autoformer)](https://huggingface.co/blog/autoformer)
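以äžã¯ã[`AutoformerForPrediction`] ã§ã®æšè«ã»åŠç¿ã®æµãã瀺ãæå°éã®ã¹ã±ããã§ããä»ã®æç³»åã¢ãã«ã®ããã¥ã¡ã³ããšåæ§ã®ããããããŒã¿ã»ãã (`hf-internal-testing/tourism-monthly-batch`) ãšãã§ãã¯ãã€ã³ã (`huggingface/autoformer-tourism-monthly`) ãå©çšã§ãããšä»®å®ããŠããŸãã

```python
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoformerForPrediction

# ãã¹ãçšã«çšæãããããããŒã¿ãããŠã³ããŒãããŸã
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)

model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")

# åŠç¿æã«ã¯éå»ã®å€ãšæªæ¥ã®å€ã®äž¡æ¹ãäžãããšæ倱ãèšç®ãããŸã
outputs = model(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    future_values=batch["future_values"],
    future_time_features=batch["future_time_features"],
)
loss = outputs.loss
```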
## AutoformerConfig
[[autodoc]] AutoformerConfig
## AutoformerModel
[[autodoc]] AutoformerModel
- forward
## AutoformerForPrediction
[[autodoc]] AutoformerForPrediction
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Dilated Neighborhood Attention Transformer
## Overview
DiNAT ã¯ãAli Hassani ãš Humphrey Shi ã«ãã£ãŠ [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) ã§ææ¡ãããŸããã
[NAT](nat) ãæ¡åŒµãã圢ã§ãã°ããŒãã«ãªã³ã³ããã¹ãããã£ããã£ããããã®æ¡åŒµè¿é£ã¢ãã³ã·ã§ã³ ãã¿ãŒã³ãè¿œå ããŠããã
NAT ãšæ¯èŒããŠå€§å¹
ãªããã©ãŒãã³ã¹ã®åäžã瀺ããŠããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ãã©ã³ã¹ãã©ãŒããŒã¯ãããŸããŸãªã¢ããªãã£ããã¡ã€ã³ãã¿ã¹ã¯ã«ããã£ãŠæãé »ç¹ã«é©çšããã深局åŠç¿ã¢ãŒããã¯ãã£ã® 1 ã€ã«æ¥éã«ãªãã€ã€ãããŸãã
ããžã§ã³åéã§ã¯ãåçŽãªãã©ã³ã¹ãã©ãŒããŒãžã®ç¶ç¶çãªåãçµã¿ã«å ããŠãéå±€åãã©ã³ã¹ãã©ãŒããŒãã
ãã®ããã©ãŒãã³ã¹ãšæ¢åã®ãã¬ãŒã ã¯ãŒã¯ãžã®çµ±åã®å®¹æãã«ããã倧ããªæ³šç®ãéããŠããŸãã
ãããã®ã¢ãã«ã¯éåžžãã¹ã©ã€ãã£ã³ã° ãŠã£ã³ããŠåã®è¿é£ã¢ãã³ã·ã§ã³ (NA) ã Swin Transformer ã®
ã·ãã ãŠã£ã³ã㊠ã»ã«ãã¢ãã³ã·ã§ã³ãªã©ã®å±æçãªæ³šæã¡ã«ããºã ãæ¡çšããŠããŸããããã¯èªå·±æ³šæã®
äºæ¬¡ã®è€éããã軜æžããã®ã«å¹æçã§ãããå±æçãªæ³šæã¯ãèªå·±æ³šæã®æãæãŸãã 2 ã€ã®ç¹æ§ã
ããªãã¡é·è·é¢ã®çžäºäŸåæ§ã®ã¢ããªã³ã°ãšå
šäœçãªå容éã匱ããŸãããã®è«æã§ã¯ãèªç¶ã§æè»ãã€å¹ççãª
NA ã®æ¡åŒµã§ããæ¡åŒµè¿é£ã¢ãã³ã·ã§ã³ (DiNA) ãå°å
¥ããŸããDiNA ã¯ãè¿œå ã®ã³ã¹ããªãã«ããã°ããŒãã«ãªã³ã³ããã¹ãã
ææããå容éãææ°é¢æ°çã«æ¡åŒµã§ããŸããNA ã®ããŒã«ã«ãªæ³šæãš DiNA ã®ãŸã°ããªã°ããŒãã«ãªæ³šæã¯çžäºã«è£å®ãåãããã
äž¡æ¹ã«åºã¥ããŠæ§ç¯ãããæ°ããéå±€åããžã§ã³ ãã©ã³ã¹ãã©ãŒããŒã§ãã Dilated Neighborhood Attention Transformer (DiNAT) ãå°å
¥ããŸãã
DiNAT ã®ããªã¢ã³ãã¯ãNATãSwinãConvNeXt ãªã©ã®åŒ·åãªããŒã¹ã©ã€ã³ã«æ¯ã¹ãŠå€§å¹
ã«æ¹åãããŠããŸãã
ç§ãã¡ã®å€§èŠæš¡ã¢ãã«ã¯ãCOCO ãªããžã§ã¯ãæ€åºã«ãã㊠Swin ã¢ãã«ãããé«éã§ãããã¯ã¹ AP 㧠1.5% åªããŠããã
COCO ã€ã³ã¹ã¿ã³ã¹ ã»ã°ã¡ã³ããŒã·ã§ã³ã§ã¯ãã¹ã¯ AP 㧠1.3%ãADE20K ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ã§ã¯ mIoU 㧠1.1% åªããŠããŸãã
æ°ãããã¬ãŒã ã¯ãŒã¯ãšçµã¿åãããç§ãã¡ã®å€§èŠæš¡ããªã¢ã³ãã¯ãCOCO (58.2 PQ) ããã³ ADE20K (48.5 PQ) äžã®
æ°ããæå
端ã®ããããã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ ã¢ãã«ã§ãããCityscapes (44.5 AP) ããã³ ADE20K (35.4 AP) ã®
ã€ã³ã¹ã¿ã³ã¹ ã»ã°ã¡ã³ããŒã·ã§ã³ ã¢ãã« (è¿œå ããŒã¿ãªã) ã§ããããŸããADE20K (58.2 mIoU) äžã®æå
端ã®
ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³ ã¢ãã«ãšãäžèŽããCityscapes (84.5 mIoU) ã§ã¯ 2 äœã«ã©ã³ã¯ãããŠããŸã (è¿œå ããŒã¿ãªã)ã*
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dilated-neighborhood-attention-pattern.jpg"
alt="drawing" width="600"/>

<small> ç°ãªãæ¡åŒµå€ã䜿çšããè¿é£ã¢ãã³ã·ã§ã³ã<a href="https://arxiv.org/abs/2209.15001">å
ã®è«æ</a>ããæç²ã</small>
ãã®ã¢ãã«ã¯ [Ali Hassani](https://huggingface.co/alihassanijr) ã«ãã£ãŠæäŸãããŸããã
å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer) ã«ãããŸãã
## Usage tips
DiNAT 㯠*ããã¯ããŒã³* ãšããŠäœ¿çšã§ããŸãã`output_hidden_states = True` ã®å Žåã
`hidden_states` ãš `reshaped_hidden_states` ã®äž¡æ¹ãåºåããŸãã`reshaped_hidden_states` ã¯ã`(batch_size, height, width, num_channels)` ã§ã¯ãªãã`(batch, num_channels, height, width)` ã®åœ¢ç¶ãæã£ãŠããŸãã
ããŒãïŒ

- DiNAT ã¯ã[NATTEN](https://github.com/SHI-Labs/NATTEN/) ã«ããè¿é£ã¢ãã³ã·ã§ã³ãšæ¡åŒµè¿é£ã¢ãã³ã·ã§ã³ã®å®è£
ã«äŸåããŠããŸãã
[shi-labs.com/natten](https://shi-labs.com/natten) ã® Linux çšãã«ãæžã¿ãã€ãŒã«ã䜿çšããŠã€ã³ã¹ããŒã«ãããã`pip install natten` ãå®è¡ããŠã·ã¹ãã äžã§ãã«ãã§ããŸãã
åŸè
ã¯ã³ã³ãã€ã«ã«æéããããå¯èœæ§ãããããšã«æ³šæããŠãã ãããNATTEN 㯠Windows ããã€ã¹ããŸã ãµããŒãããŠããŸããã
- çŸæç¹ã§ã¯ãããã ãµã€ãº 4 ã®ã¿ããµããŒããããŠããŸãïŒä»¥äžã®äœ¿çšäŸãåç
§ïŒã
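以äžã¯ãDiNAT ãç»ååé¡ã«äœ¿ãæå°éã®ã¹ã±ããã§ãïŒäžèšã®ãšãã NATTEN ã®ã€ã³ã¹ããŒã«ãåæã§ãããã§ãã¯ãã€ã³ãå `shi-labs/dinat-mini-in1k-224` ã¯äžäŸã§ãïŒã

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, DinatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# ImageNet-1k ã¯ã©ã¹ã®ãã¡æãã¹ã³ã¢ã®é«ãã©ãã«ã衚瀺ããŸã
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```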
## Resources
DiNAT ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã

<PipelineTag pipeline="image-classification"/>

- [`DinatForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ã§ãµããŒããããŠããŸãã
- åç
§: [ç»ååé¡ã¿ã¹ã¯ ã¬ã€ã](../tasks/image_classification)
## DinatConfig
[[autodoc]] DinatConfig
## DinatModel
[[autodoc]] DinatModel
- forward
## DinatForImageClassification
[[autodoc]] DinatForImageClassification
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BORT
<Tip warning={true}>
ãã®ã¢ãã«ã¯ã¡ã³ããã³ã¹ ã¢ãŒãã®ã¿ã§ãããã³ãŒããå€æŽããæ°ãã PR ã¯åãä»ããããŸããã
ãã®ã¢ãã«ã®å®è¡äžã«åé¡ãçºçããå Žåã¯ããã®ã¢ãã«ããµããŒãããŠããæåŸã®ããŒãžã§ã³ (v4.30.0) ãåã€ã³ã¹ããŒã«ããŠãã ããã
ãããè¡ãã«ã¯ãã³ãã³ã `pip install -U Transformers==4.30.0` ãå®è¡ããŸãã
</Tip>
## Overview
BORT ã¢ãã«ã¯ãAdrian de Wynter ãš Daniel J. Perry ã«ãã£ãŠ [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) ã§ææ¡ãããŸããã
ãã㯠BERT ã®ã¢ãŒããã¯ã㣠ãã©ã¡ãŒã¿ã®æé©ãªãµãã»ããã§ãããèè
ã¯ãBortïŒããŒãïŒããšåŒãã§ããŸãã
*ç§ãã¡ã¯ããã¥ãŒã©ã« ã¢ãŒããã¯ãã£æ€çŽ¢ã®ã¢ã«ãŽãªãºã ã«ãããæè¿ã®ç»æçãªæè¡ãé©çšããããšã«ããã
Devlin ã (2018) ã® BERT ã¢ãŒããã¯ãã£ããã¢ãŒããã¯ã㣠ãã©ã¡ãŒã¿ã®æé©ãªãµãã»ãããæœåºããŸãã
ãã®æé©ãªãµãã»ããããBortããšåŒã³ãŸããBort ã¯æããã«å°ãããæå¹ãµã€ãº (ã€ãŸããåã蟌ã¿å±€ãæ°ãã«å
¥ããªããµã€ãº) ã¯
ãªãªãžãã«ã® BERT-large ã¢ãŒããã¯ãã£ã® 5.5%ãæ£å³ãµã€ãºã® 16% ã§ããBort 㯠288 GPU æéã§äºåãã¬ãŒãã³ã°ã§ããŸãã
ãã®æé㯠ãæé«ããã©ãŒãã³ã¹ã® BERT ãã©ã¡ããªã㯠ã¢ãŒããã¯ã㣠ããªã¢ã³ãã§ãã RoBERTa-large (Liu et al., 2019)
ã®äºåãã¬ãŒãã³ã°ã«å¿
èŠãªæéã® 1.2%ãåããããŒããŠã§ã¢ã§ BERT-large ããã¬ãŒãã³ã°ããã®ã«å¿
èŠãª
GPU æéã®äžçèšé²ã®çŽ 33% ã«çžåœããŸãããŸããBort 㯠CPU äžã§ 7.9 åé«éã§ããã ãã§ãªãã
ãã®ã¢ãŒããã¯ãã£ã®ä»ã®å§çž®ããªã¢ã³ããäžéšã®éå§çž®ããªã¢ã³ãããããã©ãŒãã³ã¹ãåªããŠããŸãã
è€æ°ã®å
¬éèªç¶èšèªç解 (NLU) ãã³ãããŒã¯ã«ãããŠãBERT-large ã«å¯Ÿã㊠0.3% ïœ 31% ã®çµ¶å¯Ÿçãªããã©ãŒãã³ã¹åäžãåŸãããŸãã*
ãã®ã¢ãã«ã¯ [stefan-it](https://huggingface.co/stefan-it) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/alexa/bort/) ã«ãããŸãã
## Usage tips
- BORT ã®ã¢ãã« ã¢ãŒããã¯ãã£ã¯ BERT ã«åºã¥ããŠããŸããã¢ãã«ã® API ãªãã¡ã¬ã³ã¹ãšäœ¿çšäŸã®è©³çŽ°ã«ã€ããŠã¯ã[BERT ã®ããã¥ã¡ã³ã ããŒãž](bert) ãåç
§ããŠãã ããã
- BORT 㯠BERT ããŒã¯ãã€ã¶ãŒã®ä»£ããã« RoBERTa ããŒã¯ãã€ã¶ãŒã䜿çšããŸããããŒã¯ãã€ã¶ãŒã® API ãªãã¡ã¬ã³ã¹ãšäœ¿çšäŸã«ã€ããŠã¯ã[RoBERTa ã®ããã¥ã¡ã³ã ããŒãž](roberta) ãåç
§ããŠãã ããã
- BORT ã«ã¯ã[Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html#fine-tuning-with-algebraic-topology) ãšåŒã°ããç¹å®ã®åŸ®èª¿æŽã¢ã«ãŽãªãºã ãå¿
èŠã§ãããæ®å¿µãªãããŸã ãªãŒãã³ãœãŒã¹åãããŠããŸãããBORT ã®åŸ®èª¿æŽãæ©èœããããã«ã誰ãããã®ã¢ã«ãŽãªãºã ãå®è£
ããã°ãã³ãã¥ããã£ã«ãšã£ãŠéåžžã«åœ¹ç«ã¡ãŸãã
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BigBirdPegasus
## Overview
BigBird ã¢ãã«ã¯ãManzil ZaheerãGuru GuruganeshãAvinava DubeyãJoshua AinslieãChris AlbertiãSantiago OntanonãPhilip PhamãAnirudh RavulaãQifan WangãLi Yang ãã«ãã£ãŠ
[Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) ã§ææ¡ãããŸããã
BigBird ã¯ãBERT ãªã©ã® Transformer ããŒã¹ã®ã¢ãã«ãããã«é·ãã·ãŒã±ã³ã¹ãžæ¡åŒµãããŸã°ããªæ³šæïŒã¹ããŒã¹ ã¢ãã³ã·ã§ã³ïŒããŒã¹ã®ãã©ã³ã¹ãã©ãŒããŒã§ãã
ãŸã°ããªã¢ãã³ã·ã§ã³ã«å ããŠãBigBird ã¯å
¥åã·ãŒã±ã³ã¹ã«ã©ã³ãã ã¢ãã³ã·ã§ã³ã ãã§ãªãã°ããŒãã« ã¢ãã³ã·ã§ã³ãé©çšããŸããçè«çã«ã¯ã
ãŸã°ãã§ã°ããŒãã«ãã€ã©ã³ãã ãªæ³šæãé©çšãããšå®å
šãªæ³šæã«è¿äŒŒã§ããããšã瀺ãããŠãããé·ãã·ãŒã±ã³ã¹ã§ã¯èšç®å¹çã倧å¹
ã«åäžããŸãã
é·ãã³ã³ããã¹ããåŠçã§ããããã«ãªã£ãçµæãBigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãªé·ãææž NLP ã¿ã¹ã¯ã«ãããŠã
BERT ãŸã㯠RoBERTa ãšæ¯èŒããŠããã©ãŒãã³ã¹ãåäžããŸããã
*BERT ãªã©ã®ãã©ã³ã¹ãã©ãŒããŒããŒã¹ã®ã¢ãã«ã¯ãNLP ã§æãæåãã深局åŠç¿ã¢ãã«ã® 1 ã€ã§ãã
æ®å¿µãªããããããã®äžæ žçãªå¶éã® 1 ã€ã¯ãå®å
šãªæ³šæã¡ã«ããºã ã«ãããã·ãŒã±ã³ã¹é·ã«å¯Ÿããäºæ¬¡ã®äŸåæ§ïŒäž»ã«ã¡ã¢ãªã«é¢ããïŒã§ãã
ããã解決ããããã«ãBigBird ã¯ããã®äºæ¬¡ã®äŸåé¢ä¿ãç·åœ¢ã«åæžãããŸã°ããªæ³šæã¡ã«ããºã ãææ¡ããŸãã
BigBird ã¯ã·ãŒã±ã³ã¹é¢æ°ã®æ±çšè¿äŒŒåšã§ããããã¥ãŒãªã³ã°å®å
šã§ããããããäºæ¬¡ã®å®å
šæ³šæã¢ãã«ã®ãããã®ç¹æ§ãä¿æãããŸãã
ãã®éçšã§ãç§ãã¡ã®çè«åæã«ãããO(1) åã®ã°ããŒãã« ããŒã¯ã³ïŒCLS ãªã©ïŒãã¹ããŒã¹æ³šæã¡ã«ããºã ã®äžéšãšããŠ
ã·ãŒã±ã³ã¹å
šäœã«æ³šæãåããããšã®å©ç¹ã®äžéšãæããã«ãªããŸããææ¡ãããã¹ããŒã¹ ã¢ãã³ã·ã§ã³ã¯ã
åæ§ã®ããŒããŠã§ã¢ã䜿çšããŠã以åå¯èœã ã£ããã®ã® 8 åã®é·ãã®ã·ãŒã±ã³ã¹ãåŠçã§ããŸããããé·ãã³ã³ããã¹ããåŠçã§ããããã«ãªã£ãçµæã
BigBird ã¯ã質åå¿çãèŠçŽãªã©ã®ããŸããŸãª NLP ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžãããŸããããã«ã
ã²ããã¯ã¹ ããŒã¿ãžã®æ°ããã¢ããªã±ãŒã·ã§ã³ãææ¡ããŸãã*
## Usage tips

- BigBird ã®æ³šæãã©ã®ããã«æ©èœãããã«ã€ããŠã®è©³çŽ°ãªèª¬æã¯ã[ãã®ããã°æçš¿](https://huggingface.co/blog/big-bird) ãåç
§ããŠãã ããã
- BigBird ã«ã¯ã**original_full** ãš **block_sparse** ã® 2 ã€ã®å®è£
ãä»å±ããŠããŸããã·ãŒã±ã³ã¹é·ã 1024 æªæºã®å Žåã**block_sparse** ã䜿çšããŠãå©ç¹ããªãããã**original_full** ã®äœ¿çšããå§ãããŸãã
- ã³ãŒãã¯çŸåšã3 ãããã¯ãš 2 ã°ããŒãã« ãããã¯ã®ãŠã£ã³ã㊠ãµã€ãºã䜿çšããŠããŸãã
- ã·ãŒã±ã³ã¹ã®é·ãã¯ããã㯠ãµã€ãºã§å²ãåããå¿
èŠããããŸãã
- çŸåšã®å®è£
ã§ã¯ **ITC** ã®ã¿ããµããŒããããŠããŸãã
- çŸåšã®å®è£
ã§ã¯ **num_random_blocks = 0** ã¯ãµããŒããããŠããŸããã
- BigBirdPegasus 㯠[PegasusTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pegasus/tokenization_pegasus.py) ã䜿çšããŸãã
- BigBird ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ã§ãããããéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸããé·æææžã®èŠçŽã®ç°¡åãªäŸã以äžã«ç€ºããŸãã
å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/bigbird) ã«ãããŸãã
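以äžã¯ãBigBirdPegasus ã«ããèŠçŽã®æå°éã®ã¹ã±ããã§ãïŒHub äžã® `google/bigbird-pegasus-large-arxiv` ãã§ãã¯ãã€ã³ãã䜿çšã§ãããšä»®å®ããŠããŸãïŒã

```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

# èŠçŽãããé·ãç§åŠæè¡ããã¹ããããã«æž¡ããŸã
text = "Replace me by any long scientific text you'd like to summarize."
inputs = tokenizer(text, return_tensors="pt")

summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```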
## ããã¥ã¡ã³ã ãªãœãŒã¹
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
## BigBirdPegasusConfig
[[autodoc]] BigBirdPegasusConfig
- all
## BigBirdPegasusModel
[[autodoc]] BigBirdPegasusModel
- forward
## BigBirdPegasusForConditionalGeneration
[[autodoc]] BigBirdPegasusForConditionalGeneration
- forward
## BigBirdPegasusForSequenceClassification
[[autodoc]] BigBirdPegasusForSequenceClassification
- forward
## BigBirdPegasusForQuestionAnswering
[[autodoc]] BigBirdPegasusForQuestionAnswering
- forward
## BigBirdPegasusForCausalLM
[[autodoc]] BigBirdPegasusForCausalLM
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Big Transfer (BiT)
## Overview
BiT ã¢ãã«ã¯ãAlexander KolesnikovãLucas BeyerãXiaohua ZhaiãJoan PuigcerverãJessica YungãSylvain GellyãNeil Houlsby ã«ãã£ãŠ [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) ã§ææ¡ãããŸããã
BiT ã¯ã[ResNet](resnet) ã®ãããªã¢ãŒããã¯ã㣠(å
·äœçã«ã¯ ResNetv2) ã®äºåãã¬ãŒãã³ã°ãã¹ã±ãŒã«ã¢ããããããã®ç°¡åãªã¬ã·ãã§ãããã®æ¹æ³ã«ããã転移åŠç¿ã倧å¹
ã«æ¹åãããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ãããè¡šçŸã®è»¢éã«ããããµã³ãã«å¹çãåäžããèŠèŠçšã®ãã£ãŒã ãã¥ãŒã©ã« ãããã¯ãŒã¯ããã¬ãŒãã³ã°ããéã®ãã€ããŒãã©ã¡ãŒã¿ãŒèª¿æŽãç°¡çŽ åãããŸãã倧èŠæš¡ãªæåž«ããããŒã¿ã»ããã§ã®äºåãã¬ãŒãã³ã°ãšãã¿ãŒã²ãã ã¿ã¹ã¯ã§ã®ã¢ãã«ã®åŸ®èª¿æŽã®ãã©ãã€ã ãåæ€èšããŸããç§ãã¡ã¯ãäºåãã¬ãŒãã³ã°ãã¹ã±ãŒã«ã¢ããããBig Transfer (BiT) ãšåŒã¶ã·ã³ãã«ãªã¬ã·ããææ¡ããŸããããã€ãã®æ
éã«éžæãããã³ã³ããŒãã³ããçµã¿åãããã·ã³ãã«ãªãã¥ãŒãªã¹ãã£ãã¯ã䜿çšããŠè»¢éããããšã«ããã20 ãè¶
ããããŒã¿ã»ããã§åªããããã©ãŒãã³ã¹ãå®çŸããŸããBiT ã¯ãã¯ã©ã¹ããšã« 1 ã€ã®ãµã³ãã«ããåèš 100 äžã®ãµã³ãã«ãŸã§ãé©ãã»ã©åºç¯å²ã®ããŒã¿é åã«ããã£ãŠè¯å¥œã«ããã©ãŒãã³ã¹ãçºæ®ããŸããBiT ã¯ãILSVRC-2012 㧠87.5%ãCIFAR-10 㧠99.4%ã19 ã¿ã¹ã¯ã® Visual Task Adaptation Benchmark (VTAB) 㧠76.3% ã® top-1 粟床ãéæããŸãããå°èŠæš¡ãªããŒã¿ã»ããã§ã¯ãBiT 㯠ILSVRC-2012 (ã¯ã©ã¹ããã 10 äŸ) 㧠76.8%ãCIFAR-10 (ã¯ã©ã¹ããã 10 äŸ) 㧠97.0% ãéæããŸããããŸããé«ã転移æ§èœãå®çŸããäž»èŠãªæåã«ã€ããŠè©³çŽ°ãªåæãè¡ããŸãã*
## Usage tips
- BiT ã¢ãã«ã¯ãã¢ãŒããã¯ãã£ã®ç¹ã§ ResNetv2 ãšåçã§ããã次ã®ç¹ãç°ãªããŸã: 1) ãã¹ãŠã®ãããæ£èŠåå±€ã [ã°ã«ãŒãæ£èŠå](https://arxiv.org/abs/1803.08494) ã«çœ®ãæããããŸãã
2) ç³ã¿èŸŒã¿å±€ã«ã¯ [éã¿ã®æšæºå](https://arxiv.org/abs/1903.10520) ã䜿çšãããŸããèè
ãã¯ãäž¡è
ã®çµã¿åããã倧ããªããã ãµã€ãºã§ã®ãã¬ãŒãã³ã°ã«åœ¹ç«ã¡ã
転移åŠç¿ã«éèŠãªåœ±é¿ãäžããããšã瀺ããŠããŸãã
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸããã
å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/google-research/big_transfer) ã«ãããŸãã
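以äžã¯ãBiT ãç»ååé¡ã«äœ¿ãæå°éã®ã¹ã±ããã§ãïŒãã§ãã¯ãã€ã³ãå `google/bit-50` ã¯äžäŸã§ãïŒã

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor, BitForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/bit-50")
model = BitForImageClassification.from_pretrained("google/bit-50")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# ImageNet ã¯ã©ã¹ã®ãã¡æãã¹ã³ã¢ã®é«ãã©ãã«ã衚瀺ããŸã
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```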
## Resources
BiT ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
<PipelineTag pipeline="image-classification"/>
- [`BitForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ã§ãµããŒããããŠããŸãã
- åç
§: [ç»ååé¡ã¿ã¹ã¯ ã¬ã€ã](../tasks/image_classification)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## BitConfig
[[autodoc]] BitConfig
## BitImageProcessor
[[autodoc]] BitImageProcessor
- preprocess
## BitModel
[[autodoc]] BitModel
- forward
## BitForImageClassification
[[autodoc]] BitForImageClassification
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CamemBERT
## Overview
CamemBERT ã¢ãã«ã¯ãLouis MartinãBenjamin MullerãPedro Javier Ortiz SuárezãYoann DupontãLaurent RomaryãÃric Villemonte de la ClergerieãDjamé SeddahãBenoît Sagot ã«ãã£ãŠ
[CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) ã§ææ¡ãããŸããã
2019 幎ã«ãªãªãŒã¹ããã Facebook ã® RoBERTa ã¢ãã«ãããŒã¹ã«ããã¢ãã«ã§ã138GB ã®ãã©ã³ã¹èªããã¹ãã§ãã¬ãŒãã³ã°ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ãããèšèªã¢ãã«ã¯çŸåšãèªç¶èšèªåŠçã§åºãæ®åããŠããŸããæåã«ãããããããå©çšå¯èœãªã»ãšãã©ã®
ã¢ãã«ã¯ãè±èªã®ããŒã¿ããŸãã¯è€æ°èšèªã®ããŒã¿ã®é£çµã§ãã¬ãŒãã³ã°ãããŠããŸããããã«ããã
ãã®ãããªã¢ãã«ã®å®éã®äœ¿çšã¯ãè±èªãé€ããã¹ãŠã®èšèªã§éåžžã«éãããŠããŸãããã©ã³ã¹èªã«ã€ããŠãã®åé¡ã«å¯ŸåŠããããšãç®æããŠã
Bidirectional Encoders for Transformers (BERT) ã®ãã©ã³ã¹èªçã§ãã CamemBERT ããªãªãŒã¹ããŸãã
åè©ã¿ã°ä»ããäŸåé¢ä¿è§£æãåºæè¡šçŸèªèãèªç¶èšèªæšè«ãšããè€æ°ã®äžæµã¿ã¹ã¯ã«ãããŠã
å€èšèªã¢ãã«ãšæ¯èŒãã CamemBERT ã®ããã©ãŒãã³ã¹ã枬å®ããŸããCamemBERT ã¯ãæ€èšããã»ãšãã©ã®ã¿ã¹ã¯ã§
æå
端ã®æ§èœãåäžãããŸããç§ãã¡ã¯ããã©ã³ã¹èª NLP ã®ç 究ãšäžæµã¢ããªã±ãŒã·ã§ã³ãä¿é²ããããšãé¡ã£ãŠã
CamemBERT ã®äºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ãšåŸ®èª¿æŽæžã¿ã¢ãã«ãå
¬éããŸãã*
ãã®ã¢ãã«ã¯ [camembert](https://huggingface.co/camembert) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://camembert-model.fr/) ã«ãããŸãã
<Tip>
ãã®å®è£
ã¯RoBERTaãšåãã§ãã䜿çšäŸã«ã€ããŠã¯[RoBERTaã®ããã¥ã¡ã³ã](roberta)ãåç
§ããŠãã ããã
å
¥åãšåºåã«é¢ããæ
å ±ãšããŠã
</Tip>
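以äžã¯ããã€ãã©ã€ã³ã§ãã¹ã¯ãããåèªãè£å®ããæå°éã®ã¹ã±ããã§ãïŒãã§ãã¯ãã€ã³ãå `camembert-base` ã¯äžäŸã§ãïŒã

```python
from transformers import pipeline

# CamemBERT 㯠RoBERTa ãšåæ§ã« <mask> ããŒã¯ã³ã䜿çšããŸã
camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
for r in results:
    print(r["token_str"], r["score"])
```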
## Resources
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [ãã¹ã¯èšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
## CamembertConfig
[[autodoc]] CamembertConfig
## CamembertTokenizer
[[autodoc]] CamembertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CamembertTokenizerFast
[[autodoc]] CamembertTokenizerFast
<frameworkcontent>
<pt>
## CamembertModel
[[autodoc]] CamembertModel
## CamembertForCausalLM
[[autodoc]] CamembertForCausalLM
## CamembertForMaskedLM
[[autodoc]] CamembertForMaskedLM
## CamembertForSequenceClassification
[[autodoc]] CamembertForSequenceClassification
## CamembertForMultipleChoice
[[autodoc]] CamembertForMultipleChoice
## CamembertForTokenClassification
[[autodoc]] CamembertForTokenClassification
## CamembertForQuestionAnswering
[[autodoc]] CamembertForQuestionAnswering
</pt>
<tf>
## TFCamembertModel
[[autodoc]] TFCamembertModel
## TFCamembertForCausalLM
[[autodoc]] TFCamembertForCausalLM
## TFCamembertForMaskedLM
[[autodoc]] TFCamembertForMaskedLM
## TFCamembertForSequenceClassification
[[autodoc]] TFCamembertForSequenceClassification
## TFCamembertForMultipleChoice
[[autodoc]] TFCamembertForMultipleChoice
## TFCamembertForTokenClassification
[[autodoc]] TFCamembertForTokenClassification
## TFCamembertForQuestionAnswering
[[autodoc]] TFCamembertForQuestionAnswering
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Chinese-CLIP
## Overview
Chinese-CLIP ã¢ãã«ã¯ãAn YangãJunshu PanãJunyang LinãRui MenãYichang ZhangãJingren ZhouãChang Zhou ã«ãã£ãŠ [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) ã§ææ¡ãããŸããã
Chinese-CLIP ã¯ãäžåœèªã®ç»åãšããã¹ãã®ãã¢ã®å€§èŠæš¡ãªããŒã¿ã»ããã«å¯Ÿãã CLIP (Radford et al., 2021) ã®å®è£
ã§ããã¯ãã¹ã¢ãŒãã«æ€çŽ¢ãå®è¡ã§ããã»ãããŒãã·ã§ããç»ååé¡ããªãŒãã³ãã¡ã€ã³ãªããžã§ã¯ãæ€åºãªã©ã®ããžã§ã³ã¿ã¹ã¯ã®ããžã§ã³ããã¯ããŒã³ãšããŠãæ©èœããŸãããªãªãžãã«ã® Chinese-CLIP ã³ãŒã㯠[ãã¡ã](https://github.com/OFA-Sys/Chinese-CLIP) ã«ãããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*CLIP (Radford et al., 2021) ã®å€§æåã«ãããèŠèŠèšèªã®äºåãã¬ãŒãã³ã°ã®ããã®å¯Ÿç
§åŠç¿ã®ç 究ãšå¿çšãä¿é²ãããŸããããã®ç 究ã§ã¯ãã»ãšãã©ã®ããŒã¿ãå
¬éãããŠããããŒã¿ã»ããããååŸãããäžåœèªã®ç»åãšããã¹ãã®ãã¢ã®å€§èŠæš¡ãªããŒã¿ã»ãããæ§ç¯ããæ°ããããŒã¿ã»ããã§äžåœèªã® CLIP ã¢ãã«ãäºåãã¬ãŒãã³ã°ããŸãã7,700 äžãã 9 å 5,800 äžãã©ã¡ãŒã¿ã«ãããè€æ°ã®ãµã€ãºã® 5 ã€ã® Chinese-CLIP ã¢ãã«ãéçºããŸãããããã«ãã¢ãã«ã®ããã©ãŒãã³ã¹ãåäžãããããã«ãæåã«ç»åãšã³ã³ãŒããŒãããªãŒãºããŠã¢ãã«ããã¬ãŒãã³ã°ãã次ã«ãã¹ãŠã®ãã©ã¡ãŒã¿ãŒãæé©åããŠãã¬ãŒãã³ã°ãã 2 段éã®äºåãã¬ãŒãã³ã°æ¹æ³ãææ¡ããŸããå
æ¬çãªå®éšã«ãããChinese-CLIP ããŒãã·ã§ããåŠç¿ãšåŸ®èª¿æŽã®ã»ããã¢ãã㧠MUGEãFlickr30K-CNãããã³ COCO-CN äžã§æå
端ã®ããã©ãŒãã³ã¹ãéæããELEVATER ãã³ãããŒã¯ (Li et al., 2022) ã§ã®è©äŸ¡ã«åºã¥ããŒãã·ã§ããç»ååé¡ã«ãããŠã競åã®ããããã©ãŒãã³ã¹ãéæã§ããããšãå®èšŒããŠããŸããã³ãŒããäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ãããã¢ããªãªãŒã¹ãããŠããŸãã*
Chinese-CLIP ã¢ãã«ã¯ã[OFA-Sys](https://huggingface.co/OFA-Sys) ã«ãã£ãŠæäŸãããŸããã
## Usage example
以äžã®ã³ãŒã ã¹ããããã¯ãç»åãšããã¹ãã®ç¹åŸŽãšé¡äŒŒæ§ãèšç®ããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import ChineseCLIPProcessor, ChineseCLIPModel
>>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # Squirtle, Bulbasaur, Charmander, Pikachu in English
>>> texts = ["æ°å°ŒéŸ", "åŠèç§å", "å°ç«éŸ", "ç®å¡äž"]
>>> # compute image feature
>>> inputs = processor(images=image, return_tensors="pt")
>>> image_features = model.get_image_features(**inputs)
>>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
>>> # compute text features
>>> inputs = processor(text=texts, padding=True, return_tensors="pt")
>>> text_features = model.get_text_features(**inputs)
>>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
>>> # compute image-text similarity scores
>>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
```
çŸåšã次ã®ã¹ã±ãŒã«ã®äºåãã¬ãŒãã³ã°æžã¿ Chinese-CLIP ã¢ãã«ã ð€ Hub ã§å©çšå¯èœã§ãã
- [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16)
- [OFA-Sys/chinese-clip-vit-large-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14)
- [OFA-Sys/chinese-clip-vit-large-patch14-336px](https://huggingface.co/OFA-Sys/chinese-clip-vit-large-patch14-336px)
- [OFA-Sys/chinese-clip-vit-huge-patch14](https://huggingface.co/OFA-Sys/chinese-clip-vit-huge-patch14)
## ChineseCLIPConfig
[[autodoc]] ChineseCLIPConfig
- from_text_vision_configs
## ChineseCLIPTextConfig
[[autodoc]] ChineseCLIPTextConfig
## ChineseCLIPVisionConfig
[[autodoc]] ChineseCLIPVisionConfig
## ChineseCLIPImageProcessor
[[autodoc]] ChineseCLIPImageProcessor
- preprocess
## ChineseCLIPFeatureExtractor
[[autodoc]] ChineseCLIPFeatureExtractor
## ChineseCLIPProcessor
[[autodoc]] ChineseCLIPProcessor
## ChineseCLIPModel
[[autodoc]] ChineseCLIPModel
- forward
- get_text_features
- get_image_features
## ChineseCLIPTextModel
[[autodoc]] ChineseCLIPTextModel
- forward
## ChineseCLIPVisionModel
[[autodoc]] ChineseCLIPVisionModel
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIP
## Overview
CLIP ã¢ãã«ã¯ãAlec RadfordãJong Wook KimãChris HallacyãAditya RameshãGabriel GohãSandhini AgarwalãGirish SastryãAmanda AskellãPamela MishkinãJack ClarkãGretchen KruegerãIlya Sutskever ã«ãã£ãŠ [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) ã§ææ¡ãããŸãããCLIP (Contrastive Language-Image Pre-Training) ã¯ãããŸããŸãª (ç»åãããã¹ã) ãã¢ã§ãã¬ãŒãã³ã°ããããã¥ãŒã©ã« ãããã¯ãŒã¯ã§ããGPT-2 ã GPT-3 ã®ãŒãã·ã§ããæ©èœãšåæ§ã«ãã¿ã¹ã¯ã«å¯ŸããŠçŽæ¥æé©åããããšãªããäžããããç»åããæãé¢é£æ§ã®é«ãããã¹ã ã¹ãããããäºæž¬ããããã«ãèªç¶èšèªã§æ瀺ããããšãã§ããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æå
端ã®ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ã·ã¹ãã ã¯ããããããå®ãããããªããžã§ã¯ã ã«ããŽãªã®åºå®ã»ãããäºæž¬ããããã«ãã¬ãŒãã³ã°ãããŠããŸãããã®ãããªå¶éããã圢åŒã®æåž«ããã§ã¯ããã®ä»ã®èŠèŠçãªã³ã³ã»ãããæå®ããã«ã¯è¿œå ã®ã©ãã«ä»ãããŒã¿ãå¿
èŠãšãªããããäžè¬æ§ãšäœ¿ãããããå¶éãããŸããç»åã«é¢ããçã®ããã¹ãããçŽæ¥åŠç¿ããããšã¯ãããåºç¯ãªç£ç£æºãæŽ»çšã§ããæææªãªéžæè¢ã§ããã©ã®ãã£ãã·ã§ã³ãã©ã®ç»åã«å¯Ÿå¿ããããäºæž¬ãããšããåçŽãªäºåãã¬ãŒãã³ã° ã¿ã¹ã¯ããã€ã³ã¿ãŒãããããåéãã 4 åçµã® (ç»åãããã¹ã) ãã¢ã®ããŒã¿ã»ããäžã§ SOTA ç»åè¡šçŸããŒããããåŠç¿ããããã®å¹ççãã€ã¹ã±ãŒã©ãã«ãªæ¹æ³ã§ããããšã瀺ããŸããäºåãã¬ãŒãã³ã°åŸãèªç¶èšèªã䜿çšããŠåŠç¿æžã¿ã®èŠèŠçãªæŠå¿µãåç
§ (ãŸãã¯æ°ããæŠå¿µã説æ) ããäžæµã¿ã¹ã¯ãžã®ã¢ãã«ã®ãŒãã·ã§ãã転移ãå¯èœã«ããŸããOCRããããªå
ã®ã¢ã¯ã·ã§ã³èªèãå°ççäœçœ®ç¹å®ãããŸããŸãªçš®é¡ã®ãã现ãããªããžã§ã¯ãåé¡ãªã©ã30 ãè¶
ããæ¢åã®ã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ ããŒã¿ã»ããã§ã¿ã¹ã¯ããŸããã£ãŠãã³ãããŒã¯ãè¡ãããšã«ããããã®ã¢ãããŒãã®ããã©ãŒãã³ã¹ãè©äŸ¡ããŸãããã®ã¢ãã«ã¯ã»ãšãã©ã®ã¿ã¹ã¯ã«èªæã«ç§»è¡ã§ããããŒã¿ã»ããåºæã®ãã¬ãŒãã³ã°ãå¿
èŠãšããã«ãå®å
šã«æåž«ããã®ããŒã¹ã©ã€ã³ãšç«¶åããå Žåãå°ãªããããŸãããããšãã°ãImageNet ãŒãã·ã§ããã§ã¯ããã¬ãŒãã³ã°ã«äœ¿çšããã 128 äžã®ãã¬ãŒãã³ã° ãµã³ãã«ãäžå䜿çšããããšãªãããªãªãžãã«ã® ResNet-50 ã®ç²ŸåºŠã«å¹æµããŸããã³ãŒããšäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ã®éã¿ã¯ãã® https URL ã§ç¢ºèªã§ããŸãã*
ãã®ã¢ãã«ã¯ [valhalla](https://huggingface.co/valhalla) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/openai/CLIP) ã«ãããŸãã
## Usage tips and example
CLIP ã¯ããã«ãã¢ãŒãã«ãªããžã§ã³ããã³èšèªã¢ãã«ã§ããç»åãšããã¹ãã®é¡äŒŒåºŠèšç®ãããŒãã·ã§ããç»ååé¡ã«äœ¿çšã§ããŸããCLIP ã¯ãViT ã®ãããªãã©ã³ã¹ãã©ãŒããŒã䜿çšããŠèŠèŠçç¹åŸŽãååŸããå æèšèªã¢ãã«ã䜿çšããŠããã¹ãç¹åŸŽãååŸããŸãã次ã«ãããã¹ããšèŠèŠã®äž¡æ¹ã®ç¹åŸŽããåã次å
ã®æœåšç©ºéã«å°åœ±ããŸããå°åœ±ãããç»åãšããã¹ãã®ç¹åŸŽéã®å
ç©ããé¡äŒŒåºŠã¹ã³ã¢ãšããŠäœ¿çšãããŸãã

ç»åã Transformer ãšã³ã³ãŒãã«å
¥åããããã«ãåç»åã¯åºå®ãµã€ãºã®éè€ããªããããã®ã·ãŒã±ã³ã¹ã«åå²ããããããç·åœ¢ã«åã蟌ãŸããŸããç»åå
šäœã®è¡šçŸãšããŠæ©èœããããã« [CLS] ããŒã¯ã³ãè¿œå ãããŸããããã«ãèè
ãã¡ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãè¿œå ããçµæãšããŠåŸããããã¯ãã«ã®ã·ãŒã±ã³ã¹ãæšæºã® Transformer ãšã³ã³ãŒãã«å
¥åããŸãã[`CLIPImageProcessor`] ã¯ãã¢ãã«ã®ããã®ç»åã®ãµã€ãºå€æŽ (ãŸãã¯åã¹ã±ãŒã«) ãšæ£èŠåã«äœ¿çšã§ãã[`CLIPTokenizer`] ã¯ããã¹ãã®ãšã³ã³ãŒãã«äœ¿çšãããŸãã[`CLIPProcessor`] ã¯ã[`CLIPImageProcessor`] ãš [`CLIPTokenizer`] ãåäžã®ã€ã³ã¹ã¿ã³ã¹ã«ã©ãããããç»åã®æºåãšããã¹ãã®ãšã³ã³ãŒããäžåºŠã«è¡ããŸãã次ã®äŸã¯ã[`CLIPProcessor`] ãš [`CLIPModel`] ã䜿çšããŠç»åãšããã¹ãã®é¡äŒŒåºŠã¹ã³ã¢ãååŸããæ¹æ³ã瀺ããŠããŸãã
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
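ãŸããäžèšãšåçã®åŠçã¯ããŒãã·ã§ããç»ååé¡ãã€ãã©ã€ã³ã§ãç°¡æœã«èšè¿°ã§ããŸãã以äžã¯ç°¡åãªã¹ã±ããã§ãããåºåã¹ã³ã¢ã¯ç°å¢ã«ãã£ãŠå€å°ç°ãªãå¯èœæ§ããããŸãã

```python
>>> from transformers import pipeline

>>> # CLIP ãããŒãã·ã§ããç»ååé¡åšãšããŠäœ¿çš
>>> detector = pipeline(task="zero-shot-image-classification", model="openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> detector(url, candidate_labels=["a photo of a cat", "a photo of a dog"])
```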
## Resources
CLIP ã䜿ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
- [ãªã¢ãŒã ã»ã³ã·ã³ã° (è¡æ) ç»åãšãã£ãã·ã§ã³ã䜿çšãã CLIP ã®åŸ®èª¿æŽ](https://huggingface.co/blog/fine-tune-clip-rsicd): [RSICD ããŒã¿ã»ãã](https://github.com/201528014227051/RSICD_optimal) ã䜿çšã㊠CLIP ã埮調æŽããæ¹æ³ãšãããŒã¿æ¡åŒµã«ããããã©ãŒãã³ã¹ã®å€åã®æ¯èŒã«é¢ããããã°æçš¿ã
- äºåãã¬ãŒãã³ã°æžã¿ã®ããžã§ã³ ãšã³ã³ãŒããŒãšããã¹ã ãšã³ã³ãŒããŒãçšããŠã[COCO ããŒã¿ã»ãã](https://cocodataset.org/#home) 㧠CLIP ã®ãããªããžã§ã³ã»ããã¹ã ãã¥ã¢ã« ãšã³ã³ãŒã㌠ã¢ãã«ããã¬ãŒãã³ã°ããæ¹æ³ã瀺ã [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text)ã
<PipelineTag pipeline="image-to-text"/>
- ç»åãã£ãã·ã§ã³ã®ããŒã æ€çŽ¢ã«ããæšè«ã«äºåãã¬ãŒãã³ã°æžã¿ CLIP ã䜿çšããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing)ã ð
**ç»åæ€çŽ¢**
- äºåãã¬ãŒãã³ã°ããã CLIP ã䜿çšããç»åæ€çŽ¢ãš MRR (å¹³åçžäºã©ã³ã¯) ã¹ã³ã¢ã®èšç®ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1bLVwVKpAndpEDHqjzxVPr_9nGrSbuOQd?usp=sharing)ã ð
- ç»åã®ååŸãšé¡äŒŒæ§ã¹ã³ã¢ã®è¡šç€ºã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/deep-diver/image_search_with_natural_language/blob/main/notebooks/Image_Search_CLIP.ipynb)ã ð
- å€èšèª CLIP ã䜿çšããŠç»åãšããã¹ããåããã¯ãã«ç©ºéã«ãããã³ã°ããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/drive/1xO-wC_m_GNzgjIBQ4a4znvQkvDoZJvH4?usp=sharing)ã ð
- [Unsplash](https://unsplash.com) ããã³ [TMDB](https://www.themoviedb.org/) ããŒã¿ã»ããäžã§ã»ãã³ãã£ã㯠ç»åæ€çŽ¢ã«ãã CLIP ãå®è¡ããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/vivien000/clip-demo/blob/master/clip.ipynb#scrollTo=uzdFhRGqiWkR)ã ð
**説æå¯èœæ§**
- å
¥åããŒã¯ã³ãšç»åã»ã°ã¡ã³ãã®é¡äŒŒæ§ãèŠèŠåããæ¹æ³ã«é¢ãã [ããŒãããã¯](https://colab.research.google.com/github/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb)ã ð
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãã
ãªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## CLIPConfig
[[autodoc]] CLIPConfig
- from_text_vision_configs
## CLIPTextConfig
[[autodoc]] CLIPTextConfig
## CLIPVisionConfig
[[autodoc]] CLIPVisionConfig
## CLIPTokenizer
[[autodoc]] CLIPTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CLIPTokenizerFast
[[autodoc]] CLIPTokenizerFast
## CLIPImageProcessor
[[autodoc]] CLIPImageProcessor
- preprocess
## CLIPFeatureExtractor
[[autodoc]] CLIPFeatureExtractor
## CLIPProcessor
[[autodoc]] CLIPProcessor
<frameworkcontent>
<pt>
## CLIPModel
[[autodoc]] CLIPModel
- forward
- get_text_features
- get_image_features
## CLIPTextModel
[[autodoc]] CLIPTextModel
- forward
## CLIPTextModelWithProjection
[[autodoc]] CLIPTextModelWithProjection
- forward
## CLIPVisionModelWithProjection
[[autodoc]] CLIPVisionModelWithProjection
- forward
## CLIPVisionModel
[[autodoc]] CLIPVisionModel
- forward
</pt>
<tf>
## TFCLIPModel
[[autodoc]] TFCLIPModel
- call
- get_text_features
- get_image_features
## TFCLIPTextModel
[[autodoc]] TFCLIPTextModel
- call
## TFCLIPVisionModel
[[autodoc]] TFCLIPVisionModel
- call
</tf>
<jax>
## FlaxCLIPModel
[[autodoc]] FlaxCLIPModel
- __call__
- get_text_features
- get_image_features
## FlaxCLIPTextModel
[[autodoc]] FlaxCLIPTextModel
- __call__
## FlaxCLIPTextModelWithProjection
[[autodoc]] FlaxCLIPTextModelWithProjection
- __call__
## FlaxCLIPVisionModel
[[autodoc]] FlaxCLIPVisionModel
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERTweet
## Overview
BERTweet ã¢ãã«ã¯ãDat Quoc NguyenãThanh VuãAnh Tuan Nguyen ã«ãã£ãŠ [BERTweet: A pre-trained language model for English Tweets](https://www.aclweb.org/anthology/2020.emnlp-demos.2.pdf) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç§ãã¡ã¯ãè±èªãã€ãŒãçšã«åããŠå
¬éããã倧èŠæš¡ãªäºåãã¬ãŒãã³ã°æžã¿èšèªã¢ãã«ã§ãã BERTweet ã玹ä»ããŸããBERTweet ã¯ãBERT-base ãšåãã¢ãŒããã¯ã㣠(Devlin et al., 2019) ãæã¡ãRoBERTa ã®äºåãã¬ãŒãã³ã°æé (Liu et al., 2019) ã䜿çšããŠãã¬ãŒãã³ã°ãããŸããå®éšã§ã¯ãBERTweet ã匷åãªããŒã¹ã©ã€ã³ã§ãã RoBERTa-base ããã³ XLM-R-base (Conneau et al., 2020) ãäžåããåè©ã¿ã°ä»ããåºæè¡šçŸèªèãããã¹ãåé¡ãšãã 3 ã€ã®ãã€ãŒã NLP ã¿ã¹ã¯ã«ãããŠã以åã®æå
端ã¢ãã«ãããåªããããã©ãŒãã³ã¹ãéæããããšã瀺ããŸãã*
## Usage example
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
>>> # For transformers v4.x+:
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
>>> # For transformers v3.x:
>>> # tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
>>> # INPUT TWEET IS ALREADY NORMALIZED!
>>> line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
>>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = bertweet(input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
<Tip>
ãã®å®è£
ã¯ãããŒã¯ã³åæ¹æ³ãé€ã㊠BERT ãšåãã§ããAPI ãªãã¡ã¬ã³ã¹ã®è©³çŽ°ã«ã€ããŠã¯ã[BERT ã®ããã¥ã¡ã³ã](bert) ãåç
§ããŠãã ããã
</Tip>
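ãªããçã®ãã€ãŒããå
¥åããå Žåã¯ãããŒã¯ãã€ã¶ãŒã®æ£èŠåæ©èœãå©çšã§ããŸãã以äžã¯ `normalization=True` ã§ãŠãŒã¶ãŒåãš URL ã `@USER` / `HTTPURL` ã«çœ®æããããšãæ³å®ããç°¡åãªã¹ã±ããã§ã (å
·äœçãªæ£èŠåçµæã¯ããŒãžã§ã³ã«ãã£ãŠç°ãªãå¯èœæ§ããããŸã)ã

```python
>>> from transformers import AutoTokenizer

>>> # normalization=True ã§ãã€ãŒãæ£èŠå (@USER / HTTPURL ãžã®çœ®æãªã©) ãæå¹å
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True, use_fast=False)

>>> raw_line = "SC has first two presumptive cases of coronavirus, @SCDHEC confirms https://t.co/abc via @user"
>>> print(tokenizer.tokenize(raw_line))
```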
ãã®ã¢ãã«ã¯ [dqnguyen](https://huggingface.co/dqnguyen) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/VinAIResearch/BERTweet) ã«ãããŸãã
## BertweetTokenizer
[[autodoc]] BertweetTokenizer
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeBERTa
## Overview
DeBERTa ã¢ãã«ã¯ãPengcheng HeãXiaodong LiuãJianfeng GaoãWeizhu Chen ã«ãã£ãŠ [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) ã§ææ¡ãããŸãããããã¯ã2018 幎ã«ãªãªãŒã¹ããã Google ã® BERT ã¢ãã«ãšã2019 幎ã«ãªãªãŒã¹ããã Facebook ã® RoBERTa ã¢ãã«ã«åºã¥ããŠããŸããDeBERTa ã¯ããã€ãã解ã泚æ (disentangled attention) ãšã䜿çšããããŒã¿ã®ååã§åŒ·åããããã¹ã¯ ãã³ãŒã ãã¬ãŒãã³ã°ãåãã RoBERTa ã®äžã«æ§ç¯ãããŠããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*äºåãã¬ãŒãã³ã°ããããã¥ãŒã©ã«èšèªã¢ãã«ã®æè¿ã®é²æ©ã«ãããå€ãã®èªç¶èšèªåŠç (NLP) ã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžããŸããããã®è«æã§ã¯ã2 ã€ã®æ°ããæè¡ã䜿çšã㊠BERT ã¢ãã«ãš RoBERTa ã¢ãã«ãæ¹åããæ°ããã¢ãã« ã¢ãŒããã¯ã㣠DeBERTa (Decoding-enhanced BERT with disentangled attention) ãææ¡ããŸãã1 ã€ç®ã¯ããã€ãã解ã泚æã¡ã«ããºã ã§ããååèªã¯ããã®å
容ãšçžå¯Ÿäœçœ®ãããããšã³ã³ãŒããã 2 ã€ã®ãã¯ãã«ã§è¡šçŸãããåèªéã®æ³šæã®éã¿ã¯ãå
容ãšçžå¯Ÿäœçœ®ã«é¢ãããã€ã解é€è¡åã䜿çšããŠèšç®ãããŸãã2 ã€ç®ã«ãã¢ãã«ã®äºåãã¬ãŒãã³ã°ã§ãã¹ã¯ãããããŒã¯ã³ãäºæž¬ããããã«ãåºåãœããããã¯ã¹å±€ã匷åããããã¹ã¯ ãã³ãŒãã«çœ®ãæããŸãããããã® 2 ã€ã®ææ³ã«ãããã¢ãã«ã®äºåãã¬ãŒãã³ã°ã®å¹çãšäžæµã¿ã¹ã¯ã®ããã©ãŒãã³ã¹ã倧å¹
ã«åäžããããšã瀺ããŸããRoBERTa-Large ãšæ¯èŒãããšãååã®ãã¬ãŒãã³ã° ããŒã¿ã§ãã¬ãŒãã³ã°ããã DeBERTa ã¢ãã«ã¯ãå¹
åºã NLP ã¿ã¹ã¯ã§äžè²«ããŠåªããããã©ãŒãã³ã¹ã瀺ããMNLI 㧠+0.9% (90.2% 察 91.1%)ãSQuAD v2.0 㧠+2.3% (88.4% 察 90.7%)ãRACE 㧠+3.6% (83.2% 察 86.8%) ã®æ¹åãéæããŸãããDeBERTa ã®ã³ãŒããšäºåãã¬ãŒãã³ã°æžã¿ã¢ãã«ã¯ https://github.com/microsoft/DeBERTa ã§å
¬éãããŸãã*
ãã®ã¢ãã«ã¯ [DeBERTa](https://huggingface.co/DeBERTa) ã«ãã£ãŠå¯çš¿ãããŸããããã®ã¢ãã«ã® TF 2.0 å®è£
ã¯ã[kamalkraj](https://huggingface.co/kamalkraj) ã«ããå¯çš¿ã§ããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/microsoft/DeBERTa) ã«ãããŸãã
## Resources
DeBERTa ã䜿ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºããã) ãªãœãŒã¹ã®ãªã¹ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="text-classification"/>
- DeBERTa ã䜿çšã㊠[DeepSpeed ã䜿çšããŠå€§èŠæš¡ã¢ãã«ã®ãã¬ãŒãã³ã°ãå éãã](https://huggingface.co/blog/accelerate-deepspeed) æ¹æ³ã«é¢ããããã°æçš¿ã
- DeBERTa ã«ãã [æ©æ¢°åŠç¿ã«ããã¹ãŒããŒãã£ãŒãžããã顧客ãµãŒãã¹](https://huggingface.co/blog/supercharge-customer-service-with-machine-learning) ã«é¢ããããã°æçš¿ã
- [`DebertaForSequenceClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFDebertaForSequenceClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
<PipelineTag pipeline="token-classification" />
- [`DebertaForTokenClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFDebertaForTokenClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [ããŒã¯ã³åé¡](https://huggingface.co/course/chapter7/2?fw=pt) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã® [ãã€ããã¢ãšã³ã³ãŒãã£ã³ã°ã®ããŒã¯ã³å](https://huggingface.co/course/chapter6/5?fw=pt) ã®ç« ã
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
<PipelineTag pipeline="fill-mask"/>
- [`DebertaForMaskedLM`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFDebertaForMaskedLM`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/lang-modeling#run_mlmpy) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb) ã§ãµããŒããããŠããŸãã
- ð€ Hugging Face ã³ãŒã¹ã® [ãã¹ã¯ãããèšèªã¢ããªã³ã°](https://huggingface.co/course/chapter7/3?fw=pt) ã®ç« ã
- [ãã¹ã¯èšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
<PipelineTag pipeline="question-answering"/>
- [`DebertaForQuestionAnswering`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ã§ãµããŒããããŠããŸãã
- [`TFDebertaForQuestionAnswering`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb) ã§ãµããŒããããŠããŸãã
- [質ååç](https://huggingface.co/course/chapter7/7?fw=pt) ð€ ãã°ãã§ã€ã¹ã³ãŒã¹ã®ç« ã
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
## DebertaConfig
[[autodoc]] DebertaConfig
## DebertaTokenizer
[[autodoc]] DebertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## DebertaTokenizerFast
[[autodoc]] DebertaTokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
<frameworkcontent>
<pt>
## DebertaModel
[[autodoc]] DebertaModel
- forward
## DebertaPreTrainedModel
[[autodoc]] DebertaPreTrainedModel
## DebertaForMaskedLM
[[autodoc]] DebertaForMaskedLM
- forward
## DebertaForSequenceClassification
[[autodoc]] DebertaForSequenceClassification
- forward
## DebertaForTokenClassification
[[autodoc]] DebertaForTokenClassification
- forward
## DebertaForQuestionAnswering
[[autodoc]] DebertaForQuestionAnswering
- forward
</pt>
<tf>
## TFDebertaModel
[[autodoc]] TFDebertaModel
- call
## TFDebertaPreTrainedModel
[[autodoc]] TFDebertaPreTrainedModel
- call
## TFDebertaForMaskedLM
[[autodoc]] TFDebertaForMaskedLM
- call
## TFDebertaForSequenceClassification
[[autodoc]] TFDebertaForSequenceClassification
- call
## TFDebertaForTokenClassification
[[autodoc]] TFDebertaForTokenClassification
- call
## TFDebertaForQuestionAnswering
[[autodoc]] TFDebertaForQuestionAnswering
- call
</tf>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Blenderbot Small
[`BlenderbotSmallModel`] ãš [`BlenderbotSmallForConditionalGeneration`] ã¯ããã§ãã¯ãã€ã³ã [facebook/blenderbot-90M](https://huggingface.co/facebook/blenderbot-90M) ãšçµã¿åãããŠã®ã¿äœ¿çšãããŸããããã倧èŠæš¡ãª Blenderbot ãã§ãã¯ãã€ã³ãã«ã¯ã代ããã« [`BlenderbotModel`] ãš [`BlenderbotForConditionalGeneration`] ã䜿çšããŠãã ããã
## Overview
Blender ãã£ããããã ã¢ãã«ã¯ãStephen RollerãEmily DinanãNaman GoyalãDa JuãMary WilliamsonãYinghan LiuãJing XuãMyle OttãKurt ShusterãEric M. SmithãY-Lan BoureauãJason Weston ã«ãã£ãŠ 2020 幎 4 æ 30 æ¥ã« [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf) ã§ææ¡ãããŸããã
è«æã®èŠæšã¯æ¬¡ã®ãšããã§ãã
*ãªãŒãã³ãã¡ã€ã³ã®ãã£ãããããã®æ§ç¯ã¯ãæ©æ¢°åŠç¿ç 究ã«ãšã£ãŠé£ããåéã§ãããããŸã§ã®ç 究ã§ã¯ããã¥ãŒã©ã« ã¢ãã«ããã©ã¡ãŒã¿ãŒæ°ãšãã¬ãŒãã³ã° ããŒã¿ã®ãµã€ãºã§ã¹ã±ãŒãªã³ã°ãããšçµæãåäžããããšã瀺ãããŠããŸãããæ¬ç 究ã§ã¯ãé«æ§èœã®ãã£ãããããã«ã¯ä»ã®èŠçŽ ãéèŠã§ããããšã瀺ããŸããè¯ãäŒè©±ã«ã¯ãäŒè©±ã®å°é家ãã·ãŒã ã¬ã¹ã«èåãããå€ãã®ã¹ãã«ãå¿
èŠã§ãããªãã¡ãé
åçãªè©±é¡ãæäŸããçžæã®è©±ãèããäžè²«ããæ
床ãç¶æããªãããç¥èãå
±æãåæ§ãé©åã«è¡šçŸããããšã§ããé©åãªãã¬ãŒãã³ã° ããŒã¿ãšçææŠç¥ãäžããããã°ã倧èŠæš¡ã¢ãã«ããããã®ã¹ãã«ãåŠç¿ã§ããããšã瀺ããŸãã90Mã2.7Bã9.4B ãã©ã¡ãŒã¿ãŒã®ã¢ãã«ã§ãããã®ã¬ã·ãã®ããªã¢ã³ããæ§ç¯ããã¢ãã«ãšã³ãŒããå
¬éããŸãã人éã«ããè©äŸ¡ã§ã¯ãé
åãšäººéãããã®ãã«ãã¿ãŒã³æž¬å®ã«ãããŠãåœç€Ÿã®æè¯ã®ã¢ãã«ãæ¢åã®ã¢ãããŒããããåªããŠããããšã瀺ãããŠããŸããæåŸã«ãåœç€Ÿã¢ãã«ã®å€±æäºäŸã®åæã«ãã£ãŠããã®ç 究ã®éçã«ã€ããŠèª¬æããŸãã*
ãããïŒ

- Blenderbot Small ã¯çµ¶å¯Ÿäœçœ®åã蟌ã¿ãåããã¢ãã«ãªã®ã§ãéåžžã¯å
¥åãå·ŠåŽã§ã¯ãªãå³åŽã«ããã£ã³ã°ããããšããå§ãããŸãã
ãã®ã¢ãã«ã¯ã[patrickvonplaten](https://huggingface.co/patrickvonplaten) ã«ãã£ãŠæäŸãããŸãããèè
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/facebookresearch/ParlAI) ã«ãããŸãã
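以äžã¯ã[facebook/blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) ãã§ãã¯ãã€ã³ããçšãã察話å¿ççæã®ç°¡åãªã¹ã±ããã§ã (çæãããå¿çã¯çæèšå®ã«ãã£ãŠç°ãªãå¯èœæ§ããããŸã)ã

```python
>>> from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

>>> model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
>>> tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")

>>> utterance = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer([utterance], return_tensors="pt")
>>> reply_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```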
## Documentation resources
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [翻蚳ã¿ã¹ã¯ã¬ã€ã](../tasks/translation)
- [èŠçŽã¿ã¹ã¯ã¬ã€ã](../tasks/summarization)
## BlenderbotSmallConfig
[[autodoc]] BlenderbotSmallConfig
## BlenderbotSmallTokenizer
[[autodoc]] BlenderbotSmallTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## BlenderbotSmallTokenizerFast
[[autodoc]] BlenderbotSmallTokenizerFast
## BlenderbotSmallModel
[[autodoc]] BlenderbotSmallModel
- forward
## BlenderbotSmallForConditionalGeneration
[[autodoc]] BlenderbotSmallForConditionalGeneration
- forward
## BlenderbotSmallForCausalLM
[[autodoc]] BlenderbotSmallForCausalLM
- forward
## TFBlenderbotSmallModel
[[autodoc]] TFBlenderbotSmallModel
- call
## TFBlenderbotSmallForConditionalGeneration
[[autodoc]] TFBlenderbotSmallForConditionalGeneration
- call
## FlaxBlenderbotSmallModel
[[autodoc]] FlaxBlenderbotSmallModel
- __call__
- encode
- decode
## FlaxBlenderbotForConditionalGeneration
[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
- __call__
- encode
- decode
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Data2Vec
## Overview
Data2Vec ã¢ãã«ã¯ãAlexei BaevskiãWei-Ning HsuãQiantong XuãArun BabuãJiatao GuãMichael Auli ã«ãã£ãŠ [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/pdf/2202.03555) ã§ææ¡ãããŸããã
Data2Vec ã¯ãããã¹ããé³å£°ãç»åãªã©ã®ããŸããŸãªããŒã¿ ã¢ããªãã£ã«ãããèªå·±æåž«ããåŠç¿ã®ããã®çµ±äžãã¬ãŒã ã¯ãŒã¯ãææ¡ããŸãã
éèŠãªã®ã¯ãäºåãã¬ãŒãã³ã°ã®äºæž¬ã¿ãŒã²ããããã¢ããªãã£åºæã§ã³ã³ããã¹ãã«äŸåããªãã¿ãŒã²ããã§ã¯ãªããå
¥åã®ã³ã³ããã¹ãåãããæœåšè¡šçŸã§ããç¹ã§ãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*èªå·±æåž«ããåŠç¿ã®äžè¬çãªèãæ¹ã¯ã©ã®ã¢ããªãã£ã§ãåãã§ãããå®éã®ã¢ã«ãŽãªãºã ãšç®çã¯åäžã®ã¢ããªãã£ã念é ã«çœ®ããŠéçºããããããã¢ããªãã£ããšã«å€§ããç°ãªããŸããäžè¬çãªèªå·±æåž«ããåŠç¿ã«è¿ã¥ããããã«ãé³å£°ãNLPãã³ã³ãã¥ãŒã¿ãŒ ããžã§ã³ã®ããããã«å¯ŸããŠãåãåŠç¿æ¹æ³ã䜿çšãããã¬ãŒã ã¯ãŒã¯ã§ãã data2vec ã玹ä»ããŸããäžå¿ãšãªãã¢ã€ãã¢ã¯ãæšæºã® Transformer ã¢ãŒããã¯ãã£ã䜿çšããèªå·±èžçã»ããã¢ããã«ãããŠãå
¥åã®ãã¹ã¯ããããã¥ãŒã«åºã¥ããŠãå®å
šãªå
¥åããŒã¿ã®æœåšçãªè¡šçŸãäºæž¬ããããšã§ããåèªãèŠèŠçããŒã¯ã³ã人éã®é³å£°åäœãªã©ãæ¬è³ªçã«ããŒã«ã«ãªã¢ããªãã£åºæã®ã¿ãŒã²ãããäºæž¬ããã®ã§ã¯ãªããdata2vec ã¯å
¥åå
šäœã®æ
å ±ãå«ãæèåãããæœåšè¡šçŸãäºæž¬ããŸããé³å£°èªèãç»ååé¡ãèªç¶èšèªç解ã®äž»èŠãªãã³ãããŒã¯ã«é¢ããå®éšã¯ãæå
端æè¡ã«å¹æµããããäž»æµã®ã¢ãããŒããäžåãããã©ãŒãã³ã¹ãå®èšŒããŠããŸããã¢ãã«ãšã³ãŒãã¯ãwww.github.com/pytorch/fairseq/tree/master/examples/data2vec ã§å
¥æã§ããŸãã*
ãã®ã¢ãã«ã¯ã[edugp](https://huggingface.co/edugp) ããã³ [patrickvonplaten](https://huggingface.co/patrickvonplaten) ã«ãã£ãŠæäŸãããŸããã[sayakpaul](https://github.com/sayakpaul) ãš [Rocketknight1](https://github.com/Rocketknight1) ã¯ãData2Vec ã® TensorFlow ããŒãžã§ã³ãæäŸããŸããã

NLP ããã³é³å£°çšã®å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/pytorch/fairseq/tree/main/examples/data2vec) ã«ãããŸããããžã§ã³çšã®å
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/facebookresearch/data2vec_vision/tree/main/beit) ã«ãããŸãã
## Usage tips
- Data2VecAudioãData2VecTextãããã³ Data2VecVision ã¯ãã¹ãŠãåãèªå·±æåž«ããåŠç¿æ¹æ³ã䜿çšããŠãã¬ãŒãã³ã°ãããŠããŸãã
- Data2VecAudio ã®å ŽåãååŠçã¯ç¹åŸŽæœåºãå«ã㊠[`Wav2Vec2Model`] ãšåãã§ãã
- Data2VecText ã®å ŽåãååŠçã¯ããŒã¯ã³åãå«ã㊠[`RobertaModel`] ãšåãã§ãã
- Data2VecVision ã®å ŽåãååŠçã¯ç¹åŸŽæœåºãå«ã㊠[`BeitModel`] ãšåãã§ãã
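以äžã¯ãData2VecText ãç¹åŸŽæœåºåšãšããŠäœ¿çšããç°¡åãªã¹ã±ããã§ã (ãã§ãã¯ãã€ã³ã `facebook/data2vec-text-base` ã®äœ¿çšãæ³å®ããŠããŸã)ã

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = AutoModel.from_pretrained("facebook/data2vec-text-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state  # (ãããããŒã¯ã³æ°ãé ãå±€ã®æ¬¡å
)
```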
## Resources
Data2Vec ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºããã) ãªãœãŒã¹ã®ãªã¹ãã
<PipelineTag pipeline="image-classification"/>
- [`Data2VecVisionForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ã§ãµããŒããããŠããŸãã
- ã«ã¹ã¿ã ããŒã¿ã»ãã㧠[`TFData2VecVisionForImageClassification`] ã埮調æŽããã«ã¯ã[ãã®ããŒãããã¯](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb) ãåç
§ããŠãã ããã
**Data2VecText ããã¥ã¡ã³ã ãªãœãŒã¹**
- [ããã¹ãåé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/sequence_classification)
- [ããŒã¯ã³åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/token_classification)
- [質ååçã¿ã¹ã¯ ã¬ã€ã](../tasks/question_answering)
- [å æèšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/language_modeling)
- [ãã¹ã¯èšèªã¢ããªã³ã° ã¿ã¹ã¯ ã¬ã€ã](../tasks/masked_language_modeling)
- [å€è¢éžæã¿ã¹ã¯ ã¬ã€ã](../tasks/multiple_choice)
**Data2VecAudio ããã¥ã¡ã³ã ãªãœãŒã¹**
- [é³å£°åé¡ã¿ã¹ã¯ã¬ã€ã](../tasks/audio_classification)
- [èªåé³å£°èªèã¿ã¹ã¯ã¬ã€ã](../tasks/asr)
**Data2VecVision ããã¥ã¡ã³ã ãªãœãŒã¹**
- [ç»ååé¡](../tasks/image_classification)
- [ã»ãã³ãã£ã㯠ã»ã°ã¡ã³ããŒã·ã§ã³](../tasks/semantic_segmentation)
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## Data2VecTextConfig
[[autodoc]] Data2VecTextConfig
## Data2VecAudioConfig
[[autodoc]] Data2VecAudioConfig
## Data2VecVisionConfig
[[autodoc]] Data2VecVisionConfig
<frameworkcontent>
<pt>
## Data2VecAudioModel
[[autodoc]] Data2VecAudioModel
- forward
## Data2VecAudioForAudioFrameClassification
[[autodoc]] Data2VecAudioForAudioFrameClassification
- forward
## Data2VecAudioForCTC
[[autodoc]] Data2VecAudioForCTC
- forward
## Data2VecAudioForSequenceClassification
[[autodoc]] Data2VecAudioForSequenceClassification
- forward
## Data2VecAudioForXVector
[[autodoc]] Data2VecAudioForXVector
- forward
## Data2VecTextModel
[[autodoc]] Data2VecTextModel
- forward
## Data2VecTextForCausalLM
[[autodoc]] Data2VecTextForCausalLM
- forward
## Data2VecTextForMaskedLM
[[autodoc]] Data2VecTextForMaskedLM
- forward
## Data2VecTextForSequenceClassification
[[autodoc]] Data2VecTextForSequenceClassification
- forward
## Data2VecTextForMultipleChoice
[[autodoc]] Data2VecTextForMultipleChoice
- forward
## Data2VecTextForTokenClassification
[[autodoc]] Data2VecTextForTokenClassification
- forward
## Data2VecTextForQuestionAnswering
[[autodoc]] Data2VecTextForQuestionAnswering
- forward
## Data2VecVisionModel
[[autodoc]] Data2VecVisionModel
- forward
## Data2VecVisionForImageClassification
[[autodoc]] Data2VecVisionForImageClassification
- forward
## Data2VecVisionForSemanticSegmentation
[[autodoc]] Data2VecVisionForSemanticSegmentation
- forward
</pt>
<tf>
## TFData2VecVisionModel
[[autodoc]] TFData2VecVisionModel
- call
## TFData2VecVisionForImageClassification
[[autodoc]] TFData2VecVisionForImageClassification
- call
## TFData2VecVisionForSemanticSegmentation
[[autodoc]] TFData2VecVisionForSemanticSegmentation
- call
</tf>
</frameworkcontent>
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BertJapanese
## Overview
BERT ã¢ãã«ã¯æ¥æ¬èªããã¹ãã§ãã¬ãŒãã³ã°ãããŸããã
2 ã€ã®ç°ãªãããŒã¯ã³åæ¹æ³ãåããã¢ãã«ããããŸãã
- MeCab ãš WordPiece ã䜿çšããŠããŒã¯ã³åããã¢ãã«ãããã«ã¯ã[MeCab](https://taku910.github.io/mecab/) ã®ã©ãããŒã§ãã [fugashi](https://github.com/polm/fugashi) ãšããè¿œå ã®äŸåé¢ä¿ãå¿
èŠã§ãã
- æåã«ããŒã¯ã³åããã¢ãã«ã

*MecabTokenizer* ã䜿çšããã«ã¯ã`pip install transformers["ja"]` (ãœãŒã¹ããã€ã³ã¹ããŒã«ããå Žå㯠`pip install -e .["ja"]`) ãå®è¡ããŠäŸåé¢ä¿ãã€ã³ã¹ããŒã«ããå¿
èŠããããŸãã

詳现㯠[cl-tohoku ãªããžããª](https://github.com/cl-tohoku/bert-japanese) ãåç
§ããŠãã ããã
MeCab ããã³ WordPiece ããŒã¯ã³åã§ã¢ãã«ã䜿çšããäŸ:
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
>>> ## Input Japanese Text
>>> line = "åŸèŒ©ã¯ç«ã§ããã"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] åŸèŒ© ã¯ ç« ã§ ãã ã [SEP]
>>> outputs = bertjapanese(**inputs)
```
æåããŒã¯ã³åã䜿çšããã¢ãã«ã®äœ¿çšäŸ:
```python
>>> bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
>>> ## Input Japanese Text
>>> line = "åŸèŒ©ã¯ç«ã§ããã"
>>> inputs = tokenizer(line, return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] åŸ èŒ© ã¯ ç« ã§ ã ã ã [SEP]
>>> outputs = bertjapanese(**inputs)
```
<Tip>
- ãã®å®è£
ã¯ããŒã¯ã³åæ¹æ³ãé€ã㊠BERT ãšåãã§ãããã®ä»ã®äœ¿çšäŸã«ã€ããŠã¯ã[BERT ã®ããã¥ã¡ã³ã](bert) ãåç
§ããŠãã ããã
</Tip>
ãã®ã¢ãã«ã¯ [cl-tohoku](https://huggingface.co/cl-tohoku) ããæäŸãããŸããã
## BertJapaneseTokenizer
[[autodoc]] BertJapaneseTokenizer
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIPSeg
## Overview
CLIPSeg ã¢ãã«ã¯ãTimo LÃŒddecke ãš Alexander Ecker ã«ãã£ãŠ [Image Segmentation using Text and Image Prompts](https://arxiv.org/abs/2112.10003) ã§ææ¡ãããŸãããCLIPSeg ã¯ããŒãã·ã§ããããã³ã¯ã³ã·ã§ããç»åã»ã°ã¡ã³ããŒã·ã§ã³ã®ããã«ãåçµããã [CLIP](clip) ã¢ãã«ã®äžã«æå°éã®ãã³ãŒããè¿œå ããŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç»åã»ã°ã¡ã³ããŒã·ã§ã³ã¯éåžžãåºå®ããããªããžã§ã¯ã ã¯ã©ã¹ã®ã»ããã«å¯ŸããŠã¢ãã«ããã¬ãŒãã³ã°ããããšã§è§£æ±ºãããŸããåŸããè¿œå ã®ã¯ã©ã¹ãããè€éãªã¯ãšãªãçµã¿èŸŒãã«ã¯ããããã®è¡šçŸãå«ãããŒã¿ã»ããã§ã¢ãã«ãåãã¬ãŒãã³ã°ããå¿
èŠãããããã³ã¹ãããããããŸããããã§ç§ãã¡ã¯ããã¹ãæã«ä»»æã®ããã³ããã«åºã¥ããŠç»åã»ã°ã¡ã³ããŒã·ã§ã³ãçæã§ããã·ã¹ãã ãææ¡ããŸããããã³ããã«ã¯ããã¹ããŸãã¯ç»åã䜿çšã§ããŸãããã®ã¢ãããŒãã«ãããããããç°ãªã課é¡ã䌎ã 3 ã€ã®äžè¬çãªã»ã°ã¡ã³ããŒã·ã§ã³ ã¿ã¹ã¯ (åç
§è¡šçŸã»ã°ã¡ã³ããŒã·ã§ã³ããŒãã·ã§ãã ã»ã°ã¡ã³ããŒã·ã§ã³ãã¯ã³ã·ã§ãã ã»ã°ã¡ã³ããŒã·ã§ã³) ã«å¯Ÿãã (1 åã ããã¬ãŒãã³ã°ãã) çµ±äžã¢ãã«ãäœæã§ããŸããCLIP ã¢ãã«ãããã¯ããŒã³ãšããŠæ§ç¯ããé«å¯åºŠäºæž¬ãå¯èœã«ãããã©ã³ã¹ãã©ãŒã㌠ããŒã¹ã®ãã³ãŒãã§æ¡åŒµããŸããPhraseCut ããŒã¿ã»ããã®æ¡åŒµããŒãžã§ã³ã§ãã¬ãŒãã³ã°ããåŸãç§ãã¡ã®ã·ã¹ãã ã¯ãããªãŒããã¹ãã®ããã³ããããŸãã¯ã¯ãšãªãè¡šãè¿œå ã®ç»åã«åºã¥ããŠãç»åã®ãã€ããª ã»ã°ã¡ã³ããŒã·ã§ã³ ããããçæããŸããåŸè
ã®ç»åããŒã¹ã®ããã³ããã«ã€ããŠã¯ãããŸããŸãªããªãšãŒã·ã§ã³ã詳现ã«åæããŸãããã®æ°ãããã€ããªããå
¥åã«ãããåè¿°ã® 3 ã€ã®ã»ã°ã¡ã³ããŒã·ã§ã³ ã¿ã¹ã¯ã ãã§ãªããããã¹ããŸãã¯ç»åã®ã¯ãšãªãšããŠå®åŒåã§ããä»»æã®ãã€ã㪠ã»ã°ã¡ã³ããŒã·ã§ã³ ã¿ã¹ã¯ãžã®åçãªé©å¿ãå¯èœã«ãªããŸããæåŸã«ãç§ãã¡ã®ã·ã¹ãã ã¯ãã¢ãã©ãŒãã³ã¹ãããããã£ãå«ãäžè¬åãããã¯ãšãªã«ãããŸãé©å¿ããããšãããããŸããã*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clipseg_architecture.png"
alt="æç»" width="600"/>
<small> CLIPSeg ã®æŠèŠã<a href="https://arxiv.org/abs/2112.10003">å
ã®è«æ</a>ããæç²ã</small>
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/timojl/clipseg) ã«ãããŸãã
## Usage tips
- [`CLIPSegForImageSegmentation`] ã¯ã[`CLIPSegModel`] ã®äžã«ãã³ãŒããè¿œå ããŸããåŸè
㯠[`CLIPModel`] ãšåãã§ãã
- [`CLIPSegForImageSegmentation`] ã¯ããã¹ãæã«ä»»æã®ããã³ããã«åºã¥ããŠç»åã»ã°ã¡ã³ããŒã·ã§ã³ãçæã§ããŸããããã³ããã«ã¯ãããã¹ã (`input_ids` ãšããŠã¢ãã«ã«æäŸ)ãç»å (`conditional_pixel_values` ãšããŠã¢ãã«ã«æäŸ)ããŸãã¯ã«ã¹ã¿ã ã®æ¡ä»¶ä»ãåã蟌㿠(`conditional_embeddings` ãšããŠã¢ãã«ã«æäŸ) ã䜿çšã§ããŸãã
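以äžã¯ãããã¹ã ããã³ããã«ãããŒãã·ã§ãã ã»ã°ã¡ã³ããŒã·ã§ã³ãè¡ãç°¡åãªã¹ã±ããã§ã (ãã§ãã¯ãã€ã³ã `CIDAS/clipseg-rd64-refined` ã®äœ¿çšãæ³å®ããŠããŸã)ã

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

>>> processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
>>> model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["a cat", "a remote control"]

>>> # ããã³ããããšã«åãç»åãæž¡ã
>>> inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> logits = outputs.logits  # ããã³ããããšã®ã»ã°ã¡ã³ããŒã·ã§ã³ ããžãã
```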
## Resources
CLIPSeg ã®äœ¿çšãéå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ããããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
<PipelineTag pipeline="image-segmentation"/>
- [CLIPSeg ã䜿çšãããŒãã·ã§ããç»åã»ã°ã¡ã³ããŒã·ã§ã³](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb) ã説æããããŒãããã¯ã
## CLIPSegConfig
[[autodoc]] CLIPSegConfig
- from_text_vision_configs
## CLIPSegTextConfig
[[autodoc]] CLIPSegTextConfig
## CLIPSegVisionConfig
[[autodoc]] CLIPSegVisionConfig
## CLIPSegProcessor
[[autodoc]] CLIPSegProcessor
## CLIPSegModel
[[autodoc]] CLIPSegModel
- forward
- get_text_features
- get_image_features
## CLIPSegTextModel
[[autodoc]] CLIPSegTextModel
- forward
## CLIPSegVisionModel
[[autodoc]] CLIPSegVisionModel
- forward
## CLIPSegForImageSegmentation
[[autodoc]] CLIPSegForImageSegmentation
- forward
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BARTpho
## Overview
BARTpho ã¢ãã«ã¯ãNguyen Luong TranãDuong Minh LeãDat Quoc Nguyen ã«ãã£ãŠ [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnam](https://arxiv.org/abs/2109.09701) ã§ææ¡ãããŸããã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*BARTpho ã«ã¯ãBARTpho_word ãš BARTpho_syllable ã® 2 ã€ã®ããŒãžã§ã³ããããããã¯ãããã èªçšã«äºåãã¬ãŒãã³ã°ãããåã®å
¬é倧èŠæš¡åäžèšèªã·ãŒã±ã³ã¹ããŒã·ãŒã±ã³ã¹ ã¢ãã«ã§ããBARTpho ã¯ãã·ãŒã±ã³ã¹éãã€ãºé€å»ã¢ãã« BART ã®ã倧èŠæš¡ (large)ãã¢ãŒããã¯ãã£ãšäºåãã¬ãŒãã³ã° ã¹ããŒã ãæ¡çšããŠãããçæç³» NLP ã¿ã¹ã¯ã«ç¹ã«é©ããŠããŸãããããã èªããã¹ãèŠçŽã®äžæµã¿ã¹ã¯ã§ã®å®éšã§ã¯ãèªåè©äŸ¡ãšäººéã«ããè©äŸ¡ã®äž¡æ¹ã«ãããŠãBARTpho ã匷åãªããŒã¹ã©ã€ã³ã§ãã mBART ãäžåããæå
端ã®æ§èœãæŽæ°ããããšã瀺ãããŸãããçæçãªãããã èª NLP ã¿ã¹ã¯ã®ä»åŸã®ç 究ãšå¿çšãä¿é²ããããã«ãBARTpho ãå
¬éããŸãã*
ãã®ã¢ãã«ã¯ [dqnguyen](https://huggingface.co/dqnguyen) ã«ãã£ãŠæäŸãããŸãããå
ã®ã³ãŒã㯠[ãã¡ã](https://github.com/VinAIResearch/BARTpho) ã«ãããŸãã
## Usage example
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
>>> line = "Chúng tÎi là những nghiên cứu viên."
>>> input_ids = tokenizer(line, return_tensors="pt")
>>> with torch.no_grad():
... features = bartpho(**input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> from transformers import TFAutoModel
>>> bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
>>> input_ids = tokenizer(line, return_tensors="tf")
>>> features = bartpho(**input_ids)
```
## Usage tips
- mBART ãšåæ§ã«ãBARTpho 㯠BART ã®ã倧èŠæš¡ãªãã¢ãŒããã¯ãã£ã䜿çšãããšã³ã³ãŒããšãã³ãŒãã®äž¡æ¹ã®äžã«è¿œå ã®å±€æ£èŠåå±€ãåããŠããŸãããããã£ãŠã[BART ã®ããã¥ã¡ã³ã](bart) ã®äœ¿çšäŸã BARTpho ã«é©çšããå Žåã¯ãBART ã«ç¹åããã¯ã©ã¹ã mBART ã«ç¹åãã察å¿ããã¯ã©ã¹ã«çœ®ãæããŠèª¿æŽããå¿
èŠããããŸããäŸãã°ïŒ
```python
>>> from transformers import MBartForConditionalGeneration
>>> bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
>>> TXT = "Chúng tÎi là <mask> nghiên cứu viên."
>>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
>>> logits = bartpho(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> print(tokenizer.decode(predictions).split())
```
- ãã®å®è£
ã¯ããŒã¯ã³åã®ã¿ãç®çãšããŠããŸãã`monolingual_vocab_file` ã¯ãå€èšèª XLM-RoBERTa ããå©çšã§ããäºåãã¬ãŒãã³ã°æžã¿ SentencePiece ã¢ãã« `vocab_file` ããæœåºãããããã èªã«ç¹åããåã§æ§æãããŠããŸããä»ã®èšèªãããã®äºåãã¬ãŒãã³ã°æžã¿å€èšèª SentencePiece ã¢ãã« `vocab_file` ããµãã¯ãŒãåå²ã«äœ¿çšããå Žåã¯ãç¬èªã®èšèªã«ç¹åãã `monolingual_vocab_file` ãçšæãã㰠BartphoTokenizer ãåå©çšã§ããŸãã
## BartphoTokenizer
[[autodoc]] BartphoTokenizer
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CodeLlama
## Overview
Code Llama ã¢ãã«ã¯ã«ãã£ãŠ [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) ã§ææ¡ãããŸããã Baptiste RoziÚre, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*ç§ãã¡ã¯ Code Llama ããªãªãŒã¹ããŸãããã㯠Llama 2 ã«åºã¥ãã³ãŒãã®å€§èŠæš¡èšèªã¢ãã« ãã¡ããªã§ããããªãŒãã³ ã¢ãã«ã®äžã§æå
端ã®ããã©ãŒãã³ã¹ãåã蟌ã¿æ©èœã倧èŠæš¡ãªå
¥åã³ã³ããã¹ãã®ãµããŒããããã°ã©ãã³ã° ã¿ã¹ã¯ã®ãŒãã·ã§ããåœä»€è¿œåŸæ©èœãæäŸããŸããå¹
åºãã¢ããªã±ãŒã·ã§ã³ãã«ããŒããããã«ãåºç€ã¢ãã« (Code Llama)ãPython ç¹å (Code Llama - Python)ãåœä»€è¿œåŸã¢ãã« (Code Llama - Instruct) ãšããè€æ°ã®ãã¬ãŒããŒãããããã 7Bã13Bã34B ãã©ã¡ãŒã¿ãŒã§æäŸããŠããŸãããã¹ãŠã®ã¢ãã«ã¯ 16,000 ããŒã¯ã³ã®ã·ãŒã±ã³ã¹ã§ãã¬ãŒãã³ã°ãããæ倧 100,000 ããŒã¯ã³ã®å
¥åã§ãæ¹åãèŠãããŸãã7B ããã³ 13B ã® Code Llama ãš Code Llama - Instruct ããªã¢ã³ãã¯ãåšå²ã®ã³ã³ãã³ãã«åºã¥ãåã蟌ã¿ããµããŒãããŸããCode Llama ã¯ãããã€ãã®ã³ãŒã ãã³ãããŒã¯ã§ãªãŒãã³ ã¢ãã«ã®äžã§æå
端ã®ããã©ãŒãã³ã¹ã«éããHumanEval ãš MBPP ã§ããããæ倧 53% ãš 55% ã®ã¹ã³ã¢ãç²åŸããŸãããç¹ã«ãCode Llama - Python 7B 㯠HumanEval ããã³ MBPP äžã§ Llama 2 70B ãããåªããããã©ãŒãã³ã¹ã瀺ãããã¹ãŠã®ã¢ãã«ã¯ MultiPL-E äžã§å
¬éãããŠããä»ã®ãã¹ãŠã®ã¢ãã«ãäžåããŸããç§ãã¡ã¯ãç 究ãšåæ¥å©çšã®äž¡æ¹ãèš±å¯ããå¯å®¹ãªã©ã€ã»ã³ã¹ã«åºã¥ã㊠Code Llama ããªãªãŒã¹ããŠããŸãã*
ãã¹ãŠã® Code Llama ã¢ãã«ã®ãã§ãã¯ãã€ã³ã㯠[ãã¡ã](https://huggingface.co/models?search=code_llama) ã§ç¢ºèªã§ããæ£åŒã«ãªãªãŒã¹ããããã§ãã¯ãã€ã³ã㯠[codellama org](https://huggingface.co/codellama) ã«ãããŸãã
ãã®ã¢ãã«ã¯ [ArthurZucker](https://huggingface.co/ArthurZ) ã«ãã£ãŠæäŸãããŸãããèè
ã®ãªãªãžãã«ã®ã³ãŒã㯠[ãã¡ã](https://github.com/facebookresearch/llama) ã«ãããŸãã
## Usage tips and examples
<Tip warning={true}>
Code Llama ã®ããŒã¹ãšãªã `Llama2` ãã¡ããªãŒã®ã¢ãã«ã¯ `bfloat16` ã䜿çšããŠãã¬ãŒãã³ã°ãããŸããããªãªãžãã«ã®æšè«ã§ã¯ `float16` ã䜿çšããŸããããŸããŸãªç²ŸåºŠãèŠãŠã¿ãŸãããã

* `float32`: ã¢ãã«ã®åæåã«é¢ãã PyTorch ã®æ
£äŸã§ã¯ãã¢ãã«ã®éã¿ãã©ã® `dtype` ã§æ ŒçŽãããŠãããã«é¢ä¿ãªããã¢ãã«ã¯ `float32` ã§ããŒããããŸããããã¯ãPyTorch ãšã®äžè²«æ§ãä¿ã€ããã« `transformers` ãåŸã£ãŠããèŠåã§ãããããã©ã«ãã§éžæãããŸããã¹ãã¬ãŒãžã®éã¿ã®åã®ãŸãŸãã§ãã¯ãã€ã³ããããŒãããã«ã¯ã`AutoModel` API 㧠`torch_dtype="auto"` ãæå®ããå¿
èŠããããŸããäŸ: `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto")`ã
* `bfloat16`: ã³ãŒã Llama ã¯ãã®ç²ŸåºŠã§ãã¬ãŒãã³ã°ãããŠããããããããªããã¬ãŒãã³ã°ã埮調æŽã«äœ¿çšããããšããå§ãããŸãã
* `float16`: ãã®ç²ŸåºŠã䜿çšããŠæšè«ãå®è¡ããããšããå§ãããŸããé垞㯠`bfloat16` ããé«éã§ãããè©äŸ¡ã¡ããªã¯ã¹ã«ã¯ `bfloat16` ãšæ¯ã¹ãŠæãããªäœäžãèŠãããªãããã§ãã bfloat16 ã䜿çšããŠæšè«ãå®è¡ããããšãã§ããŸãã埮調æŽåŸãfloat16 ãš bfloat16 ã®äž¡æ¹ã§æšè«çµæã確èªããããšããå§ãããŸãã
äžã§è¿°ã¹ãããã«ãã¢ãã«ãåæåãããšãã« `torch_dtype="auto"` ã䜿çšããªãéããã¹ãã¬ãŒãžã®éã¿ã® `dtype` ã¯ã»ãšãã©ç¡é¢ä¿ã§ãããã®çç±ã¯ãã¢ãã«ãæåã«ããŠã³ããŒããã (ãªã³ã©ã€ã³ã®ãã§ãã¯ãã€ã³ãã® `dtype` ã䜿çš)ã次㫠`torch` ã®ããã©ã«ãã® `dtype` ã«ãã£ã¹ããããããã§ã (`torch.float32` ã«ãªããŸã)ãæå®ããã `torch_dtype` ãããå Žåã¯ã代ããã«ããã䜿çšãããŸãã
</Tip>
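äŸãã°ã`bfloat16` ã§ãã§ãã¯ãã€ã³ããããŒãããå Žåã¯æ¬¡ã®ããã«ããŸã (ç°¡åãªã¹ã±ããã§ãã`device_map="auto"` 㯠GPU ç°å¢ãæ³å®ããŠãããç°å¢ã«åãããŠèª¿æŽããŠãã ãã)ã

```python
>>> import torch
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained(
...     "codellama/CodeLlama-7b-hf",
...     torch_dtype=torch.bfloat16,  # ãã¬ãŒãã³ã°ã»åŸ®èª¿æŽåã (æšè«ã«ã¯ float16 ãæšå¥š)
...     device_map="auto",
... )
```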
ãããïŒ
- å
å¡«ã¿ã¹ã¯ã¯ããã«ãµããŒããããŠããŸããå
¥åãåãããå Žæã«ã¯ `tokenizer.fill_token` ã䜿çšããŠãã ããã
- ã¢ãã«å€æã¹ã¯ãªããã¯ã`Llama2` ãã¡ããªã®å Žåãšåãã§ãã
䜿çšäŸã¯æ¬¡ã®ãšããã§ãã
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
ã¹ã¯ãªãããå®è¡ããã«ã¯ã(æ倧ã®ããŒãžã§ã³ã§ãã£ãŠã) float16 粟床ã§ã¢ãã«å
šäœããã¹ãããã®ã«åå㪠CPU RAM ãå¿
èŠã§ããããšã«æ³šæããŠãã ãã (ãã§ãã¯ãã€ã³ããè€æ°ã«åå²ãããŠãããããã«ã¢ãã«ã®åéã¿ã®äžéšãå«ãŸããŠããããããã¹ãŠã RAM ã«ããŒãããå¿
èŠããããŸã)ã
å€æåŸãã¢ãã«ãšããŒã¯ãã€ã¶ãŒã¯æ¬¡ã®æ¹æ³ã§ããŒãã§ããŸãã
```python
>>> from transformers import LlamaForCausalLM, CodeLlamaTokenizer
>>> tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
>>> model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
>>> PROMPT = '''def remove_non_ascii(s: str) -> str:
""" <FILL_ME>
return result
'''
>>> input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
>>> generated_ids = model.generate(input_ids, max_new_tokens=128)
>>> filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens = True)[0]
>>> print(PROMPT.replace("<FILL_ME>", filling))
def remove_non_ascii(s: str) -> str:
""" Remove non-ASCII characters from a string.
Args:
s: The string to remove non-ASCII characters from.
Returns:
The string with non-ASCII characters removed.
"""
result = ""
for c in s:
if ord(c) < 128:
result += c
return result
```
å¡ãã€ã¶ãããéšåã ããå¿
èŠãªå Žå:
```python
>>> from transformers import pipeline
>>> import torch
>>> generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto")
>>> generator('def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return result', max_new_tokens = 128)
[{'generated_text': 'def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\n return resultRemove non-ASCII characters from a string. """\n result = ""\n for c in s:\n if ord(c) < 128:\n result += c'}]
```
å
éšã§ã¯ãããŒã¯ãã€ã¶ãŒã [`<FILL_ME>` ã«ãã£ãŠèªåçã«åå²](https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer.fill_token) ãã[ãªãªãžãã«ã®ãã¬ãŒãã³ã° ãã¿ãŒã³](https://github.com/facebookresearch/codellama/blob/cb51c14ec761370ba2e2bc351374a79265d0465e/llama/generation.py#L402) ã«åŸã£ãŠæžåŒèšå®ãããå
¥åæååãäœæããŸãããã¿ãŒã³ãèªåã§æºåãããããå
ç¢ã§ããããŒã¯ã³ã®æ¥çãªã©ããããã°ãéåžžã«é£ããèœãšãç©Žãåé¿ã§ããŸãããã®ã¢ãã«ãä»ã®ã¢ãã«ã«å¿
èŠãª CPU ããã³ GPU ã¡ã¢ãªã®éã確èªããã«ã¯ããã®å€ã決å®ããã®ã«åœ¹ç«ã€ [ãã®èšç®ããŒã«](https://huggingface.co/spaces/hf-accelerate/model-memory-usage) ãè©ŠããŠãã ããã
LLaMA ããŒã¯ãã€ã¶ãŒã¯ã[sentencepiece](https://github.com/google/sentencepiece) ã«åºã¥ã BPE ã¢ãã«ã§ããsentencepiece ã®çã® 1 ã€ã¯ãã·ãŒã±ã³ã¹ããã³ãŒããããšãã«ãæåã®ããŒã¯ã³ãåèªã®å
é (äŸ: ãBananaã) ã§ããå ŽåãããŒã¯ãã€ã¶ãŒãæååã®å
é ã«ãã¬ãã£ãã¯ã¹ ã¹ããŒã¹ãè¿œå ããªãããšã§ãã
<Tip>
ã³ãŒã Llama ã¯ã`Llama2` ã¢ãã«ãšåãã¢ãŒããã¯ãã£ãæã£ãŠããŸããAPI ãªãã¡ã¬ã³ã¹ã«ã€ããŠã¯ã[Llama2 ã®ããã¥ã¡ã³ã ããŒãž](llama2) ãåç
§ããŠãã ãããCode Llama ããŒã¯ãã€ã¶ãŒã®ãªãã¡ã¬ã³ã¹ã¯ä»¥äžã«ãããŸãã
</Tip>
## CodeLlamaTokenizer
[[autodoc]] CodeLlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## CodeLlamaTokenizerFast
[[autodoc]] CodeLlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
â ïž Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# DeiT
## Overview
DeiT ã¢ãã«ã¯ãHugo TouvronãMatthieu CordãMatthijs DouzeãFrancisco MassaãAlexandre SablayrollesãHervé Jégou ã«ãã£ãŠ [Training data-efficient image Transformers & distillation through attention](https://arxiv.org/abs/2012.12877) ã§ææ¡ãããŸããã

[Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) ã§çŽ¹ä»ããã [Vision Transformer (ViT)](vit) ã¯ãTransformer ãšã³ã³ãŒã (BERT ã®ãããª) ã䜿çšããããšã§ãæ¢åã®ç³ã¿èŸŒã¿ãã¥ãŒã©ã« ãããã¯ãŒã¯ãšåçããŸãã¯ãããäžåãããã©ãŒãã³ã¹ãçºæ®ã§ããããšã瀺ããŸããããã ãããã®è«æã§çŽ¹ä»ããã ViT ã¢ãã«ã«ã¯ãå€éšããŒã¿ã䜿çšããŠé«äŸ¡ãªã€ã³ãã©ã¹ãã©ã¯ãã£ã§æ°é±éã«ããããã¬ãŒãã³ã°ãè¡ãå¿
èŠããããŸãããDeiT (ããŒã¿å¹çã®é«ãç»åãã©ã³ã¹ãã©ãŒããŒ) ã¯ãç»ååé¡çšã«ããå¹ççã«ãã¬ãŒãã³ã°ããããã©ã³ã¹ãã©ãŒããŒã§ããããªãªãžãã«ã® ViT ã¢ãã«ãšæ¯èŒããŠãå¿
èŠãªããŒã¿ãšã³ã³ãã¥ãŒãã£ã³ã° ãªãœãŒã¹ãã¯ããã«å°ãªããŠæžã¿ãŸãã
è«æã®èŠçŽã¯æ¬¡ã®ãšããã§ãã
*æè¿ãçŽç²ã«æ³šæã«åºã¥ããã¥ãŒã©ã« ãããã¯ãŒã¯ããç»ååé¡ãªã©ã®ç»åç解ã¿ã¹ã¯ã«å¯ŸåŠã§ããããšã瀺ãããŸããããã ãããããã®ããžã¥ã¢ã« ãã©ã³ã¹ãã©ãŒããŒã¯ãé«äŸ¡ãªã€ã³ãã©ã¹ãã©ã¯ãã£ã䜿çšããŠæ°åæã®ç»åã§äºåãã¬ãŒãã³ã°ããããããã®æ¡çšãå¶éãããŠããŸãããã®ç 究ã§ã¯ãImageNet ã®ã¿ã§ãã¬ãŒãã³ã°ããããšã§ã競äºåã®ããç³ã¿èŸŒã¿ããªãŒã®ãã©ã³ã¹ãã©ãŒããŒãäœæããŸãããã¬ãŒãã³ã°ã¯ 1 å°ã®ã³ã³ãã¥ãŒã¿ãŒã§ 3 æ¥ä»¥å
ã«å®äºããŸããç§ãã¡ã®åºæºãšãªãããžã§ã³ ãã©ã³ã¹ãã©ãŒã㌠(86M ãã©ã¡ãŒã¿) ã¯ãå€éšããŒã¿ãªã㧠ImageNet äžã§ 83.1% (åäžã¯ãããè©äŸ¡) ã®ããã 1 ç²ŸåºŠãéæããŸããããã«éèŠãªã®ã¯ããã©ã³ã¹ãã©ãŒããŒã«ç¹æã®æåž«ãšçåŸã®æŠç¥ãå°å
¥ããããšã§ããããã¯ãçåŸã泚æãéããŠæåž«ããåŠã¶ããšãä¿èšŒããèžçããŒã¯ã³ã«äŸåããŸããç¹ã« convnet ãæåž«ãšããŠäœ¿çšããå Žåããã®ããŒã¯ã³ããŒã¹ã®èžçã®æçšæ§ã瀺ãããŸããããã«ãããImagenet (æ倧 85.2% ã®ç²ŸåºŠãåŸãããŸã) ãšä»ã®ã¿ã¹ã¯ã«è»¢éãããšãã®äž¡æ¹ã§ãconvnet ãšç«¶åããçµæãå ±åã§ããŸããã³ãŒããšã¢ãã«ãå
¬éããŸãã*
ãã®ã¢ãã«ã¯ã[nielsr](https://huggingface.co/nielsr) ã«ãã£ãŠæäŸãããŸããããã®ã¢ãã«ã® TensorFlow ããŒãžã§ã³ã¯ã[amyeroberts](https://huggingface.co/amyeroberts) ã«ãã£ãŠè¿œå ãããŸããã
## Usage tips
- ViT ãšæ¯èŒããŠãDeiT ã¢ãã«ã¯ããããèžçããŒã¯ã³ã䜿çšããŠæåž« (DeiT è«æã§ã¯ ResNet ã®ãããªã¢ãã«) ããå¹æçã«åŠç¿ããŸããèžçããŒã¯ã³ã¯ãã»ã«ãã¢ãã³ã·ã§ã³å±€ãä»ããŠã¯ã©ã¹ ([CLS]) ããŒã¯ã³ããã³ãããããŒã¯ã³ãšå¯Ÿè©±ããªãããããã¯ãããã²ãŒã·ã§ã³ãéããŠåŠç¿ãããŸãã
- èžçãããã¢ãã«ã埮調æŽããã«ã¯ 2 ã€ã®æ¹æ³ããããŸãã(1) ã¯ã©ã¹ ããŒã¯ã³ã®æçµçãªé衚瀺ç¶æ
ã®äžã«äºæž¬ããããé
眮ããã ãã§ãèžçã·ã°ãã«ã¯äœ¿çšããªãå€å
žçãªæ¹æ³ããŸã㯠(2) ã¯ã©ã¹ ããŒã¯ã³ã®äžãšèžçããŒã¯ã³ã®äžã®äž¡æ¹ã«äºæž¬ããããé
眮ããæ¹æ³ã§ãã(2) ã®å Žåã[CLS] äºæž¬ãããã¯ããããã®äºæž¬ãšæ£è§£ã©ãã«éã®éåžžã®ã¯ãã¹ãšã³ããããŒã䜿çšããŠãã¬ãŒãã³ã°ãããèžçäºæž¬ãããã¯ã硬èžç (èžçäºæž¬ããããšæåž«ãäºæž¬ããã©ãã«éã®ã¯ãã¹ãšã³ããããŒ) ã䜿çšããŠãã¬ãŒãã³ã°ãããŸããæšè«æã«ã¯ãäž¡ããããå¹³åããäºæž¬ãæçµçãªäºæž¬ãšããŸãã(2) ã¯ãèžçã«ãã埮調æŽããšãåŒã°ããŸããæåž«ããã§ã«äžæµã®ããŒã¿ã»ããã§åŸ®èª¿æŽãããŠããããã§ããã¢ãã«çã«ã¯ã(1) 㯠[`DeiTForImageClassification`] ã«ã(2) 㯠[`DeiTForImageClassificationWithTeacher`] ã«å¯Ÿå¿ããŸãã
- èè
ã㯠(2) ã«ã€ããŠãœããèžç (èžçäºæž¬ããããæåž«ã®ãœããããã¯ã¹åºåã«äžèŽããããã« KL ãã€ããŒãžã§ã³ã¹ã䜿çšããŠãã¬ãŒãã³ã°) ãè©Šã¿ãŸãããã硬èžçãæè¯ã®çµæããããããŸããã
- ãªãªãŒã¹ããããã¹ãŠã®ãã§ãã¯ãã€ã³ãã¯ãImageNet-1k ã®ã¿ã§äºåãã¬ãŒãã³ã°ããã³åŸ®èª¿æŽãããå€éšããŒã¿ã¯äœ¿çšãããŸããã§ãããããã¯ãJFT-300M ããŒã¿ã»ããã Imagenet-21k ãªã©ã®å€éšããŒã¿ãäºåãã¬ãŒãã³ã°ã«äœ¿çšããå
ã® ViT ã¢ãã«ãšã¯å¯Ÿç
§çã§ãã
- DeiT ã®äœè
ã¯ãããå¹ççã«ãã¬ãŒãã³ã°ããã ViT ã¢ãã«ããªãªãŒã¹ããŸãããããã¯ã[`ViTModel`] ãŸã㯠[`ViTForImageClassification`] ã«ãã®ãŸãŸãã©ã°ã€ã³ã§ããŸããã¯ããã«å€§èŠæš¡ãªããŒã¿ã»ããã§ã®ãã¬ãŒãã³ã°ãã·ãã¥ã¬ãŒãããããã«ãããŒã¿æ¡åŒµãæé©åãæ£ååãªã©ã®ãã¯ããã¯ã䜿çšãããŸãã (ãã ããäºåãã¬ãŒãã³ã°ã«ã¯ ImageNet-1k ã®ã¿ã䜿çš)ã4 ã€ã®ããªãšãŒã·ã§ã³ (3 ã€ã®ç°ãªããµã€ãº) ãå©çšå¯èœã§ãã*facebook/deit-tiny-patch16-224*ã*facebook/deit-small-patch16-224*ã*facebook/deit-base-patch16-224*ãããã³ *facebook/deit-base-patch16-384* ã§ããã¢ãã«çšã®ç»åãæºåããã«ã¯ [`DeiTImageProcessor`] ã䜿çšããå¿
èŠãããããšã«æ³šæããŠãã ããã
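以äžã¯ãèžçæžã¿ãã§ãã¯ãã€ã³ãã§ç»ååé¡ãå®è¡ããç°¡åãªã¹ã±ããã§ã (ãã§ãã¯ãã€ã³ã *facebook/deit-base-distilled-patch16-224* ã®äœ¿çšãæ³å®ããŠããŸã)ã

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher

>>> processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # ã¯ã©ã¹ãããã³èžçãããã®å¹³å
>>> print(model.config.id2label[logits.argmax(-1).item()])
```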
## Resources
DeiT ãå§ããã®ã«åœ¹ç«ã€å
¬åŒ Hugging Face ããã³ã³ãã¥ãã㣠(ð ã§ç€ºãããŠãã) ãªãœãŒã¹ã®ãªã¹ãã
<PipelineTag pipeline="image-classification"/>
- [`DeiTForImageClassification`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) ããã³ [ããŒãããã¯](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) ã§ãµããŒããããŠããŸãã
- åç
§: [ç»ååé¡ã¿ã¹ã¯ ã¬ã€ã](../tasks/image_classification)
ããã«å ããŠ:
- [`DeiTForMaskedImageModeling`] ã¯ããã® [ãµã³ãã« ã¹ã¯ãªãã](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining) ã§ãµããŒããããŠããŸãã
ããã«å«ãããªãœãŒã¹ã®éä¿¡ã«èå³ãããå Žåã¯ããæ°è»œã«ãã« ãªã¯ãšã¹ããéããŠãã ããã審æ»ãããŠããã ããŸãããªãœãŒã¹ã¯ãæ¢åã®ãªãœãŒã¹ãè€è£œããã®ã§ã¯ãªããäœãæ°ãããã®ã瀺ãããšãçæ³çã§ãã
## DeiTConfig
[[autodoc]] DeiTConfig
## DeiTFeatureExtractor
[[autodoc]] DeiTFeatureExtractor
- __call__
## DeiTImageProcessor
[[autodoc]] DeiTImageProcessor
- preprocess
<frameworkcontent>
<pt>
## DeiTModel
[[autodoc]] DeiTModel
- forward
## DeiTForMaskedImageModeling
[[autodoc]] DeiTForMaskedImageModeling
- forward
## DeiTForImageClassification
[[autodoc]] DeiTForImageClassification
- forward
## DeiTForImageClassificationWithTeacher
[[autodoc]] DeiTForImageClassificationWithTeacher
- forward
</pt>
<tf>
## TFDeiTModel
[[autodoc]] TFDeiTModel
- call
## TFDeiTForMaskedImageModeling
[[autodoc]] TFDeiTForMaskedImageModeling
- call
## TFDeiTForImageClassification
[[autodoc]] TFDeiTForImageClassification
- call
## TFDeiTForImageClassificationWithTeacher
[[autodoc]] TFDeiTForImageClassificationWithTeacher
- call
</tf>
</frameworkcontent> | 0 |