url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k ⌀) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/29161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29161/comments | https://api.github.com/repos/huggingface/transformers/issues/29161/events | https://github.com/huggingface/transformers/issues/29161 | 2,145,902,969 | I_kwDOCUB6oc5_5-F5 | 29,161 | To enter token in jupyter notebook issue | {
"login": "arda1906",
"id": 157398066,
"node_id": "U_kgDOCWG0Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/157398066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arda1906",
"html_url": "https://github.com/arda1906",
"followers_url": "https://api.github.com/users/arda1906/followers",
"following_url": "https://api.github.com/users/arda1906/following{/other_user}",
"gists_url": "https://api.github.com/users/arda1906/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arda1906/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arda1906/subscriptions",
"organizations_url": "https://api.github.com/users/arda1906/orgs",
"repos_url": "https://api.github.com/users/arda1906/repos",
"events_url": "https://api.github.com/users/arda1906/events{/privacy}",
"received_events_url": "https://api.github.com/users/arda1906/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @arda1906, thanks for raising an issue!\r\n\r\nWithout more information about the error i.e. what does it mean to \"not work\" and what is the expected behaviour? we won't be able to help you. \r\n\r\nFrom the snippet, it's not entirely clear how the code is being run, but there are two separate commands which should be entered on separate lines or cells\r\n\r\n```py\r\nfrom huggingface_hub import notebook_login\r\n\r\nnotebook_login()\r\n```",
"hi,I am giving details\r\n> I am trying this code to train the model\r\n\r\n>```python\r\n>trainer = Trainer(model=model,args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=emotion_encoded[\"train\"],\r\n eval_dataset=emotion_encoded[\"validation\"],\r\n tokenizer=tokenizer)\r\ntrainer.train()\r\n\r\n>and I am facing this error:\r\n>LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.\r\n>I have thought to apply the my token in the jupyter notebook like this:\r\n>```\r\n> ```python\r\n> from huggingface_hub import notebook_login\r\n> \r\n> notebook_login()\r\n>\r\n> ```\r\n>help me please:(\r\n",
"Hi @arda1906, are you running the notebook login cells before calling Trainer? Are you passing in a token to the interactive text box that appears when running notebook_login? ",
"> Hi @arda1906, are you running the notebook login cells before calling Trainer? Are you passing in a token to the interactive text box that appears when running notebook_login?\r\n\r\n![20240221_210517](https://github.com/huggingface/transformers/assets/157398066/57ee7c71-1614-4c44-8dc2-144f47cafacd)\r\n"
] | 1,708 | 1,708 | null | NONE | null | I run this [from huggingface_hub import notebook_login
notebook_login()] in a cell and enter my token, but it doesn't work :( | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29161/timeline | null | null | null |
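For context on the `LocalTokenNotFoundError` discussed above: when the `notebook_login()` widget does not work, a non-interactive fallback is to pass the token to `huggingface_hub.login` directly. A minimal sketch — the `hf_...` value is a placeholder, not a real token:

```python
# Non-interactive alternative to notebook_login(): pass the token explicitly.
from huggingface_hub import login

# Placeholder token; substitute one from https://huggingface.co/settings/tokens
login(token="hf_xxxxxxxxxxxxxxxxxxxx")
```

Once the token is stored this way, `Trainer` should find it the same way it would after a successful `notebook_login()`.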
https://api.github.com/repos/huggingface/transformers/issues/29160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29160/comments | https://api.github.com/repos/huggingface/transformers/issues/29160/events | https://github.com/huggingface/transformers/pull/29160 | 2,145,779,053 | PR_kwDOCUB6oc5neHY8 | 29,160 | [WIP] add Fusion In Decoder model | {
"login": "oh-gnues-iohc",
"id": 79557937,
"node_id": "MDQ6VXNlcjc5NTU3OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oh-gnues-iohc",
"html_url": "https://github.com/oh-gnues-iohc",
"followers_url": "https://api.github.com/users/oh-gnues-iohc/followers",
"following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}",
"gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions",
"organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs",
"repos_url": "https://api.github.com/users/oh-gnues-iohc/repos",
"events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}",
"received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
add FiD (Fusion-in-Decoder) models
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29160/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29160",
"html_url": "https://github.com/huggingface/transformers/pull/29160",
"diff_url": "https://github.com/huggingface/transformers/pull/29160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29160.patch",
"merged_at": null
} |
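For readers unfamiliar with the architecture this PR adds: Fusion-in-Decoder (Izacard & Grave) encodes each (question, passage) pair independently and concatenates the encoder states so the decoder can attend over all passages at once. Below is a minimal sketch of that fusion step on top of a stock T5 checkpoint — the checkpoint, prompt format, and the assumption that `generate` accepts precomputed `encoder_outputs` are illustrative choices, not code from the PR:

```python
# Sketch of the Fusion-in-Decoder idea: encode passages separately, then
# concatenate encoder states before decoding.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "who wrote Hamlet?"
passages = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The play was likely written between 1599 and 1601.",
]

# Encode each (question, passage) pair on its own.
states = []
for p in passages:
    enc = tokenizer(f"question: {question} context: {p}", return_tensors="pt")
    states.append(model.encoder(**enc).last_hidden_state)

# Fuse along the sequence axis; the decoder now cross-attends to every passage.
fused = BaseModelOutput(last_hidden_state=torch.cat(states, dim=1))
out = model.generate(encoder_outputs=fused, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```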
https://api.github.com/repos/huggingface/transformers/issues/29159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29159/comments | https://api.github.com/repos/huggingface/transformers/issues/29159/events | https://github.com/huggingface/transformers/issues/29159 | 2,145,650,790 | I_kwDOCUB6oc5_5Ahm | 29,159 | [tokenizer] Inconsistent behavior in slow tokenizer and fast tokenizer | {
"login": "Ki-Seki",
"id": 60967965,
"node_id": "MDQ6VXNlcjYwOTY3OTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/60967965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ki-Seki",
"html_url": "https://github.com/Ki-Seki",
"followers_url": "https://api.github.com/users/Ki-Seki/followers",
"following_url": "https://api.github.com/users/Ki-Seki/following{/other_user}",
"gists_url": "https://api.github.com/users/Ki-Seki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ki-Seki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ki-Seki/subscriptions",
"organizations_url": "https://api.github.com/users/Ki-Seki/orgs",
"repos_url": "https://api.github.com/users/Ki-Seki/repos",
"events_url": "https://api.github.com/users/Ki-Seki/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ki-Seki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Hey! Thanks for opening an issue. \r\nFew things first. You are using a custom / local checkpoint with trust remote code. \r\n\r\nFast is not erroring out when you feed OOV, while slow is and it is indeed inconsistent. Would you like to open a PR for a fix? 🤗 ",
"Yes, I'll try that. Thank you for your reply!"
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.10
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no need
- Using distributed or parallel set-up in script?: no need
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
def answer_or_exception(tokenizer, id):
print(f'<<<<<<{tokenizer.__class__}>>>>>>')
try:
print(f'"{tokenizer.decode([id])}"')
except Exception as e:
print(e)
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/phi-2", trust_remote_code=True, use_fast=False)
# vocab size: 50294
answer_or_exception(tokenizer, 50294) # correct
answer_or_exception(tokenizer, 50295) # wrong
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/phi-2", trust_remote_code=True, use_fast=True)
# vocab size: 50294
answer_or_exception(tokenizer, 50294) # correct
answer_or_exception(tokenizer, 50295) # correct
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/Llama-2-7b-chat-hf", trust_remote_code=True, use_fast=False)
# vocab size: 31999
answer_or_exception(tokenizer, 31999) # correct
answer_or_exception(tokenizer, 32000) # wrong
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/Llama-2-7b-chat-hf", trust_remote_code=True, use_fast=True)
# vocab size: 31999
answer_or_exception(tokenizer, 31999) # correct
answer_or_exception(tokenizer, 32000) # correct
```
Output:
```text
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<<<<<<<class 'transformers.models.codegen.tokenization_codegen.CodeGenTokenizer'>>>>>>>
" "
<<<<<<<class 'transformers.models.codegen.tokenization_codegen.CodeGenTokenizer'>>>>>>>
sequence item 0: expected str instance, NoneType found
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<<<<<<<class 'transformers.models.codegen.tokenization_codegen_fast.CodeGenTokenizerFast'>>>>>>>
" "
<<<<<<<class 'transformers.models.codegen.tokenization_codegen_fast.CodeGenTokenizerFast'>>>>>>>
""
<<<<<<<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>>>>>>>
"给"
<<<<<<<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>>>>>>>
piece id is out of range.
<<<<<<<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>>>>>>>
"给"
<<<<<<<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>>>>>>>
""
```
### Expected behavior
Consistent `decode` behavior in slow tokenizer and fast tokenizer when id exceeds vocab size. For example, instead of raise exceptions, the slow tokenizer output empty strings like the fast tokenizer does. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29159/timeline | null | null | null |
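Until the inconsistency above is resolved, a small wrapper can make the slow tokenizer mirror the fast one (out-of-range ids decode to nothing instead of raising). `safe_decode` is an illustrative helper, not a transformers API:

```python
# Workaround sketch: drop out-of-vocabulary ids before decoding, so slow
# tokenizers behave like fast ones instead of raising.
def safe_decode(tokenizer, ids, **kwargs):
    vocab_size = len(tokenizer)  # includes added special tokens
    return tokenizer.decode([i for i in ids if 0 <= i < vocab_size], **kwargs)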
https://api.github.com/repos/huggingface/transformers/issues/29158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29158/comments | https://api.github.com/repos/huggingface/transformers/issues/29158/events | https://github.com/huggingface/transformers/pull/29158 | 2,145,552,337 | PR_kwDOCUB6oc5ndVY6 | 29,158 | [PyTorch/XLA] Fix extra TPU compilations introduced by recent changes | {
"login": "alanwaketan",
"id": 8573935,
"node_id": "MDQ6VXNlcjg1NzM5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8573935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanwaketan",
"html_url": "https://github.com/alanwaketan",
"followers_url": "https://api.github.com/users/alanwaketan/followers",
"following_url": "https://api.github.com/users/alanwaketan/following{/other_user}",
"gists_url": "https://api.github.com/users/alanwaketan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanwaketan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanwaketan/subscriptions",
"organizations_url": "https://api.github.com/users/alanwaketan/orgs",
"repos_url": "https://api.github.com/users/alanwaketan/repos",
"events_url": "https://api.github.com/users/alanwaketan/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanwaketan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR tries to fix some extra TPU compilations caused by recent HF changes.
1. PyTorch/XLA doesn't support SDPA yet, so we need to set the default attention implementation to eager.
2. `tensor.item()` triggers a TPU graph synchronization. We should avoid using it in the training loop.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29158",
"html_url": "https://github.com/huggingface/transformers/pull/29158",
"diff_url": "https://github.com/huggingface/transformers/pull/29158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29158.patch",
"merged_at": null
} |
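The two fixes described above can be illustrated in isolation — `"gpt2"` is just a small stand-in checkpoint, and the `attn_implementation` kwarg assumes a transformers version (>= 4.36) that accepts it:

```python
import torch
from transformers import AutoModelForCausalLM

# 1. PyTorch/XLA has no SDPA kernel yet, so request eager attention explicitly.
model = AutoModelForCausalLM.from_pretrained("gpt2", attn_implementation="eager")

# 2. Avoid tensor.item() inside the loop: on XLA it forces a graph sync.
running_loss = torch.zeros(())
for step in range(10):
    loss = torch.rand(())          # stand-in for the real training loss
    running_loss += loss.detach()  # accumulates on device, no sync
print(running_loss.item())         # materialize once, after the loop
```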
https://api.github.com/repos/huggingface/transformers/issues/29157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29157/comments | https://api.github.com/repos/huggingface/transformers/issues/29157/events | https://github.com/huggingface/transformers/issues/29157 | 2,145,549,903 | I_kwDOCUB6oc5_4n5P | 29,157 | Error while saving with EarlyStoppingCallback | {
"login": "dhruvmullick",
"id": 7004024,
"node_id": "MDQ6VXNlcjcwMDQwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7004024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvmullick",
"html_url": "https://github.com/dhruvmullick",
"followers_url": "https://api.github.com/users/dhruvmullick/followers",
"following_url": "https://api.github.com/users/dhruvmullick/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvmullick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvmullick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvmullick/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvmullick/orgs",
"repos_url": "https://api.github.com/users/dhruvmullick/repos",
"events_url": "https://api.github.com/users/dhruvmullick/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvmullick/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.28.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DeepSpeed
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
* SFTTrainer is used for training the model
* transformers.EarlyStoppingCallback is added to the trainer prior to .train()
This error has appeared in the last few days, likely due to some recent change.
The error is fixed by either rolling back to transformers version 4.37.2 or removing the early stopping callback.
Here's the stack trace:
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 331, in train
> output = super().train(*args, **kwargs)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 1624, in train
> return inner_training_loop(
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2029, in _inner_training_loop
> self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2423, in _maybe_log_save_evaluate
> self._save_checkpoint(model, trial, metrics=metrics)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2525, in _save_checkpoint
> self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer_callback.py", line 113, in save_to_json
> json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n"
> File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps
> **kw).encode(obj)
> File "/usr/lib/python3.10/json/encoder.py", line 201, in encode
> chunks = list(chunks)
> File "/usr/lib/python3.10/json/encoder.py", line 431, in _iterencode
> yield from _iterencode_dict(o, _current_indent_level)
> File "/usr/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 325, in _iterencode_list
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 438, in _iterencode
> o = _default(o)
> File "/usr/lib/python3.10/json/encoder.py", line 179, in default
> raise TypeError(f'Object of type {o.__class__.__name__} '
> TypeError: Object of type Tensor is not JSON serializable
>
### Expected behavior
No error with 4.38.0.dev0 transformers version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29157/timeline | null | null | null |
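The traceback above comes down to a `torch.Tensor` (likely the new `grad_norm` entry visible in the `_maybe_log_save_evaluate` signature above) reaching `json.dumps` inside `TrainerState.save_to_json`. A minimal reproduction plus the obvious cast, independent of `Trainer`:

```python
import json
import torch

# Shape of the offending log entry (illustrative values).
state = {"loss": 0.5, "grad_norm": torch.tensor(1.25)}

try:
    json.dumps(state)
except TypeError as e:
    print(e)  # Object of type Tensor is not JSON serializable

# Casting tensors to plain Python numbers restores serializability.
clean = {k: v.item() if isinstance(v, torch.Tensor) else v for k, v in state.items()}
print(json.dumps(clean))
```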
https://api.github.com/repos/huggingface/transformers/issues/29156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29156/comments | https://api.github.com/repos/huggingface/transformers/issues/29156/events | https://github.com/huggingface/transformers/pull/29156 | 2,145,522,407 | PR_kwDOCUB6oc5ndO3J | 29,156 | Making extensible | {
"login": "ddevaul",
"id": 71190628,
"node_id": "MDQ6VXNlcjcxMTkwNjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/71190628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddevaul",
"html_url": "https://github.com/ddevaul",
"followers_url": "https://api.github.com/users/ddevaul/followers",
"following_url": "https://api.github.com/users/ddevaul/following{/other_user}",
"gists_url": "https://api.github.com/users/ddevaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddevaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddevaul/subscriptions",
"organizations_url": "https://api.github.com/users/ddevaul/orgs",
"repos_url": "https://api.github.com/users/ddevaul/repos",
"events_url": "https://api.github.com/users/ddevaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddevaul/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @ddevaul, what is the purpose of this PR? \r\n"
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29156/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29156",
"html_url": "https://github.com/huggingface/transformers/pull/29156",
"diff_url": "https://github.com/huggingface/transformers/pull/29156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29156.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29155/comments | https://api.github.com/repos/huggingface/transformers/issues/29155/events | https://github.com/huggingface/transformers/issues/29155 | 2,145,382,760 | I_kwDOCUB6oc5_3_Fo | 29,155 | PyTest import error | {
"login": "loadams",
"id": 114770087,
"node_id": "U_kgDOBtdApw",
"avatar_url": "https://avatars.githubusercontent.com/u/114770087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loadams",
"html_url": "https://github.com/loadams",
"followers_url": "https://api.github.com/users/loadams/followers",
"following_url": "https://api.github.com/users/loadams/following{/other_user}",
"gists_url": "https://api.github.com/users/loadams/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loadams/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loadams/subscriptions",
"organizations_url": "https://api.github.com/users/loadams/orgs",
"repos_url": "https://api.github.com/users/loadams/repos",
"events_url": "https://api.github.com/users/loadams/events{/privacy}",
"received_events_url": "https://api.github.com/users/loadams/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### System Info
The current head of transformers shows this issue: when importing functions from pytest, the `import_path` function is not found. Sample error from DeepSpeed's unit tests [here](https://github.com/microsoft/DeepSpeed/actions/runs/7977730884/job/21781270161?pr=5164#step:7:391).
```
______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________
ImportError while importing test module '/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/accelerate/tests/deepspeed/test_deepspeed.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../unit-test-venv/lib/python3.8/site-packages/_pytest/python.py:538: in importtestmodule
mod = import_path(path, mode=importmode, root=config.rootpath)
../unit-test-venv/lib/python3.8/site-packages/_pytest/pathlib.py:566: in import_path
importlib.import_module(module_name)
/opt/conda/envs/ptca/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
../unit-test-venv/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/deepspeed/test_deepspeed.py:26: in <module>
from transformers.testing_utils import mockenv_context
../unit-test-venv/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
from _pytest.doctest import (
E ImportError: cannot import name 'import_path' from '_pytest.doctest' (/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.8/site-packages/_pytest/doctest.py)
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 4.71s ===============================
```
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
With pytest 8.0.1:
1. `from _pytest.doctest import import_path`
2. observe error.
### Expected behavior
No errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29155/timeline | null | null | null |
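A version-tolerant import that works on both sides of the pytest 8.0.1 change — a sketch of the kind of shim that fixes the traceback above (`import_path` has lived in `_pytest.pathlib` since at least pytest 7.2):

```python
# import_path was importable from _pytest.doctest until pytest 8.0.1;
# its canonical location is _pytest.pathlib.
try:
    from _pytest.pathlib import import_path  # pytest >= 7.2, including 8.x
except ImportError:
    from _pytest.doctest import import_path  # fallback for older pytest
```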
https://api.github.com/repos/huggingface/transformers/issues/29154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29154/comments | https://api.github.com/repos/huggingface/transformers/issues/29154/events | https://github.com/huggingface/transformers/pull/29154 | 2,145,294,779 | PR_kwDOCUB6oc5nccpR | 29,154 | Update pytest `import_path` location | {
"login": "loadams",
"id": 114770087,
"node_id": "U_kgDOBtdApw",
"avatar_url": "https://avatars.githubusercontent.com/u/114770087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loadams",
"html_url": "https://github.com/loadams",
"followers_url": "https://api.github.com/users/loadams/followers",
"following_url": "https://api.github.com/users/loadams/following{/other_user}",
"gists_url": "https://api.github.com/users/loadams/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loadams/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loadams/subscriptions",
"organizations_url": "https://api.github.com/users/loadams/orgs",
"repos_url": "https://api.github.com/users/loadams/repos",
"events_url": "https://api.github.com/users/loadams/events{/privacy}",
"received_events_url": "https://api.github.com/users/loadams/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
Updates the import location of pytest's `import_path` from `_pytest.doctest` to `_pytest.pathlib` for PyTest 8.0.1+, since it is no longer exposed from `_pytest.doctest`. It has been available in `_pytest.pathlib` since at least 7.2.0, so we do not need to modify the supported pytest range in `setup.py`.
Tested [here in DeepSpeed](https://github.com/microsoft/DeepSpeed/pull/5164) and tests appear to be passing.
Fixes: #29155
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29154",
"html_url": "https://github.com/huggingface/transformers/pull/29154",
"diff_url": "https://github.com/huggingface/transformers/pull/29154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29154.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29153/comments | https://api.github.com/repos/huggingface/transformers/issues/29153/events | https://github.com/huggingface/transformers/issues/29153 | 2,145,101,851 | I_kwDOCUB6oc5_26gb | 29,153 | Plans to add DoRA? | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada @pacman100 ",
"Hi @RonanKMcGovern ! \r\nThanks for the feature request! There is already an ongoing work from @BenjaminBossan to add DoRA in PEFT: https://github.com/huggingface/peft/pull/1474",
"Closing as there is a PR underway.",
"OK thank you @RonanKMcGovern !"
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Improves on LoRA by allowing magnitude fine-tuning.
### Motivation
Improved perplexity.
### Your contribution
Sebastian Raschka has published demo code: https://github.com/rasbt/dora-from-scratch | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29153/timeline | null | null | null |
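The PEFT pull request linked above exposes DoRA as a flag on `LoraConfig`. A minimal sketch assuming a peft release that includes that PR (>= 0.9.0); the base model and target modules are placeholders:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["c_attn"],  # GPT-2's fused attention projection; adjust per model
    use_dora=True,              # decompose the update into magnitude and direction
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```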
https://api.github.com/repos/huggingface/transformers/issues/29152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29152/comments | https://api.github.com/repos/huggingface/transformers/issues/29152/events | https://github.com/huggingface/transformers/pull/29152 | 2,145,071,699 | PR_kwDOCUB6oc5nbr5K | 29,152 | Alternative approach | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @Rocketknight1 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29152). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Alternative way to use stop words for generated sequences. Note - it doesn't
<details>
<summary>Script</summary>
```py
import time
import numpy as np
from transformers.generation.stopping_criteria import StopStringCriteria, StopStringCriteria2
from transformers import AutoTokenizer
model_id = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
stopping_criteria = StopStringCriteria(stop_strings=["giraffe", "polo"], tokenizer=tokenizer)
long_sentence = "This is a long sentence which should eventually stop because I have the word giraffe. This is a generated sentence"
input_ids = tokenizer(long_sentence, return_tensors="pt").input_ids
# Let's iterate over input_ids, increasing the length of the input sequence at each iteration and see when the
# criterion is met
print("Current implementation")
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria(input_ids=input_ids[:, :i], scores=None)
print(f"Current length: {i}, stops: {is_done}, input sequence: {tokenizer.batch_decode(input_seq)}")
N_RUNS = 100
times = []
for _ in range(N_RUNS):
start = time.time()
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria(input_ids=input_ids[:, :i], scores=None)
end = time.time()
times.append(end - start)
print(f"Average time taken - current: {np.mean(times)}, std: {np.std(times)}")
print("\nAlternative implementation")
stopping_criteria_2 = StopStringCriteria2(stop_strings=["giraffe", "polo"], tokenizer=tokenizer)
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria_2(input_ids=input_ids[:, :i], scores=None)
print(f"Current length: {i}, stops: {is_done}, input sequence: {tokenizer.batch_decode(input_seq)}")
times = []
for _ in range(N_RUNS):
start = time.time()
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria_2(input_ids=input_ids[:, :i], scores=None)
end = time.time()
times.append(end - start)
print(f"Average time taken - new: {np.mean(times)}, std: {np.std(times)}")
```
</details>
Not sure if the testing assumption is correct, i.e. how the input ids are passed in. When testing this alternative against the current implementation, the original `StopStringCriteria` does not stop when "giraffe" is in the sentence.
This alternative is also faster on this small test.
Note: the alternative will stop when any of the generated strings has a stop word (which AFAICT is the same for the current `StopStringCriteria` too)
<details>
<summary>Output</summary>
```
Current implementation
Current length: 1, stops: False, input sequence: ['[CLS]']
Current length: 2, stops: False, input sequence: ['[CLS] this']
Current length: 3, stops: False, input sequence: ['[CLS] this is']
Current length: 4, stops: False, input sequence: ['[CLS] this is a']
Current length: 5, stops: False, input sequence: ['[CLS] this is a long']
Current length: 6, stops: False, input sequence: ['[CLS] this is a long sentence']
Current length: 7, stops: False, input sequence: ['[CLS] this is a long sentence which']
Current length: 8, stops: False, input sequence: ['[CLS] this is a long sentence which should']
Current length: 9, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually']
Current length: 10, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop']
Current length: 11, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because']
Current length: 12, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i']
Current length: 13, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have']
Current length: 14, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the']
Current length: 15, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word']
Current length: 16, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word gi']
Current length: 17, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraf']
Current length: 18, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe']
Current length: 19, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe.']
Current length: 20, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this']
Current length: 21, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is']
Current length: 22, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a']
Current length: 23, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated']
Current length: 24, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence']
Current length: 25, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence [SEP]']
Average time taken - current: 0.007625923156738281, std: 0.00019846132464233505
Alternative implementation
Current length: 1, stops: False, input sequence: ['[CLS]']
Current length: 2, stops: False, input sequence: ['[CLS] this']
Current length: 3, stops: False, input sequence: ['[CLS] this is']
Current length: 4, stops: False, input sequence: ['[CLS] this is a']
Current length: 5, stops: False, input sequence: ['[CLS] this is a long']
Current length: 6, stops: False, input sequence: ['[CLS] this is a long sentence']
Current length: 7, stops: False, input sequence: ['[CLS] this is a long sentence which']
Current length: 8, stops: False, input sequence: ['[CLS] this is a long sentence which should']
Current length: 9, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually']
Current length: 10, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop']
Current length: 11, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because']
Current length: 12, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i']
Current length: 13, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have']
Current length: 14, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the']
Current length: 15, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word']
Current length: 16, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word gi']
Current length: 17, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraf']
Current length: 18, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe']
Current length: 19, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe.']
Current length: 20, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this']
Current length: 21, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is']
Current length: 22, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a']
Current length: 23, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated']
Current length: 24, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence']
Current length: 25, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence [SEP]']
Average time taken - new: 0.0011045789718627929, std: 2.974982062175288e-05
```
</details>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29152",
"html_url": "https://github.com/huggingface/transformers/pull/29152",
"diff_url": "https://github.com/huggingface/transformers/pull/29152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29152.patch",
"merged_at": null
} |
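For reference, the decode-and-check idea benchmarked above can be expressed as a custom `StoppingCriteria` in a few lines. This is a sketch of the approach, not the implementation in the PR:

```python
import torch
from transformers import StoppingCriteria

class SubstringStop(StoppingCriteria):
    """Stop generation once any decoded sequence contains a stop string."""

    def __init__(self, tokenizer, stop_strings):
        self.tokenizer = tokenizer
        self.stop_strings = stop_strings

    def __call__(self, input_ids: torch.LongTensor, scores, **kwargs) -> bool:
        # Decode the whole batch and scan for any of the stop strings.
        texts = self.tokenizer.batch_decode(input_ids, skip_special_tokens=True)
        return any(s in t for t in texts for s in self.stop_strings)
```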
https://api.github.com/repos/huggingface/transformers/issues/29151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29151/comments | https://api.github.com/repos/huggingface/transformers/issues/29151/events | https://github.com/huggingface/transformers/issues/29151 | 2,145,069,207 | I_kwDOCUB6oc5_2yiX | 29,151 | Static cache + torch.compile: support prefill static sequence length | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @gante ",
"@fxmarty this is the same problem as we have in TF and Flax. There, we nudged users to use the `pad_to_multiple_of` argument in the tokenizer, which I believe solves the problem 🤗 \r\n\r\nHow do you suggest us to let users know about this feature, other than docs?"
] | 1,708 | 1,708 | null | COLLABORATOR | null | ### Feature request
When using torch.compile, the prefill is recompiled for every new sequence length, which is slow. It may be nice to be able to compile only for a fixed set of sequence lengths (`1, 2, 4, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc.`), padding inputs up to the nearest bucket on the fly.
### Motivation
torch.compile compilation is prohibitively slow even with https://github.com/huggingface/transformers/pull/29114
If people want to use transformers + static cache + torch.compile, it should be FAST to run `generate` on new sequence lengths.
### Your contribution
None for now | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29151/timeline | null | null | null |
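The `pad_to_multiple_of` nudge from the last comment maps directly to a tokenizer kwarg; bucketing prompt lengths this way means torch.compile sees one prefill shape per bucket instead of one per prompt. A minimal sketch with a placeholder checkpoint:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token

# Both prompts land in the same 16-token bucket, so a graph compiled for
# length 16 is reused instead of recompiling for every raw length.
batch = tokenizer(
    ["hello there", "a somewhat longer prompt that needs padding"],
    padding=True,
    pad_to_multiple_of=16,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # e.g. torch.Size([2, 16])
```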
https://api.github.com/repos/huggingface/transformers/issues/29150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29150/comments | https://api.github.com/repos/huggingface/transformers/issues/29150/events | https://github.com/huggingface/transformers/issues/29150 | 2,144,941,834 | I_kwDOCUB6oc5_2TcK | 29,150 | Difficulty in adding custom model | {
"login": "El-chapo-007",
"id": 125077963,
"node_id": "U_kgDOB3SJyw",
"avatar_url": "https://avatars.githubusercontent.com/u/125077963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/El-chapo-007",
"html_url": "https://github.com/El-chapo-007",
"followers_url": "https://api.github.com/users/El-chapo-007/followers",
"following_url": "https://api.github.com/users/El-chapo-007/following{/other_user}",
"gists_url": "https://api.github.com/users/El-chapo-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/El-chapo-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/El-chapo-007/subscriptions",
"organizations_url": "https://api.github.com/users/El-chapo-007/orgs",
"repos_url": "https://api.github.com/users/El-chapo-007/repos",
"events_url": "https://api.github.com/users/El-chapo-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/El-chapo-007/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @El-chapo-007, thanks for opening this issue! \r\n\r\nGlad to hear that your journey has been mostly successful 🤗 \r\n\r\nHave you seen our documentation page about adding custom models? This should contain all the info and example code needed to get started: https://huggingface.co/docs/transformers/custom_models\r\n\r\nLet us know if anything does work! ",
"could you please add a Jupiter notebook template, as it would be alot more helpful as most of the other parts the hugging face team has put as a tutorial to better familirize with enviornment .....\r\nAnd also my custom architecture projects feed forward layer to the multiple of d_model,\r\nas other models project it back to same dimension it just up_project and i have tested several other models but this model yeilds order of magnitude more performance with respect to number of parameters,hence i can deploy it on edge devices ..\r\n\r\nBut the issue as far as i know i don't have to use lm head liner layer but hugging face library automatically put we call a certain class for example AutoModelForCausalLM....\r\n\r\nSince my model dimensions are same but in final liner layer how can I project it to multiple of d_model to final linear layer ...\r\n",
"also there is a bit confusion on https://huggingface.co/docs/transformers/custom_models \r\nvs https://huggingface.co/docs/transformers/add_new_model",
"Hi @El-chapo-007, \r\n\r\nWe can definitely think about adding a jupyter notebook. In the meantime, you should be able to run the code snippets in the documentation in cells in your own notebook.\r\n\r\nI'm not sure I understand your question about modifying the models. However, this is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nRegarding the documentation pages: \r\n* https://huggingface.co/docs/transformers/add_new_model outlines how to add a model into the transformers repo\r\n* https://huggingface.co/docs/transformers/custom_models outlines how to add a model on the hub\r\n",
"thanks alot"
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Hi
Hope all the team members of hugging face are well
I am a student currently working on NLP projects. Most of my journey has been successful thanks to the well-documented information for beginners, especially the example notebooks. The part that is confusing and difficult is creating a custom model from scratch and uploading it; I and several other users have struggled with this. Could you please make a notebook with step-by-step guidance, so that I and other researchers can focus on our projects rather than on these lengthy procedures?
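For reference, here is a minimal sketch of the registration flow from the custom-models docs; `MyConfig`, `MyModel`, and the `"my-new-model"` type name are placeholder names, not part of any real model:
```python
import torch.nn as nn
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class MyConfig(PretrainedConfig):
    model_type = "my-new-model"  # placeholder model type

    def __init__(self, hidden_size=64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class MyModel(PreTrainedModel):
    config_class = MyConfig

    def __init__(self, config):
        super().__init__(config)
        self.linear = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, inputs):
        return self.linear(inputs)

# Register the custom classes so AutoConfig/AutoModel can resolve them.
AutoConfig.register("my-new-model", MyConfig)
AutoModel.register(MyConfig, MyModel)
```
Once registered, the usual `save_pretrained()` / `push_to_hub()` calls should work as they do for built-in models.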
### Motivation
Difficulty in porting a custom model
### Your contribution
Student | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29150/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29149/comments | https://api.github.com/repos/huggingface/transformers/issues/29149/events | https://github.com/huggingface/transformers/issues/29149 | 2,144,914,235 | I_kwDOCUB6oc5_2Ms7 | 29,149 | Generate: support passing position_ids | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] | [
"@zucchini-nlp FYI. We shouldn't fix this now, as it requires significant manual labor to update all models. After the static cache sprint we should have a look at this :)"
] | 1,708 | 1,708 | null | MEMBER | null | Thank you @tengomucho for uncovering this bug.
### The problem
In a nutshell, passing the correct `position_ids` to `generate` should result in exactly the same results as not passing them. In other words, the following test should pass on all models, if added to `GenerationTesterMixin`. We can see that it is failing in general.
```py
def test_passing_position_ids(self):
# Check that passing position ids to generate yields the same results as not passing them, if the position ids
# are correctly built. If the test fails, it means one of two things:
# 1 - the manual position ids are not being piped correctly; OR
# 2 - the automated position ids are not being correctly built.
for model_class in self.all_generative_model_classes:
config, input_ids, attention_mask, _ = self._get_input_ids_and_config(batch_size=1)
if config.is_encoder_decoder:
self.skipTest("This model does not support position_ids")
# To truly test this property, let's create a batch where the second row corresponds to the test input with
# left padding of 1.
pad_token = torch.tensor([[config.pad_token_id or 0]], device=input_ids.device, dtype=input_ids.dtype)
input_ids = torch.cat((input_ids, torch.cat((pad_token, input_ids[:, 1:]), dim=1)), dim=0)
pad_mask = torch.zeros((1, 1), dtype=attention_mask.dtype, device=attention_mask.device)
attention_mask = torch.cat((attention_mask, torch.cat((pad_mask, attention_mask[:, 1:]), dim=1)), dim=0)
position_ids = torch.clamp(torch.cumsum(attention_mask, dim=-1) - 1, min=0)
config.use_cache = True
config.is_decoder = True
model = model_class(config).to(torch_device).eval()
try:
output_position_ids = model.generate(
input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
max_new_tokens=10
)
except ValueError as exc:
if "The following `model_kwargs` are not used by the model: ['position_ids']" in str(exc):
self.skipTest("This model does not support position_ids")
else:
raise
output_no_position_ids = model.generate(
input_ids,
attention_mask=attention_mask,
max_new_tokens=10
)
self.assertListEqual(output_no_position_ids.tolist(), output_position_ids.tolist())
```
### The fix
There are two root causes for this:
1. `position_ids` is rejected in some models when it is passed (e.g. see [here](https://github.com/huggingface/transformers/blob/3c00b885b92fbcd0e7451e56ccf424a2d5a19bbb/src/transformers/models/gpt2/modeling_gpt2.py#L1022)). These models often assume no padding when `position_ids` is rejected.
2. `position_ids` is never updated, so it is only correct when created from scratch (=not passed).
As such, a fix to this problem should consist of updating `position_ids` in `generate`, with `prepare_inputs_for_generation` only creating new `position_ids` when they don't exist.
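A minimal sketch of the update step, assuming left-padded inputs; `update_position_ids` is an illustrative helper, not an existing `generate` internal:
```python
import torch

def update_position_ids(position_ids: torch.Tensor, num_new_tokens: int = 1) -> torch.Tensor:
    """Append the positions for the tokens generated in the current step,
    continuing from each row's last position (padding-aware by construction,
    since the initial position_ids were built from the attention mask)."""
    next_positions = position_ids[:, -1:] + torch.arange(
        1, num_new_tokens + 1, device=position_ids.device
    )
    return torch.cat([position_ids, next_positions], dim=-1)

# e.g. after each decoding step:
# position_ids = update_position_ids(position_ids)  # grows by one column
```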
The test pasted above should be part of our tests after fixing the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29149/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29148/comments | https://api.github.com/repos/huggingface/transformers/issues/29148/events | https://github.com/huggingface/transformers/pull/29148 | 2,144,911,415 | PR_kwDOCUB6oc5nbILV | 29,148 | Token level timestamps for long-form generation in Whisper | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29148). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
Continuation of PR #28984. Adds token-level timestamps for long-form generation. The previous PR added timestamps in a quite different way, specifically by calling `extract_timestamps` for each segment and each batch separately. I believe it can be done in one batch and then divided into segments the same way the sequences are divided.
The final timestamps are already aligned with the total length, so there is no need to add a start_time for each segment. I am not sure that is what we want to have, though, so I can remove this "total duration alignment" if needed.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
@patrickvonplaten
@gante ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29148",
"html_url": "https://github.com/huggingface/transformers/pull/29148",
"diff_url": "https://github.com/huggingface/transformers/pull/29148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29148.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29147/comments | https://api.github.com/repos/huggingface/transformers/issues/29147/events | https://github.com/huggingface/transformers/pull/29147 | 2,144,785,389 | PR_kwDOCUB6oc5nasd- | 29,147 | Fix drop path being ignored in DINOv2 | {
"login": "fepegar",
"id": 12688084,
"node_id": "MDQ6VXNlcjEyNjg4MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12688084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fepegar",
"html_url": "https://github.com/fepegar",
"followers_url": "https://api.github.com/users/fepegar/followers",
"following_url": "https://api.github.com/users/fepegar/following{/other_user}",
"gists_url": "https://api.github.com/users/fepegar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fepegar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fepegar/subscriptions",
"organizations_url": "https://api.github.com/users/fepegar/orgs",
"repos_url": "https://api.github.com/users/fepegar/repos",
"events_url": "https://api.github.com/users/fepegar/events{/privacy}",
"received_events_url": "https://api.github.com/users/fepegar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reviewing, @amyeroberts!"
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
A `drop_path_rate` parameter exists in the DINOv2 model, which is propagated all the way to the DINOv2 layers, but never used. This PR addresses this by using the drop path layers in the `forward` pass of the DINOv2 layers, and removing an (I think) unnecessary extra instantiation.
Fixes #29140.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @NielsRogge @molbap
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29147",
"html_url": "https://github.com/huggingface/transformers/pull/29147",
"diff_url": "https://github.com/huggingface/transformers/pull/29147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29147.patch",
"merged_at": 1708450319000
} |
https://api.github.com/repos/huggingface/transformers/issues/29146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29146/comments | https://api.github.com/repos/huggingface/transformers/issues/29146/events | https://github.com/huggingface/transformers/pull/29146 | 2,144,586,510 | PR_kwDOCUB6oc5naAbp | 29,146 | Generate: missing generation config eos token setting in encoder-decoder tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
These tests were failing with low likelihood, all for the same reason as fixed in [this recent PR](https://github.com/huggingface/transformers/pull/28923): there should be no EOS token to enable endless generation, but the generation config still had the default value.
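A minimal illustration of the failure mode, using a tiny testing checkpoint as an assumed stand-in for the models in the affected tests:
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/tiny-random-t5")

# Disabling EOS only on the model config is not enough: generate() reads the
# generation config, which still carried the default EOS and could stop early.
model.config.eos_token_id = None
model.generation_config.eos_token_id = None
```
With both cleared, generation runs for the full requested length instead of occasionally stopping early and failing the test.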
I couldn't find more occurrences of this pattern.
Example of a failed run fixed by this PR: https://app.circleci.com/jobs/github/huggingface/transformers/1099434 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29146",
"html_url": "https://github.com/huggingface/transformers/pull/29146",
"diff_url": "https://github.com/huggingface/transformers/pull/29146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29146.patch",
"merged_at": 1708445871000
} |
https://api.github.com/repos/huggingface/transformers/issues/29145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29145/comments | https://api.github.com/repos/huggingface/transformers/issues/29145/events | https://github.com/huggingface/transformers/issues/29145 | 2,144,556,865 | I_kwDOCUB6oc5_01dB | 29,145 | AI2 Olmo 7B does not support Flash-Attention 2.0. ValueError: OLMoForCausalLM does not support Flash Attention 2.0 yet. | {
"login": "KaifAhmad1",
"id": 98801504,
"node_id": "U_kgDOBeOXYA",
"avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaifAhmad1",
"html_url": "https://github.com/KaifAhmad1",
"followers_url": "https://api.github.com/users/KaifAhmad1/followers",
"following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}",
"gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions",
"organizations_url": "https://api.github.com/users/KaifAhmad1/orgs",
"repos_url": "https://api.github.com/users/KaifAhmad1/repos",
"events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaifAhmad1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | ### Model description
Model Name: allenai/OLMo-7B
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29145/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29144/comments | https://api.github.com/repos/huggingface/transformers/issues/29144/events | https://github.com/huggingface/transformers/pull/29144 | 2,144,483,260 | PR_kwDOCUB6oc5nZpun | 29,144 | bug-fix: avoid 'Expected all tensors to be on the same device' error when doing multi-GPU training | {
"login": "kallewoof",
"id": 250224,
"node_id": "MDQ6VXNlcjI1MDIyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/250224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kallewoof",
"html_url": "https://github.com/kallewoof",
"followers_url": "https://api.github.com/users/kallewoof/followers",
"following_url": "https://api.github.com/users/kallewoof/following{/other_user}",
"gists_url": "https://api.github.com/users/kallewoof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kallewoof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kallewoof/subscriptions",
"organizations_url": "https://api.github.com/users/kallewoof/orgs",
"repos_url": "https://api.github.com/users/kallewoof/repos",
"events_url": "https://api.github.com/users/kallewoof/events{/privacy}",
"received_events_url": "https://api.github.com/users/kallewoof/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | When doing DPO training, if the model has been split over multiple GPUs, the `tr_loss` and the `tr_loss_step` end up on different devices at some point, resulting in a
```
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1
```
error. This patch makes an explicit copy of the `tr_loss_step` value on the same device as the `tr_loss` value, when necessary.
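A minimal sketch of the change, assuming the surrounding training loop; the `tr_loss` / `tr_loss_step` names follow the Trainer internals:
```python
import torch

def accumulate_loss(tr_loss: torch.Tensor, tr_loss_step: torch.Tensor) -> torch.Tensor:
    """Accumulate the step loss, copying it to the running loss's device first
    so model-parallel runs don't mix tensors on cuda:0 and cuda:1."""
    if tr_loss_step.device != tr_loss.device:
        tr_loss_step = tr_loss_step.to(tr_loss.device)
    return tr_loss + tr_loss_step
```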
Ping @patrickvonplaten (git blame last touched), @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29144/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29144",
"html_url": "https://github.com/huggingface/transformers/pull/29144",
"diff_url": "https://github.com/huggingface/transformers/pull/29144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29144.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29143/comments | https://api.github.com/repos/huggingface/transformers/issues/29143/events | https://github.com/huggingface/transformers/pull/29143 | 2,144,476,455 | PR_kwDOCUB6oc5nZoPN | 29,143 | Llama: update rope scaling to match static cache changes | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29143). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
(see title :))
Review suggestion:
1. Review changes in Llama
2. Review the rest | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29143",
"html_url": "https://github.com/huggingface/transformers/pull/29143",
"diff_url": "https://github.com/huggingface/transformers/pull/29143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29143.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29142/comments | https://api.github.com/repos/huggingface/transformers/issues/29142/events | https://github.com/huggingface/transformers/pull/29142 | 2,144,430,707 | PR_kwDOCUB6oc5nZeOR | 29,142 | Add training version check for AQLM quantizer. | {
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29142). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Following this [PR](https://github.com/Vahe1994/AQLM/pull/26) from `aqlm` and this [PR](https://github.com/huggingface/peft/pull/1476) from `PEFT`, it is necessary to check whether AQLM supports training or not. It appears that this check is bypassed when not using the Trainer.
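A minimal sketch of the kind of version guard involved; the minimum version string below is an assumption, not the actual pin used by the PR:
```python
from importlib.metadata import version
from packaging.version import parse

AQLM_MIN_VERSION_FOR_TRAINING = "1.0.2"  # placeholder, not the real pin

def validate_aqlm_supports_training() -> None:
    """Raise if the installed aqlm version predates training support."""
    installed = parse(version("aqlm"))
    if installed < parse(AQLM_MIN_VERSION_FOR_TRAINING):
        raise ValueError(
            f"Training requires aqlm>={AQLM_MIN_VERSION_FOR_TRAINING}, "
            f"but version {installed} is installed."
        )
```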
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29142",
"html_url": "https://github.com/huggingface/transformers/pull/29142",
"diff_url": "https://github.com/huggingface/transformers/pull/29142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29142.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29141/comments | https://api.github.com/repos/huggingface/transformers/issues/29141/events | https://github.com/huggingface/transformers/pull/29141 | 2,144,232,619 | PR_kwDOCUB6oc5nYyzq | 29,141 | Save (circleci) cache at the end of a job | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29141). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
This way, `pytest` will run before the cache is saved, so we get the test results earlier when only a partial cache (or no cache) was loaded.
"url": "https://api.github.com/repos/huggingface/transformers/issues/29141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29141",
"html_url": "https://github.com/huggingface/transformers/pull/29141",
"diff_url": "https://github.com/huggingface/transformers/pull/29141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29141.patch",
"merged_at": 1708435896000
} |
https://api.github.com/repos/huggingface/transformers/issues/29140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29140/comments | https://api.github.com/repos/huggingface/transformers/issues/29140/events | https://github.com/huggingface/transformers/issues/29140 | 2,144,160,231 | I_kwDOCUB6oc5_zUnn | 29,140 | Drop path is ignored in DINOv2 | {
"login": "fepegar",
"id": 12688084,
"node_id": "MDQ6VXNlcjEyNjg4MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12688084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fepegar",
"html_url": "https://github.com/fepegar",
"followers_url": "https://api.github.com/users/fepegar/followers",
"following_url": "https://api.github.com/users/fepegar/following{/other_user}",
"gists_url": "https://api.github.com/users/fepegar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fepegar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fepegar/subscriptions",
"organizations_url": "https://api.github.com/users/fepegar/orgs",
"repos_url": "https://api.github.com/users/fepegar/repos",
"events_url": "https://api.github.com/users/fepegar/events{/privacy}",
"received_events_url": "https://api.github.com/users/fepegar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, thanks for the issue! I've checked out your branch, from what I'm seeing tests are passing on your fix, would you mind opening a PR? \r\nAlso, since this will affect training, do you have a script that compares both in a training scenario? AFAIK current integration tests for Dinov2 are not in a training setting.",
"Hi @molbap. Thanks for your response! I've created\r\n- #29147.\r\n\r\nThe high-level effects of these changes would take quite a lot of work to measure, but here's a little snippet:\r\n\r\n```python\r\n>>> import torch\r\n>>> from transformers import Dinov2Model\r\n>>> \r\n>>> torch.set_grad_enabled(False)\r\n>>> torch.manual_seed(0)\r\n>>> \r\n>>> model = Dinov2Model.from_pretrained(\"facebook/dinov2-base\", drop_path_rate=0.3)\r\n>>> model.train()\r\n>>> \r\n>>> x = torch.rand(1, 3, 224, 224)\r\n>>> out_1 = model(x)\r\n>>> out_2 = model(x)\r\n>>> torch.all(out_1.last_hidden_state == out_2.last_hidden_state)\r\n```\r\n\r\nThe output is `tensor(True)` in `main`, indicating that the depth is deterministic because the drop path rate is not being used, where it's `tensor(False)` in my branch due to stochastic depth being properly enabled."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: not necessarily
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm not getting any errors, and I think sharing a script doesn't make sense in this case.
The issue is simply that two `Dinov2DropPath` layers are being instantiated in the `Dinov2Layer`:
https://github.com/huggingface/transformers/blob/7d312ad2e9473cd3a0ea3e9b206b8ed3c147e9be/src/transformers/models/dinov2/modeling_dinov2.py#L374-L392
But they're not being used anywhere else:
https://github.com/huggingface/transformers/blob/7d312ad2e9473cd3a0ea3e9b206b8ed3c147e9be/src/transformers/models/dinov2/modeling_dinov2.py#L394-L423
### Expected behavior
These layers should probably not be ignored. Moreover, I think there's no reason to instantiate two different ones.
I've implemented a fix in https://github.com/fepegar/transformers/pull/1/files. Please let me know if you'd like me to open a PR here.
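For illustration, here is a self-contained sketch of the stochastic-depth behavior the fix restores; this `DropPath` mirrors the semantics of `Dinov2DropPath`, not its exact code:
```python
import torch
from torch import nn

class DropPath(nn.Module):
    """Stochastic depth: drop a whole residual branch per sample in training."""

    def __init__(self, drop_prob: float = 0.0):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep_prob = 1.0 - self.drop_prob
        # One Bernoulli draw per sample, broadcast over the remaining dims.
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = x.new_empty(shape).bernoulli_(keep_prob)
        return x * mask / keep_prob

# In the layer's forward, each residual branch should pass through it, e.g.:
# hidden_states = drop_path(attention_output) + hidden_states
```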
I've tried running `pytest`, but I'm getting some `pytest`-related errors (not failing tests). Happy to report that somewhere else as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29139/comments | https://api.github.com/repos/huggingface/transformers/issues/29139/events | https://github.com/huggingface/transformers/issues/29139 | 2,144,132,992 | I_kwDOCUB6oc5_zN-A | 29,139 | past_key_values for SeamlessM4Tv2ForSpeechToText is not working as expected | {
"login": "vapemaster-kz",
"id": 65128133,
"node_id": "MDQ6VXNlcjY1MTI4MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65128133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vapemaster-kz",
"html_url": "https://github.com/vapemaster-kz",
"followers_url": "https://api.github.com/users/vapemaster-kz/followers",
"following_url": "https://api.github.com/users/vapemaster-kz/following{/other_user}",
"gists_url": "https://api.github.com/users/vapemaster-kz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vapemaster-kz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vapemaster-kz/subscriptions",
"organizations_url": "https://api.github.com/users/vapemaster-kz/orgs",
"repos_url": "https://api.github.com/users/vapemaster-kz/repos",
"events_url": "https://api.github.com/users/vapemaster-kz/events{/privacy}",
"received_events_url": "https://api.github.com/users/vapemaster-kz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ylacombe "
] | 1,708 | 1,708 | null | NONE | null | ### System Info
transformers version: 4.37.2
python verison: 3.8.6.
OS: Windows 11
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have segments of audio, and I would like to pass past_key_values between them. I expected the transcription quality to increase, but instead it became unreadable.
```python
from transformers import AutoProcessor, SeamlessM4Tv2ForSpeechToText

processor = AutoProcessor.from_pretrained(path_to_model)
model = SeamlessM4Tv2ForSpeechToText.from_pretrained(path_to_model)
audio_chunks = [audio_segments]  # list of VAD-segmented audio arrays
past_key_values = None
for i in range(5):
    audio_inputs = processor(audios=audio_chunks[i], return_tensors="pt", sampling_rate=16_000)
    output = model.generate(**audio_inputs, tgt_lang="rus", repetition_penalty=1.1, return_dict_in_generate=True, past_key_values=past_key_values)
    tmp_result = processor.decode(output[0][0], skip_special_tokens=True)
    past_key_values = output['past_key_values']
```
### Expected behavior
The transcription quality is supposed to increase when I pass past_key_values (or at least stay similar to when past_key_values=None).
The audio is the same. In other words, I had some audio, applied VAD to segment it into batches, and then fed these segments to the model one by one.
"url": "https://api.github.com/repos/huggingface/transformers/issues/29139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29139/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29138/comments | https://api.github.com/repos/huggingface/transformers/issues/29138/events | https://github.com/huggingface/transformers/pull/29138 | 2,144,115,768 | PR_kwDOCUB6oc5nYZN3 | 29,138 | Fix ROPE embeddings for LLama | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
This [test](https://app.circleci.com/pipelines/github/huggingface/transformers/84847/workflows/2a5e5769-9431-4e2b-babb-81a112558a97/jobs/1098065) failed on my PR, and I checked to see the reason. I found that the changes introduced to make llama compile-compatible are causing the issue.
The fixes here are tested with fullgraph compile; compilation still works without graph breaks. Additionally, the failing [test](https://app.circleci.com/pipelines/github/huggingface/transformers/84847/workflows/2a5e5769-9431-4e2b-babb-81a112558a97/jobs/1098065) was run 500 times. I found that, aside from the rope embeddings, the cause of the test failure was in SDPA attention. I cannot say exactly what the reason is, but with the fixes introduced in this PR, running the test 500 times gives 95% success with SDPA and 100% success with eager. Prior to these fixes, the tests ran with 90% success for both attention implementations.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29138",
"html_url": "https://github.com/huggingface/transformers/pull/29138",
"diff_url": "https://github.com/huggingface/transformers/pull/29138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29138.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29137/comments | https://api.github.com/repos/huggingface/transformers/issues/29137/events | https://github.com/huggingface/transformers/issues/29137 | 2,144,069,859 | I_kwDOCUB6oc5_y-jj | 29,137 | transformers.AutoTokenizer.from_pretrained( ... use_Fast=False) fails with 'TypeError: not a string' for some tokenizers | {
"login": "Jeronymous",
"id": 22522728,
"node_id": "MDQ6VXNlcjIyNTIyNzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22522728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeronymous",
"html_url": "https://github.com/Jeronymous",
"followers_url": "https://api.github.com/users/Jeronymous/followers",
"following_url": "https://api.github.com/users/Jeronymous/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeronymous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jeronymous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeronymous/subscriptions",
"organizations_url": "https://api.github.com/users/Jeronymous/orgs",
"repos_url": "https://api.github.com/users/Jeronymous/repos",
"events_url": "https://api.github.com/users/Jeronymous/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jeronymous/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker ",
"Hey! Thanks for reporting. \r\n`tokenizer.Load(self.vocab_file)` seems to be the issue here. If you check the repo it does not have the `tokenizer.model` .\r\nYou should raise the issue there! \r\n",
"Thanks @ArthurZucker 👍 "
] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Also I tried the following versions:
- `tokenizers` version: 0.15.0 and 0.15.2 (latest)
- `sentencepiece` version: 0.1.99 and 0.2.0 (latest)
### Who can help?
[ArthurZucker](https://github.com/ArthurZucker)
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(that may be a duplicate of https://github.com/huggingface/transformers/issues/27845)
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("croissantllm/CroissantLLMBase", use_fast=False)
```
This fails with
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-9fe531439285> in <module>
1 import transformers
----> 2 tokenizer = transformers.AutoTokenizer.from_pretrained("croissantllm/CroissantLLMBase", use_fast=False)
~/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
812 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
813 )
--> 814 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
815
816 # Otherwise we have to be creative.
~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
2027 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
2028
-> 2029 return cls._from_pretrained(
2030 resolved_vocab_files,
2031 pretrained_model_name_or_path,
~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs)
2259 # Instantiate the tokenizer.
2260 try:
-> 2261 tokenizer = cls(*init_inputs, **init_kwargs)
2262 except OSError:
2263 raise OSError(
~/.local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py in __init__(self, vocab_file, unk_token, bos_token, eos_token, pad_token, sp_model_kwargs, add_bos_token, add_eos_token, clean_up_tokenization_spaces, use_default_system_prompt, spaces_between_special_tokens, legacy, **kwargs)
176 self.add_eos_token = add_eos_token
177 self.use_default_system_prompt = use_default_system_prompt
--> 178 self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
179
180 super().__init__(
~/.local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py in get_spm_processor(self, from_slow)
201 tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs)
202 if self.legacy or from_slow: # no dependency on protobuf
--> 203 tokenizer.Load(self.vocab_file)
204 return tokenizer
205
/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto)
903 if model_proto:
904 return self.LoadFromSerializedProto(model_proto)
--> 905 return self.LoadFromFile(model_file)
906
907
/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py in LoadFromFile(self, arg)
308
309 def LoadFromFile(self, arg):
--> 310 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
311
312 def _EncodeAsIds(self, text, enable_sampling, nbest_size, alpha, add_bos, add_eos, reverse, emit_unk_piece):
TypeError: not a string
```
### Expected behavior
I would expect that tokenizer to load.
(Note: I had this error while investigating why the fast tokenizer does not scale well with the text length, but this is another issue) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29136/comments | https://api.github.com/repos/huggingface/transformers/issues/29136/events | https://github.com/huggingface/transformers/pull/29136 | 2,144,048,828 | PR_kwDOCUB6oc5nYKjd | 29,136 | Generate: low memory tests are flaky | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29136). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts #29109 seems to have fixed most of the issue (this test does compare batched vs unbatched generation under the hood, which the PR linked above fixes)\r\n\r\nThe root issue about this and other tests being non-deterministic tests persists, though :) I'm going to close the PR and move the discussion to slack at a future time :)"
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
As identified by @molbap -- generate tests with the `low_memory` flag are flaky. The full reason is the same as explained in [this comment](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).
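For context, a minimal sketch of the comparison these tests effectively make (the model choice here is just an assumption for illustration):
```python
# sketch: low_memory=True runs beam search sequentially to reduce peak memory;
# the flaky tests compare its output against the default batched path
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The quick brown fox", return_tensors="pt")

out_low = model.generate(**inputs, num_beams=2, low_memory=True, max_new_tokens=16)
out_ref = model.generate(**inputs, num_beams=2, low_memory=False, max_new_tokens=16)
print(torch.equal(out_low, out_ref))  # occasionally False due to tiny logit differences
```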
The error likelihood is low (~3%), but it is still quite disruptive for transformers devs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29136",
"html_url": "https://github.com/huggingface/transformers/pull/29136",
"diff_url": "https://github.com/huggingface/transformers/pull/29136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29136.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29135/comments | https://api.github.com/repos/huggingface/transformers/issues/29135/events | https://github.com/huggingface/transformers/pull/29135 | 2,144,037,386 | PR_kwDOCUB6oc5nYICS | 29,135 | Revert low cpu mem tie weights | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29135). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sounds good, thanks for taking care of this!"
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Reverts #28948 and #29043
See relevant comment: https://github.com/huggingface/transformers/pull/29110#issuecomment-1953847826
cc @hackyon @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29135/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29135",
"html_url": "https://github.com/huggingface/transformers/pull/29135",
"diff_url": "https://github.com/huggingface/transformers/pull/29135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29135.patch",
"merged_at": 1708430807000
} |
https://api.github.com/repos/huggingface/transformers/issues/29134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29134/comments | https://api.github.com/repos/huggingface/transformers/issues/29134/events | https://github.com/huggingface/transformers/pull/29134 | 2,143,960,967 | PR_kwDOCUB6oc5nX3V4 | 29,134 | Add generate kwargs to VQA pipeline | {
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29134). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29134",
"html_url": "https://github.com/huggingface/transformers/pull/29134",
"diff_url": "https://github.com/huggingface/transformers/pull/29134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29134.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29133/comments | https://api.github.com/repos/huggingface/transformers/issues/29133/events | https://github.com/huggingface/transformers/pull/29133 | 2,143,951,741 | PR_kwDOCUB6oc5nX1Va | 29,133 | [`cuda kernels`] only compile them when initializing | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'll make sure of that before merging! Testing now!",
"```bash\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head - AssertionError: False is not true\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_equivalence_cpu_gpu - AssertionError: assert False\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_with_box_refine_two_stage - AssertionError: False is not true\r\n```\r\nfailing. Logits do no match, failing on main as well\r\n\r\n```bash \r\nFAILED tests/models/deta/test_modeling_deta.py::DetaModelIntegrationTests::test_inference_object_detection_head - AssertionError: False is not true\r\nFAILED tests/models/deta/test_modeling_deta.py::DetaModelIntegrationTests::test_inference_object_detection_head_swin_backbone - AssertionError: False is not true\r\n```\r\nfailing as well on main. Merging\r\n\r\nYoso is alright"
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Fixes #29130: import time goes from 1 min to 6 seconds. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29133/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29133",
"html_url": "https://github.com/huggingface/transformers/pull/29133",
"diff_url": "https://github.com/huggingface/transformers/pull/29133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29133.patch",
"merged_at": 1708429139000
} |
https://api.github.com/repos/huggingface/transformers/issues/29132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29132/comments | https://api.github.com/repos/huggingface/transformers/issues/29132/events | https://github.com/huggingface/transformers/issues/29132 | 2,143,872,350 | I_kwDOCUB6oc5_yOVe | 29,132 | SPAM | {
"login": "cook9019",
"id": 141466977,
"node_id": "U_kgDOCG6dYQ",
"avatar_url": "https://avatars.githubusercontent.com/u/141466977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cook9019",
"html_url": "https://github.com/cook9019",
"followers_url": "https://api.github.com/users/cook9019/followers",
"following_url": "https://api.github.com/users/cook9019/following{/other_user}",
"gists_url": "https://api.github.com/users/cook9019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cook9019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cook9019/subscriptions",
"organizations_url": "https://api.github.com/users/cook9019/orgs",
"repos_url": "https://api.github.com/users/cook9019/repos",
"events_url": "https://api.github.com/users/cook9019/events{/privacy}",
"received_events_url": "https://api.github.com/users/cook9019/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29132/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29131/comments | https://api.github.com/repos/huggingface/transformers/issues/29131/events | https://github.com/huggingface/transformers/pull/29131 | 2,143,812,725 | PR_kwDOCUB6oc5nXWfA | 29,131 | added the max_matching_ngram_size to GenerationConfig | {
"login": "mosheber",
"id": 22236370,
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosheber",
"html_url": "https://github.com/mosheber",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"repos_url": "https://api.github.com/users/mosheber/repos",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
* Added the `max_matching_ngram_size` parameter to `GenerationConfig`, for the `PromptLookupCandidateGenerator`.
* Passed `max_matching_ngram_size` to the `__init__` of `PromptLookupCandidateGenerator` in `_get_candidate_generator` when it is specified (usage sketch below).
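A quick usage sketch of what this enables (`prompt_lookup_num_tokens` already triggers the prompt lookup candidate generator; `max_matching_ngram_size` is the knob exposed by this PR):
```python
# sketch: the model choice here is illustrative only
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The quick brown fox jumps over the lazy dog. The quick brown", return_tensors="pt")

out = model.generate(
    **inputs,
    prompt_lookup_num_tokens=10,  # enables PromptLookupCandidateGenerator
    max_matching_ngram_size=2,    # new parameter added by this PR
    max_new_tokens=20,
)
```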
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante, I would appreciate it if you could give this PR a glance. Thank you in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29131",
"html_url": "https://github.com/huggingface/transformers/pull/29131",
"diff_url": "https://github.com/huggingface/transformers/pull/29131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29131.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29130/comments | https://api.github.com/repos/huggingface/transformers/issues/29130/events | https://github.com/huggingface/transformers/issues/29130 | 2,143,788,296 | I_kwDOCUB6oc5_x50I | 29,130 | Move kernel compilation to init rather than at import stage | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | ### Feature request
Some models like Deformable DETR rely on custom CUDA kernels to be compiled as seen [here](https://github.com/huggingface/transformers/blob/f7ef7cec6c6c162087421f36a17eabdbb223579d/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L54).
Currently these are compiled when importing the Transformers library, but this needs to happen later, when initializing the models.
All custom CUDA kernels are defined here: https://github.com/huggingface/transformers/tree/main/src/transformers/kernels
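For reference, a minimal sketch of the lazy pattern being asked for (names loosely mirror the Deformable DETR file linked above; treat this as an illustration, not the final implementation):
```python
# sketch: defer kernel compilation from import time to first model init
import torch
from torch import nn

MultiScaleDeformableAttention = None  # module-level handle, filled in lazily


def load_cuda_kernels():
    # stand-in for the real torch.utils.cpp_extension.load(...) call
    # in modeling_deformable_detr.py
    ...


class DeformableAttentionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        global MultiScaleDeformableAttention
        if MultiScaleDeformableAttention is None and torch.cuda.is_available():
            # kernels get compiled only when a model is actually instantiated,
            # not when `import transformers` runs
            MultiScaleDeformableAttention = load_cuda_kernels()
```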
### Motivation
This is pretty important to fix: currently, running `make fixup` on machines that have `ninja` installed will compile all these kernels before running the quality checks, making it super slow. Thanks @younesbelkada for the info!
### Your contribution
Not sure I can help with this; the current workaround is simply removing `ninja` from the environment | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29130/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29130/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29129/comments | https://api.github.com/repos/huggingface/transformers/issues/29129/events | https://github.com/huggingface/transformers/issues/29129 | 2,143,773,084 | I_kwDOCUB6oc5_x2Gc | 29,129 | Flash attention implementation with BERT base model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Not that expert but I suggest you can try bettertransformer for extreme speed up. ( In my knowledge that flash-attn is mainly focused on kv cache which is not exist on Bert-like model in most cases. )",
"> Not that expert but I suggest you can try bettertransformer for extreme speed up. ( In my knowledge that flash-attn is mainly focused on kv cache which is not exist on Bert-like model in most cases. )\r\n\r\nmosaic bert is based on flash attention. ",
"Hi @rahul-k01, \r\n\r\nPlease make sure to only tag a limited set of relevant people when opening an issue - everyone is very busy and if this was done on all issues we wouldn't be able to meaningfully address our notifications. \r\n\r\nFlashAttention isn't implemented yet for BERT. There's an open issue where you can track which models have it added and on-going work: #26350. At the moment, it seems @filippo82 is working on the addition for BERT. ",
"> Hi @rahul-k01,\r\n> \r\n> Please make sure to only tag a limited set of relevant people when opening an issue - everyone is very busy and if this was done on all issues we wouldn't be able to meaningfully address our notifications.\r\n> \r\n> FlashAttention isn't implemented yet for BERT. There's an open issue where you can track which models have it added and on-going work: #26350. At the moment, it seems @filippo82 is working on the addition for BERT.\r\n\r\nThanks for your response\r\n"
] | 1,708 | 1,708 | null | NONE | null | ### Model description
Hello, and thanks to the community.
I am trying to replace standard attention with flash attention in the BERT base model. Can anyone help? I am not able to find any tutorial or discussion.
Or just give some directions on how to do that. I have the idea of setting the attention probability dropout to 0; it makes sense, but I am not sure how it is going to work.
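In the meantime, a sketch of one interim option (BetterTransformer routes BERT attention through PyTorch's fused SDPA kernels; it requires `pip install optimum` and is not FlashAttention-2 itself):
```python
# sketch: fused-attention BERT via BetterTransformer while FA2 support is pending
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16).to("cuda")
model = model.to_bettertransformer()  # swaps in torch.nn.functional.scaled_dot_product_attention
```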
@tridao
@arthur
@jamaliki
@sorenmc
@LysandreJik @ArthurZucker
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29129/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29128/comments | https://api.github.com/repos/huggingface/transformers/issues/29128/events | https://github.com/huggingface/transformers/issues/29128 | 2,143,692,799 | I_kwDOCUB6oc5_xif_ | 29,128 | bart-large-xsum model: There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight']. | {
"login": "Aisuko",
"id": 8053949,
"node_id": "MDQ6VXNlcjgwNTM5NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8053949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aisuko",
"html_url": "https://github.com/Aisuko",
"followers_url": "https://api.github.com/users/Aisuko/followers",
"following_url": "https://api.github.com/users/Aisuko/following{/other_user}",
"gists_url": "https://api.github.com/users/Aisuko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aisuko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aisuko/subscriptions",
"organizations_url": "https://api.github.com/users/Aisuko/orgs",
"repos_url": "https://api.github.com/users/Aisuko/repos",
"events_url": "https://api.github.com/users/Aisuko/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aisuko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ArthurZucker @younesbelkada ",
"Hey @Aisuko, could you provide a **minimal** reproducer ? That would help use! \r\nAlso note that the `generation parameters` issues can probably be safely ignored. The missing keys is however a bit more problematic! \r\nMight be tied weights that are not tied properly, is `tie_word_embeddings` used ? ",
"Hi, guys. Thanks for your quick response.\r\n\r\nThe minimal code see below, the code only including the steps of processing data and training. And we cat get same result from it. https://www.kaggle.com/code/aisuko/minimal-reproducer-for-issue-29128/notebook\r\n\r\nThe embedding process without using `tie_word_embeddings` parameter.\r\n\r\n\r\n## libraries\r\n\r\n```\r\n!pip install transformers==4.37.2\r\n!pip install datasets==2.17.0\r\n!pip install evaluate==0.4.1\r\n!pip install rouge-score==0.1.2\r\n```\r\n\r\n## Code\r\n```\r\n# Import libraries\r\nimport os\r\nimport re\r\nimport nltk\r\nimport pandas as pd\r\nimport numpy as np\r\nimport warnings\r\nfrom datasets import Dataset\r\nfrom datasets import load_metric\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\nfrom transformers import BartForConditionalGeneration\r\nfrom transformers import DataCollatorForSeq2Seq\r\nfrom transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n\r\nos.environ['MODEL']='facebook/bart-large-xsum'\r\nos.environ[\"WANDB_NAME\"] = \"ft-facebook-bart-large-xsum-on-samsum\"\r\n\r\nwarnings.filterwarnings('ignore')\r\n\r\n# Loading and preprocessing data from https://www.kaggle.com/datasets/nileshmalode1/samsum-dataset-text-summarization\r\ntrain=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-train.csv')\r\ntest=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-test.csv')\r\nval=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-validation.csv')\r\n\r\ndef clean_tags(text):\r\n clean=re.compile('<.*?>') # compiling tags\r\n clean=re.sub(clean, '', text) # replacing tags text by an empty string\r\n \r\n # removing empty dialogues\r\n clean='\\n'.join([line for line in clean.split('\\n') if not re.match('.*:\\s*$', line)])\r\n return clean\r\n\r\ndef clean_df(df, cols):\r\n for col in cols:\r\n df[col]=df[col].fillna('').apply(clean_tags)\r\n return df\r\n\r\ntrain=clean_df(train, ['dialogue','summary'])\r\ntest=clean_df(test, ['dialogue', 'summary'])\r\nval=clean_df(val, ['dialogue', 'summary'])\r\n\r\ntrain_ds=Dataset.from_pandas(train)\r\ntest_ds=Dataset.from_pandas(test)\r\nval_ds=Dataset.from_pandas(val)\r\n\r\n# Tokenizer\r\ntokenizer=BartTokenizer.from_pretrained(os.getenv('MODEL'))\r\n\r\ndef preprocess_func(example):\r\n # Iterating over every `dialogue` in the datset and saving them as input to the model\r\n inputs=[doc for doc in example['dialogue']]\r\n # we use tokenizer convert the input dialogues into tokens that can be easily understood by the BART model.\r\n # The truncation=True parameter ensures that all dialogues have a maximum number of 1024 tokens, as defined by the `max_length` parameter\r\n model_inputs=tokenizer(inputs, max_length=1024, truncation=True)\r\n \r\n # Setup the tokenizer for targets\r\n with tokenizer.as_target_tokenizer():\r\n # we tokenizes the target variable, which is our summaries. 
And we expect summaries to be a much shorter text than that of dialogues max_length=128\r\n labels=tokenizer(example['summary'], max_length=128, truncation=True)\r\n \r\n # we adding the tokenized labels to the preprocessed dataset, alongside the tokenized inputs.\r\n model_inputs['labels']=labels['input_ids']\r\n return model_inputs\r\n\r\n\r\ntokenized_train= train_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\ntokenized_test=test_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\ntokenized_val=val_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\n\r\n# Loading the model\r\nmodel=BartForConditionalGeneration.from_pretrained(os.getenv('MODEL'))\r\n\r\n# Loading DataCollator\r\ndata_collator= DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)\r\n\r\n# Customizing metrics\r\nmetric=load_metric('rouge')\r\n\r\nnltk.download('punkt') # this divides a text into a list of sentences\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels=eval_pred # obtaining predictions and true labels\r\n \r\n # decoding predictions\r\n decoded_preds=tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n \r\n # obtaining the true labels tokens, while eliminating any possible masked token (i.e: label=-100)\r\n labels=np.where(labels!=-100, labels, tokenizer.pad_token_id)\r\n decoded_labels=tokenizer.batch_decode(labels, skip_special_tokens=True)\r\n \r\n # rouge expects a newline after each sentence\r\n decoded_preds=['\\n'.join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]\r\n decoded_labels=['\\n'.join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]\r\n \r\n # computing rouge score\r\n result=metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n result={key: value.mid.fmeasure*100 for key, value in result.items()} # extracting some results\r\n \r\n # add mean-genrated length\r\n prediction_lens=[np.count_nonzero(pred!=tokenizer.pad_token_id) for pred in predictions]\r\n result['gen_len']=np.mean(prediction_lens)\r\n return {k: round(v,4) for k,v in result.items()}\r\n\r\n\r\n# Training\r\ntraining_args=Seq2SeqTrainingArguments(\r\n output_dir=os.getenv('WANDB_NAME'),\r\n evaluation_strategy='epoch',\r\n save_strategy='epoch',\r\n load_best_model_at_end=True,\r\n metric_for_best_model='eval_loss',\r\n seed=42,\r\n learning_rate=2e-5,\r\n max_steps=100,\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n gradient_accumulation_steps=4,\r\n weight_decay=0.01,\r\n save_total_limit=2,\r\n num_train_epochs=1, # only for testing\r\n predict_with_generate=True,\r\n fp16=True,\r\n report_to='none',\r\n)\r\n\r\ntrainer=Seq2SeqTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_train,\r\n eval_dataset=tokenized_test,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics\r\n)\r\n\r\ntrainer.train()\r\n```"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23.dev20240116
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Thanks for your great work.
Please take a look at the notebook below in Kaggle. https://www.kaggle.com/code/aisuko/text-summarization-with-bart-series-llm/notebook
After the training process finishes, it shows the warning message below
```
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 62, 'min_length': 11, 'early_stopping': True, 'num_beams': 6, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight'].
```
And the fine-tuned model cannot be used for inference. I saw a similar type of issue https://github.com/huggingface/transformers/issues/27972
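For what it's worth, in BART those three keys are normally tied views of the shared embedding, so a quick sanity check on the reloaded model could look like this (a sketch, assuming the standard `facebook/bart-large-xsum` layout):
```python
# sketch: verify the "missing" keys are just tied aliases of model.model.shared
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
shared = model.model.shared.weight
print(model.config.tie_word_embeddings)
print(shared.data_ptr() == model.model.encoder.embed_tokens.weight.data_ptr())
print(shared.data_ptr() == model.model.decoder.embed_tokens.weight.data_ptr())
print(shared.data_ptr() == model.lm_head.weight.data_ptr())
```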
### Expected behavior
No warning issue and I can use the fine-tuned model to do inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29128/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29127/comments | https://api.github.com/repos/huggingface/transformers/issues/29127/events | https://github.com/huggingface/transformers/issues/29127 | 2,143,620,996 | I_kwDOCUB6oc5_xQ-E | 29,127 | err_handle(layoutlmv3): Error message doesn't give much clarity when boxes not containing enough information | {
"login": "Sushaanth-Suresh-Kumar",
"id": 123300765,
"node_id": "U_kgDOB1lrnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123300765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sushaanth-Suresh-Kumar",
"html_url": "https://github.com/Sushaanth-Suresh-Kumar",
"followers_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/followers",
"following_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/orgs",
"repos_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/repos",
"events_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Would you like to open a PR to improve the error? 🤗 ",
"Sure"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Model I am using: LayoutLMv3.
When `boxes = [[123, 53], [36, 87], ...]` (basically any list that does not follow the proper format;
by proper format I mean `[[123, 346, 234, 634], [356, 568, 234, 25], ...]`), the following call fails:
```python
encoding = processor(
image_1,
text,
boxes=boxes,
max_length=512,
padding="max_length",
truncation=True,
return_tensors="pt"
)
```
It produces this error message:
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (labels in this case) have excessive nesting (inputs type list where type int is expected).
```
**To Reproduce**
Steps to reproduce the behavior:
1. Add any list of boxes with not enough values, like `boxes = [[123, 53], [36, 87], ...]`
2. When run, it throws the ValueError mentioned above (see the guard sketch below)
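A minimal sketch of the guard (its exact placement inside the processor is hypothetical):
```python
# sketch: validate box shape up front instead of failing at tensor creation
def validate_boxes(boxes):
    for box in boxes:
        if len(box) != 4:
            raise ValueError(
                f"Each box should contain 4 values (x0, y0, x1, y1), got {len(box)}: {box}"
            )

validate_boxes([[123, 346, 234, 634], [356, 568, 234, 25]])  # passes
validate_boxes([[123, 53], [36, 87]])                        # raises ValueError
```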
### Expected behavior
It could throw an error saying:
```
ValueError: boxes doesn't have enough values inside each box. Each box should contain 4 values
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29127/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29126/comments | https://api.github.com/repos/huggingface/transformers/issues/29126/events | https://github.com/huggingface/transformers/issues/29126 | 2,143,539,045 | I_kwDOCUB6oc5_w89l | 29,126 | WARNING: tokenization mismatch: 43 vs. 44. (ignored) | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @lucasjinreal, \r\n\r\nWithout a code sample to replicate, information about the running environment or more information about the error - including full trackback - there isn't much we can do to help you here."
] | 1,708 | 1,708 | null | NONE | null | Recently, many errors like this appear from either the FastChat or LLaVA code base when using the latest transformers.
WARNING: tokenization mismatch: 43 vs. 44. (ignored)
Why does this happen, and how can I dismiss it? Will it affect the final training result? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29126/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29125/comments | https://api.github.com/repos/huggingface/transformers/issues/29125/events | https://github.com/huggingface/transformers/pull/29125 | 2,143,504,797 | PR_kwDOCUB6oc5nWUBE | 29,125 | feat: Upgrade Weights & Biases callback | {
"login": "parambharat",
"id": 12809212,
"node_id": "MDQ6VXNlcjEyODA5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parambharat",
"html_url": "https://github.com/parambharat",
"followers_url": "https://api.github.com/users/parambharat/followers",
"following_url": "https://api.github.com/users/parambharat/following{/other_user}",
"gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parambharat/subscriptions",
"organizations_url": "https://api.github.com/users/parambharat/orgs",
"repos_url": "https://api.github.com/users/parambharat/repos",
"events_url": "https://api.github.com/users/parambharat/events{/privacy}",
"received_events_url": "https://api.github.com/users/parambharat/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds a few new functionalities to the Weights & Biases callback; a short usage sketch follows the list below.
- Logs Peft and Lora Config to wandb if present
- Adds model parameter counts to wandb config and artifact metadata
- Adds on_predict methods to log prediction metrics
- Prints the model architecture to a file alongside the wandb artifact
- Logs initial and final models to the wandb artifact for full reproducibility
- Adds steps and epoch aliases to checkpoint artifacts
- Here's a [link](https://wandb.ai/parambharat/test-transformers/artifacts/model/model-rg4pcjcv/v3) to what the logged artifacts look like
- Here's a run [overview page](https://wandb.ai/parambharat/test-transformers/runs/rg4pcjcv/overview?workspace=user-parambharat) with added config and metadata for the run with peft configs logged
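For context, the sketch below shows the standard wiring that activates the callback (the project name is illustrative); the new logging described above then happens automatically during `Trainer` runs:
```python
# sketch: enabling the W&B callback through TrainingArguments
import os
from transformers import TrainingArguments

os.environ["WANDB_PROJECT"] = "test-transformers"  # illustrative project name
os.environ["WANDB_LOG_MODEL"] = "checkpoint"       # upload checkpoints as artifacts

args = TrainingArguments(
    output_dir="out",
    report_to="wandb",  # enables WandbCallback
    logging_steps=10,
)
```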
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
## Who can review?
- trainer: @muellerzr and @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29125/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29125",
"html_url": "https://github.com/huggingface/transformers/pull/29125",
"diff_url": "https://github.com/huggingface/transformers/pull/29125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29125.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29124/comments | https://api.github.com/repos/huggingface/transformers/issues/29124/events | https://github.com/huggingface/transformers/pull/29124 | 2,143,420,111 | PR_kwDOCUB6oc5nWBoW | 29,124 | added unrolled whisper_generation.py | {
"login": "robertgshaw2-neuralmagic",
"id": 114415538,
"node_id": "U_kgDOBtHXsg",
"avatar_url": "https://avatars.githubusercontent.com/u/114415538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robertgshaw2-neuralmagic",
"html_url": "https://github.com/robertgshaw2-neuralmagic",
"followers_url": "https://api.github.com/users/robertgshaw2-neuralmagic/followers",
"following_url": "https://api.github.com/users/robertgshaw2-neuralmagic/following{/other_user}",
"gists_url": "https://api.github.com/users/robertgshaw2-neuralmagic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robertgshaw2-neuralmagic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robertgshaw2-neuralmagic/subscriptions",
"organizations_url": "https://api.github.com/users/robertgshaw2-neuralmagic/orgs",
"repos_url": "https://api.github.com/users/robertgshaw2-neuralmagic/repos",
"events_url": "https://api.github.com/users/robertgshaw2-neuralmagic/events{/privacy}",
"received_events_url": "https://api.github.com/users/robertgshaw2-neuralmagic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29124",
"html_url": "https://github.com/huggingface/transformers/pull/29124",
"diff_url": "https://github.com/huggingface/transformers/pull/29124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29124.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29123/comments | https://api.github.com/repos/huggingface/transformers/issues/29123/events | https://github.com/huggingface/transformers/pull/29123 | 2,143,416,822 | PR_kwDOCUB6oc5nWA8d | 29,123 | [`Core generation`] Let's be less restrictive on the arguments passed to the generation calls. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29123). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Updates the generation calls to be less restrictive about the arguments they accept. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29123/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29123",
"html_url": "https://github.com/huggingface/transformers/pull/29123",
"diff_url": "https://github.com/huggingface/transformers/pull/29123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29123.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29122/comments | https://api.github.com/repos/huggingface/transformers/issues/29122/events | https://github.com/huggingface/transformers/pull/29122 | 2,143,413,555 | PR_kwDOCUB6oc5nWARN | 29,122 | FIX [`bnb` / `tests`] Propagate the changes from #29092 to 4-bit tests | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29122). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
As per title, I overlooked the fix and forgot to push the changes of https://github.com/huggingface/transformers/pull/29092 in 4-bit tests 😢
cc @amyeroberts @Titus-von-Koeller | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29122",
"html_url": "https://github.com/huggingface/transformers/pull/29122",
"diff_url": "https://github.com/huggingface/transformers/pull/29122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29122.patch",
"merged_at": 1708423875000
} |
https://api.github.com/repos/huggingface/transformers/issues/29121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29121/comments | https://api.github.com/repos/huggingface/transformers/issues/29121/events | https://github.com/huggingface/transformers/issues/29121 | 2,143,187,142 | I_kwDOCUB6oc5_vnDG | 29,121 | AttributeError: 'DistilBertModel' object has no attribute '_use_flash_attention_2' | {
"login": "javilonso",
"id": 31996659,
"node_id": "MDQ6VXNlcjMxOTk2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31996659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javilonso",
"html_url": "https://github.com/javilonso",
"followers_url": "https://api.github.com/users/javilonso/followers",
"following_url": "https://api.github.com/users/javilonso/following{/other_user}",
"gists_url": "https://api.github.com/users/javilonso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/javilonso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javilonso/subscriptions",
"organizations_url": "https://api.github.com/users/javilonso/orgs",
"repos_url": "https://api.github.com/users/javilonso/repos",
"events_url": "https://api.github.com/users/javilonso/events{/privacy}",
"received_events_url": "https://api.github.com/users/javilonso/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @javilonso ! \r\nI quickly tried on transformers main: \r\n```python\r\nfrom transformers import pipeline\r\n\r\nunmasker = pipeline('fill-mask', model='distilbert-base-uncased')\r\nunmasker(\"Hello I'm a [MASK] model.\")\r\n```\r\nBut I did not managed to repro, can you share a snippet to reproduce the issue?\r\nI also tried:\r\n```python\r\nfrom transformers import DistilBertTokenizer, DistilBertModel\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\nmodel = DistilBertModel.from_pretrained(\"distilbert-base-uncased\")\r\ntext = \"Replace me by any text you'd like.\"\r\n\r\nencoded_input = tokenizer(text, return_tensors='pt')\r\noutput = model(**encoded_input)\r\n```\r\nCan you also try on transformers main?",
"Hi, I am also facing the same issue, the code @younesbelkada gave works well on my system. However, I get\r\n `AttributeError: DistilBertModel' object has no attribute '_use_flash_attention_2` \r\nwhen running my prediction with my finetuned distilbert model. I am also running transformers 4.37.2. It works fine on 4.35.2.\r\n\r\nThe error happens when trying to perform the prediction with a local copy of the model:\r\n```python\r\ninputs = tokenizer.encode(text, return_tensors=\"pt\").to(device)\r\nlogits = model(inputs).logits\r\n```\r\n\r\nThis is how I loaded the tokenizer:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n'models/tokenizer', add_prefix_space=True)\r\n```\r\n\r\nI am also loading a saved local copy of the model here:\r\n```python\r\nmodel = torch.load(model_name, map_location=torch.device(device))\r\n```\r\n\r\nHope the information provided is enough! Thanks in advance.\r\n ",
"This issue does not seem to occur when I finetune my model again on transformers 4.38.0.\nI guess the solution would be to update the transformers package."
] | 1,708 | 1,708 | null | NONE | null | ### System Info
Getting this error with the latest transformers (4.37.2); it works correctly with transformers 4.35.2.
Simple inference with a fine-tuned DistilBERT model.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformers 4.37.2
2. Perform inference with model https://huggingface.co/distilbert/distilbert-base-uncased
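A workaround sketch (assumption: the checkpoint can still be loaded once on the version that saved it): re-export with `save_pretrained` and reload with `from_pretrained` instead of un-pickling the whole module with `torch.load`, so that version-dependent attributes like `_use_flash_attention_2` are rebuilt:
```python
# sketch: file and directory names are illustrative
import torch

# on the old environment (e.g. transformers 4.35.2): re-export the pickled model
model = torch.load("model.pt", map_location="cpu")
model.save_pretrained("distilbert-finetuned")

# on the new environment (4.37.2+): reload through from_pretrained
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-finetuned")
```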
### Expected behavior
Inference should go through without errors | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29121/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29120/comments | https://api.github.com/repos/huggingface/transformers/issues/29120/events | https://github.com/huggingface/transformers/pull/29120 | 2,143,042,742 | PR_kwDOCUB6oc5nUwcG | 29,120 | Starcoder2 model | {
"login": "jlamypoirier",
"id": 18523627,
"node_id": "MDQ6VXNlcjE4NTIzNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamypoirier",
"html_url": "https://github.com/jlamypoirier",
"followers_url": "https://api.github.com/users/jlamypoirier/followers",
"following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions",
"organizations_url": "https://api.github.com/users/jlamypoirier/orgs",
"repos_url": "https://api.github.com/users/jlamypoirier/repos",
"events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamypoirier/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | The Starcoder2 model, adapted from Mistral. All changes are done through options, so Mistral itself is still supported. Main changes:
* Use layer norm (RMS still available as option)
* Use standard MLP (gated still available as option)
* Add back biases (optional)
* Change (default?) tokenizer class
* Embedding and residual dropout
It does not support absolute embeddings, so it can't support Santacoder or Starcoder.
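For illustration, a minimal sketch of the two MLP variants mentioned above (module and argument names are assumptions for the sketch, not the actual modeling code in this PR):
```python
import torch.nn as nn


class StandardMLP(nn.Module):
    """Plain two-layer MLP with biases, as Starcoder2 uses (sketch)."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.c_fc = nn.Linear(hidden_size, intermediate_size)
        self.c_proj = nn.Linear(intermediate_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return self.c_proj(self.act(self.c_fc(x)))


class GatedMLP(nn.Module):
    """Mistral-style gated MLP, kept available as an option (sketch)."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.down_proj(self.act(self.gate_proj(x)) * self.up_proj(x))
```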
Todo:
* Forward changes from #27931, #29027 (and future changes from Feb. 19)
* Documentation
* Copyright
* Point to starcoder2 checkpoint
* Other minor things (see todos)
@younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29120",
"html_url": "https://github.com/huggingface/transformers/pull/29120",
"diff_url": "https://github.com/huggingface/transformers/pull/29120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29120.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29119/comments | https://api.github.com/repos/huggingface/transformers/issues/29119/events | https://github.com/huggingface/transformers/pull/29119 | 2,143,005,049 | PR_kwDOCUB6oc5nUoNF | 29,119 | Generate: unset GenerationConfig parameters do not raise warning | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
Thank you @fxmarty for raising [this issue](https://github.com/huggingface/transformers/pull/25381#issuecomment-1952527813).
This PR allows users to unset (= set to `None`) unused parameters to ensure `generation_config.validate()` doesn't throw a warning. Previously, this was not possible when a parameter had a non-`None` default.
For instance, the following snippet would throw a warning before this PR:
```py
from transformers import GenerationConfig
generation_config = GenerationConfig()
generation_config.update(temperature=None)
generation_config.validate()
# "... UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`."
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29119",
"html_url": "https://github.com/huggingface/transformers/pull/29119",
"diff_url": "https://github.com/huggingface/transformers/pull/29119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29119.patch",
"merged_at": 1708428871000
} |
https://api.github.com/repos/huggingface/transformers/issues/29118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29118/comments | https://api.github.com/repos/huggingface/transformers/issues/29118/events | https://github.com/huggingface/transformers/pull/29118 | 2,142,996,665 | PR_kwDOCUB6oc5nUmWT | 29,118 | Skipping test_save_load_low_cpu_mem_usage() for all failing models | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hello @amyeroberts,\r\n\r\nI came up this PR to simply ignores all the failing tests for test_save_load_low_cpu_mem_usage(). This change should be safe as it only touches tests.\r\n\r\nThis should help unblock any PRs from being merged, while we work on getting tie_weights() into some of these models with #29024.\r\n\r\nI leave it up to you to decide this is worth merging.\r\n\r\nThanks! 🙏",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@hackyon Thanks for opening this and all of your work on adding this feature to our models! \r\n\r\nWe're going to revert #28948 and then add it back after the next release, which should happen tomorrow. \r\n\r\nUnfortunately, our test suite doesn't seem to be correctly picking up all of the necessary tests. I know you've run a check against all models but we want to be confident that everything is in a good and stable state before the release. \r\n\r\nWe can then combine the work in #29024 with all models having similar changes which address the current review comments. ",
"Sounds good, thanks for taking care of this.\r\n\r\nOnce this rollback goes in, I can come up with a single PR that covers everything all in one. "
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
Skips all failing unit tests for test_save_load_low_cpu_mem_usage(). For some of them, this should be temporary until the correct tie_weights() implementation has been added to the models.
I created this temporary PR just in case it takes longer to make progress with #29024, and we don't want to block other PRs while we wait for feedback.
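For illustration, skipping a test like this typically looks as follows (a hedged sketch; the test class and reason string are placeholders, not the actual diff):
```python
import unittest


class HypotheticalModelTest(unittest.TestCase):
    # Placeholder test class; the real PR decorates the existing
    # test_save_load_low_cpu_mem_usage in each failing model's test file.
    @unittest.skip("Skipped until tie_weights() is added to the model; see #29024.")
    def test_save_load_low_cpu_mem_usage(self):
        ...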
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29118",
"html_url": "https://github.com/huggingface/transformers/pull/29118",
"diff_url": "https://github.com/huggingface/transformers/pull/29118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29118.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29117/comments | https://api.github.com/repos/huggingface/transformers/issues/29117/events | https://github.com/huggingface/transformers/pull/29117 | 2,142,967,049 | PR_kwDOCUB6oc5nUf4p | 29,117 | Move misplaced line | {
"login": "kno10",
"id": 3997899,
"node_id": "MDQ6VXNlcjM5OTc4OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3997899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kno10",
"html_url": "https://github.com/kno10",
"followers_url": "https://api.github.com/users/kno10/followers",
"following_url": "https://api.github.com/users/kno10/following{/other_user}",
"gists_url": "https://api.github.com/users/kno10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kno10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kno10/subscriptions",
"organizations_url": "https://api.github.com/users/kno10/orgs",
"repos_url": "https://api.github.com/users/kno10/repos",
"events_url": "https://api.github.com/users/kno10/events{/privacy}",
"received_events_url": "https://api.github.com/users/kno10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29117). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Move misplaced line, improve code comment.
No functional change: `loss_fct` is not used earlier, and its previous placement did not match the code comment either.
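Illustratively, the pattern after the move looks like this (a hedged sketch with assumed variable names, not the actual diff):
```python
from torch.nn import CrossEntropyLoss


def compute_loss(logits, labels, num_labels):
    # The instantiation now sits right above its only use, and the
    # adjacent comment describes the computation that actually happens.
    loss_fct = CrossEntropyLoss()
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```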
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@ArthurZucker and @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29117/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29117",
"html_url": "https://github.com/huggingface/transformers/pull/29117",
"diff_url": "https://github.com/huggingface/transformers/pull/29117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29117.patch",
"merged_at": 1708392288000
} |
https://api.github.com/repos/huggingface/transformers/issues/29116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29116/comments | https://api.github.com/repos/huggingface/transformers/issues/29116/events | https://github.com/huggingface/transformers/pull/29116 | 2,142,944,502 | PR_kwDOCUB6oc5nUbC6 | 29,116 | Track each row separately for stopping criteria | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29116). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Yep, should be unrelated to stopping criteria but I will check ",
"@gante , I found what was the reason for tests to fail. It was flaky, so I just did an empty commit here. Now it's all green :)\r\n\r\nAlso, I tried to fix the flaky test, the PR for it is [here](https://github.com/huggingface/transformers/pull/29138) . I would love to get you review on that",
"I support this - I'll make sure not to merge #28932 until this is in. After this is merged, I'll rebase and then modify my PR to also return a per-sample vector.",
"@amyeroberts \r\n1. I ran all the slow tests, everything is fine.\r\n2. In this PR the stopping criterias will only give all true or all false. Since the main goal was to adapt to #28932 which can stop some sequences and not others, I believe it's better to add the test there. "
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
Addresses the question raised in #28932. I accidentally messed up the first PR (#29056) that was approved, so this is the second version with the same changes.
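A minimal sketch of the per-row idea, assuming a criterion that reports one boolean per batch row instead of a single scalar (names are illustrative, not the exact implementation):
```python
import torch
from transformers import StoppingCriteria


class MaxLengthPerRow(StoppingCriteria):
    """Sketch: report completion row by row rather than for the whole batch."""

    def __init__(self, max_length: int):
        self.max_length = max_length

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
        # All rows share the same length during decoding, so the flags agree
        # here, but the interface now tracks each sequence separately.
        is_done = input_ids.shape[-1] >= self.max_length
        return torch.full((input_ids.shape[0],), is_done, dtype=torch.bool, device=input_ids.device)
```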
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante, can you please take another look at this? It's the same change you approved earlier. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29116",
"html_url": "https://github.com/huggingface/transformers/pull/29116",
"diff_url": "https://github.com/huggingface/transformers/pull/29116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29116.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29115/comments | https://api.github.com/repos/huggingface/transformers/issues/29115/events | https://github.com/huggingface/transformers/pull/29115 | 2,142,911,771 | PR_kwDOCUB6oc5nUT_w | 29,115 | Switch transformer for sequence classification | {
"login": "jlamprou",
"id": 41962910,
"node_id": "MDQ6VXNlcjQxOTYyOTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/41962910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamprou",
"html_url": "https://github.com/jlamprou",
"followers_url": "https://api.github.com/users/jlamprou/followers",
"following_url": "https://api.github.com/users/jlamprou/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamprou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamprou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamprou/subscriptions",
"organizations_url": "https://api.github.com/users/jlamprou/orgs",
"repos_url": "https://api.github.com/users/jlamprou/repos",
"events_url": "https://api.github.com/users/jlamprou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamprou/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"> Hey! Thanks for contributing. As there are no released checkpoints for sequence classification, we usually try to:\r\n> \r\n> 1. Open an issue with the feature request\r\n> \r\n> 2. If the issue has strong support from the community (usually around 10 likes for example) the add support for it 🤗\r\n> \r\n> \r\n> In this case you are the first user to request this new class! 🤗\r\n\r\nHi, I'm working on Classification checkpoint, it will be available once the training is done.",
"@ArthurZucker @younesbelkada, I would also like your opinion about the usage of Z and Aux Loss on the Sequence Classification task. By my understanding Z loss is designed to improve the training stability of large language models and Mixture-of-Experts (MoE) models and the Aux loss for Multi-task learning and Regularization. ETA-50minutes for trained checkpoint on MNLI using the [run_glue.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py).",
"A trained checkpoint on SST2 [here](https://huggingface.co/glamprou/switch-base-8-sst2) and MNLI [here](https://huggingface.co/glamprou/switch-base-8-mnli)"
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
This adds a sequence classification head to the PyTorch implementation of SwitchTransformers, following the pattern of T5ForSequenceClassification, since Switch Transformers is likewise an encoder-decoder model.
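A minimal sketch of the head structure, following the T5ForSequenceClassification pattern of pooling the decoder output at the end-of-sequence token and projecting it (the class below is illustrative, not the exact code in this PR):
```python
import torch
import torch.nn as nn


class SketchClassificationHead(nn.Module):
    """Head for sentence-level classification tasks (illustrative sketch)."""

    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(p=dropout)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.dropout(hidden_states)
        hidden_states = torch.tanh(self.dense(hidden_states))
        hidden_states = self.dropout(hidden_states)
        return self.out_proj(hidden_states)
```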
# NOTE:
- [ ] Failing tests, because we don't have a checkpoint trained on classification.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Hey @ArthurZucker and @younesbelkada, I would greatly appreciate a review on this when you have a chance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29115",
"html_url": "https://github.com/huggingface/transformers/pull/29115",
"diff_url": "https://github.com/huggingface/transformers/pull/29115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29115.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29114/comments | https://api.github.com/repos/huggingface/transformers/issues/29114/events | https://github.com/huggingface/transformers/pull/29114 | 2,142,818,811 | PR_kwDOCUB6oc5nT_rS | 29,114 | Make torch.compile compilation >2x faster when using static cache + `generate` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29114). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@ArthurZucker @gante @LysandreJik This PR fixes many issues with the current `torch.compile` + static cache + generate implementation, as follow.\r\n\r\n## 1. Always keep the same stride for inputs in the decode phase\r\n\r\n`generate` apparently does not use directly the [`next_tokens`](https://github.com/huggingface/transformers/blob/1c81132e80478e278681686fe44dfec793d5dee9/src/transformers/generation/utils.py#L2440) variable as the next `input_ids`. Instead, the `next_tokens` are concatenated with previous tokens, and then [sliced](https://github.com/huggingface/transformers/blob/1c81132e80478e278681686fe44dfec793d5dee9/src/transformers/models/llama/modeling_llama.py#L1266), which results in the input tensors having different stride while having the same shape:\r\n\r\n```\r\n------------- loop forward in generate 0\r\nmodel_inputs input ids shape torch.Size([2, 7])\r\nmodel_inputs input ids stride (7, 1)\r\nmodel_inputs position_ids shape torch.Size([2, 7])\r\nmodel_inputs position_ids stride (7, 1)\r\n------------- loop forward in generate 1\r\nmodel_inputs input ids shape torch.Size([2, 1])\r\nmodel_inputs input ids stride (8, 1)\r\nmodel_inputs position_ids shape torch.Size([2, 1])\r\nmodel_inputs position_ids stride (8, 1)\r\n------------- loop forward in generate 2\r\nmodel_inputs input ids shape torch.Size([2, 1])\r\nmodel_inputs input ids stride (9, 1)\r\nmodel_inputs position_ids shape torch.Size([2, 1])\r\nmodel_inputs position_ids stride (9, 1)\r\n------------- loop forward in generate 3\r\nmodel_inputs input ids shape torch.Size([2, 1])\r\nmodel_inputs input ids stride (10, 1)\r\nmodel_inputs position_ids shape torch.Size([2, 1])\r\nmodel_inputs position_ids stride (10, 1)\r\n------------- loop forward in generate 4\r\netc.\r\n```\r\n\r\nThis is bad because with torch.compile there are guards on the stride of the inputs, and thus recompilation is triggered in the decode phase while this is really not necessary.\r\n\r\n```\r\nV0220 17:27:08.283680 140705660285312 torch/_dynamo/guards.py:1381] Recompiling function forward in /home/felix/transformers/src/transformers/models/llama/modeling_llama.py:1127\r\nV0220 17:27:08.283680 140705660285312 torch/_dynamo/guards.py:1381] triggered by the following guard failure(s):\r\nV0220 17:27:08.283680 140705660285312 torch/_dynamo/guards.py:1381] - tensor 'L['input_ids']' stride mismatch at index 0. expected 7, actual 8\r\n\r\n```\r\n&\r\n```\r\nV0220 17:27:12.636874 140705660285312 torch/_dynamo/guards.py:1381] Recompiling function torch_dynamo_resume_in_forward_at_989 in /home/felix/transformers/src/transformers/models/llama/modeling_llama.py:989\r\nV0220 17:27:12.636874 140705660285312 torch/_dynamo/guards.py:1381] triggered by the following guard failure(s):\r\nV0220 17:27:12.636874 140705660285312 torch/_dynamo/guards.py:1381] - tensor 'L['position_ids']' stride mismatch at index 0. expected 7, actual 8\r\n```\r\n\r\n## 2. Do not compile `_update_causal_mask`\r\n\r\n[`_update_causal_mask`](https://github.com/huggingface/transformers/blob/1c81132e80478e278681686fe44dfec793d5dee9/src/transformers/models/llama/modeling_llama.py#L1081) uses the input `attention_mask` length in its code. 
I believe this results in an FX `placehoder` being a SymInt,\r\n\r\n```\r\nV0220 11:30:30.341809 140023118176640 torch/_dynamo/output_graph.py:1084] [2/1] ===== __compiled_fn_12 =====\r\nV0220 11:30:30.341809 140023118176640 torch/_dynamo/output_graph.py:1084] [2/1] /home/felix/miniconda3/envs/fx/lib/python3.9/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):\r\nV0220 11:30:30.341809 140023118176640 torch/_dynamo/output_graph.py:1084] [2/1] def forward(self, s0 : torch.SymInt, L_attention_mask_ : torch.Tensor):\r\nV0220 11:30:30.341809 140023118176640 torch/_dynamo/output_graph.py:1084] [2/1] l_attention_mask_ = L_attention_mask_\r\n```\r\n\r\nwhich retriggers CUDAGraph capture for every decode step:\r\n```\r\nI0220 11:30:57.881897 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 13\r\nI0220 11:30:57.902060 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 14\r\nI0220 11:30:57.922157 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 15\r\nI0220 11:30:57.942382 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 16\r\nI0220 11:30:57.962995 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 17\r\nI0220 11:30:57.983414 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 18\r\nI0220 11:30:58.004108 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 19\r\nI0220 11:30:58.024602 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 20\r\nI0220 11:30:58.045034 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 21\r\nI0220 11:30:58.065743 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 22\r\nI0220 11:30:58.086160 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 23\r\nI0220 11:30:58.106957 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 24\r\nI0220 11:30:58.127666 140023118176640 torch/_inductor/cudagraph_trees.py:375] recording cudagraph tree for symint key 25\r\n```\r\n\r\nThis is very slow. We avoid capturing `_update_causal_mask` with `@torch.compiler.disable`, which fixes the issue (no more cuda graph capture after the very first decode step).\r\n\r\n## 3. Avoid using a stateful int `seen_tokens` (PyTorch bug)\r\n\r\nOn main, `StaticCache`'s [`seen_tokens`](https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py#L394) is bugged **only when using torch.compile** prior. 
Convince yourself with:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig\r\nimport torch\r\nfrom transformers.cache_utils import StaticCache\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n \"NousResearch/Llama-2-7b-chat-hf\", padding_side=\"left\", pad_token=\"<s>\"\r\n)\r\n\r\nwith torch.device(\"cuda\"):\r\n model = AutoModelForCausalLM.from_pretrained(\r\n \"NousResearch/Llama-2-7b-chat-hf\",\r\n torch_dtype=torch.float16,\r\n attn_implementation=\"sdpa\",\r\n )\r\n\r\ninputs = tokenizer(\r\n [\"I would\", \"Today I am in Paris and\"], padding=True, return_tensors=\"pt\"\r\n).to(model.device)\r\n\r\nnew_tokens = 10\r\ngen_config = GenerationConfig(\r\n max_new_tokens=new_tokens,\r\n min_new_tokens=new_tokens,\r\n use_cache=True,\r\n pad_token_id=tokenizer.pad_token_id,\r\n num_beams=1,\r\n do_sample=False,\r\n eos_token_id=None, # This is required for min_new_tokens to actually have an effect.\r\n)\r\nmodel.generation_config.eos_token_id = None # greedy_search falls back on this eos_token_id that we need to set to None as well for min_new_tokens to have an effect.\r\n\r\ngen_out = model.generate(**inputs, generation_config=gen_config)\r\n\r\ndecoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)\r\n\r\nprint(\"decoded\", decoded)\r\n\r\nprint(\"compiling...\")\r\n\r\nmodel.forward = torch.compile(model.forward, mode=\"reduce-overhead\")\r\nprint(\"Finished compile call\")\r\n\r\n# warmup\r\ngen_out = model.generate(**inputs, generation_config=gen_config, cache_implementation=\"static\")\r\n\r\nprint(\"\\n\\n\\n\\n\\n\\n----- second call\")\r\ngen_out = model.generate(**inputs, generation_config=gen_config, cache_implementation=\"static\")\r\n\r\ndecoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)\r\n\r\nprint(\"decoded static\", decoded)\r\n```\r\n\r\nwhich yields\r\n\r\n![image](https://github.com/huggingface/transformers/assets/9808326/0fc8f657-6ae8-4e41-8fac-dc7a16fa9811)\r\n\r\nUsing a `torch.Tensor` updated in-place instead of `int` fixes the bug, however we then hit what I believe to be a torch.compile bug where subclasses are added after the `torch.compile` call (in the `_setup_cache`). Even with the above fix, there is still a bug where `seen_tokens` is not properly updated. By making sure `_setup_cache` is called **BEFORE** `torch.compile`, this issue disappears. However this required an API change, so disreguarding this approach.\r\n\r\nInstead, **remove `seen_tokens` altogether from StaticCache.**\r\n\r\n## Results\r\n\r\nOn `main` (ee3af60be0d21044692211d97dfd858aa3e4b418):\r\n\r\n```\r\n-------------- STATIC CACHE\r\ncompiling...\r\ntorch.compile call: 703.207 ms\r\n/home/felix/miniconda3/envs/fx/lib/python3.9/site-packages/torch/_inductor/compile_fx.py:148: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. 
Consider setting `torch.set_float32_matmul_precision('high')` for better performance.\r\n warnings.warn(\r\n\r\n- 0-th `generate` call latency per token (new_tokens=20): 10591.283 ms\r\n\r\n- 1-th `generate` call latency per token (new_tokens=20): 3284.225 ms\r\n\r\n- 2-th `generate` call latency per token (new_tokens=20): 132.769 ms\r\n\r\n- 3-th `generate` call latency per token (new_tokens=20): 11.211 ms\r\n\r\n- 4-th `generate` call latency per token (new_tokens=20): 11.160 ms\r\ndecoded static ['I would like to know how to get a copy of my medical records from my primary care physician.\\n', 'Today I am in Paris and I am feeling very grateful for this opportunity to explore this beautiful city. I have always wanted to visit']\r\n```\r\n\r\nOn this branch (0c03b7d45d7948363a6ece0e921b21f8c4e78286):\r\n```\r\n-------------- STATIC CACHE\r\ncompiling...\r\ntorch.compile call: 729.943 ms\r\n/home/felix/miniconda3/envs/fx/lib/python3.9/site-packages/torch/_inductor/compile_fx.py:148: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.\r\n warnings.warn(\r\n\r\n- 0-th `generate` call latency per token (new_tokens=20): 4121.241 ms\r\n\r\n- 1-th `generate` call latency per token (new_tokens=20): 239.070 ms\r\n\r\n- 2-th `generate` call latency per token (new_tokens=20): 11.592 ms\r\n\r\n- 3-th `generate` call latency per token (new_tokens=20): 11.602 ms\r\n\r\n- 4-th `generate` call latency per token (new_tokens=20): 11.618 ms\r\ndecoded static ['I would like to know how to get a copy of my medical records from my primary care physician.\\n', 'Today I am in Paris and I am feeling very grateful for this opportunity to explore this beautiful city. I have always wanted to visit']\r\n```",
"```python \r\ntorch._dynamo.exc.Unsupported: 'inline in skipfiles: LlamaModel._update_causal_mask | _fn /home/arthur/miniconda3/envs/py39/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py, skipped according skipfiles.SKIP_DIRS'\r\n```\r\nfailing on torch2.2 let's wait a tad bit (that was full graph!)",
"Leaving this open for now, as we would like to avoid `@torch.compiler.disable` and keep compatibility with `fullgraph=True`.\r\n\r\nThere is likely a bug in PyTorch where CUDA graphs are rerecorded while they should not, so we can't simply remove `@torch.compiler.disable`.",
"@fxmarty btw, I am [working on PR](https://github.com/huggingface/transformers/pull/29005) that has some of the changes that you added here (such as resetting the cache after generate), we might get merge conflicts :) ",
"Thank you @gante, awesome! Yes, I think there needs to be an alignment at some point between all different archs, it's getting a bit complex with all the different approaches.\r\n\r\nAt the end of the day after discussing with @ArthurZucker, merging but not cherry picking in the release. I removed the `@torch.compiler.disable` decorator for the reason above.\r\n\r\nI believe there is a bug in PyTorch where cuda graphs are somehow rerecorded in the second pass.\r\n```\r\n- 0-th `generate` call latency per token (new_tokens=100): 744.209 ms\r\n\r\n- 1-th `generate` call latency per token (new_tokens=100): 1773.975 ms\r\n\r\n- 2-th `generate` call latency per token (new_tokens=100): 11.069 ms\r\n\r\n- 3-th `generate` call latency per token (new_tokens=100): 11.042 ms\r\n\r\n- 4-th `generate` call latency per token (new_tokens=100): 11.035 ms\r\ndecoded static [\"I would like to know how to get a copy of my medical records from my primary care physician.\\nI would like to know how to get a copy of my medical records from my primary care physician.\\nGetting a copy of your medical records from your primary care physician can be a straightforward process, but it's important to follow the proper steps to ensure you receive a complete and accurate copy of your records. Here are the general steps you can take:\\n\\n1. Contact your\", \"Today I am in Paris and I am feeling very grateful for this opportunity to explore this beautiful city. I have always wanted to visit Paris and now I am finally here, and it is even more beautiful than I imagined. The Eiffel Tower is stunning, the Louvre is incredible, and the food is delicious. I am soaking up every moment and making the most of my time here. I can't wait to see what the rest of the trip has in store for me. #grateful\"]\r\n```",
"^ reference for this https://github.com/pytorch/pytorch/issues/120309"
] | 1,708 | 1,708 | null | COLLABORATOR | null | This PR improves the compilation time of the Llama model with `torch.compile` when using `generate`, avoiding recompilation, recaptures of CUDA graphs, and a bug in PyTorch w.r.t. indexing.
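A short usage sketch of the path this PR targets (compile the forward pass once, then call `generate` with the static cache); the model id and token counts are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
with torch.device("cuda"):
    model = AutoModelForCausalLM.from_pretrained(
        "NousResearch/Llama-2-7b-chat-hf", torch_dtype=torch.float16
    )

# Compile once; with this PR the decode phase should no longer recompile on
# input stride changes or re-record CUDA graphs at every decoding step.
model.forward = torch.compile(model.forward, mode="reduce-overhead")

inputs = tokenizer("Today I am in Paris and", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")
print(tokenizer.decode(out[0], skip_special_tokens=True))
``` | {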
"url": "https://api.github.com/repos/huggingface/transformers/issues/29114/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/29114/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29114",
"html_url": "https://github.com/huggingface/transformers/pull/29114",
"diff_url": "https://github.com/huggingface/transformers/pull/29114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29114.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29113/comments | https://api.github.com/repos/huggingface/transformers/issues/29113/events | https://github.com/huggingface/transformers/issues/29113 | 2,142,798,975 | I_kwDOCUB6oc5_uIR_ | 29,113 | ValueError: lags cannot go further than history length, found lag 37 while history length is only 16 | {
"login": "nikhilajoshy",
"id": 37141775,
"node_id": "MDQ6VXNlcjM3MTQxNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/37141775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhilajoshy",
"html_url": "https://github.com/nikhilajoshy",
"followers_url": "https://api.github.com/users/nikhilajoshy/followers",
"following_url": "https://api.github.com/users/nikhilajoshy/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhilajoshy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhilajoshy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhilajoshy/subscriptions",
"organizations_url": "https://api.github.com/users/nikhilajoshy/orgs",
"repos_url": "https://api.github.com/users/nikhilajoshy/repos",
"events_url": "https://api.github.com/users/nikhilajoshy/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhilajoshy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @kashif @NielsRogge ",
"@nikhilajoshy can you kindly paste in some more verbose error?",
"@kashif \r\n```\r\noutputs = model(\r\n ^^^^^^\r\n File \"/home/nikhila/encdec_venv/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/nikhila/encdec_venv/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/nikhila/encdec_venv/lib/python3.11/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py\", line 1384, in forward\r\n transformer_inputs, loc, scale, static_feat = self.create_network_inputs(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/nikhila/encdec_venv/lib/python3.11/site-packages/transformers/models/time_series_transformer/modeling_time_series_transformer.py\", line 1304, in create_network_inputs\r\n lagged_sequence = self.get_lagged_subsequences(sequence=inputs, subsequences_length=subsequences_length)\r\n```"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import TimeSeriesTransformerModel

model = TimeSeriesTransformerModel.from_pretrained(
    "models/time-series-transformer-tourism-monthly"
)
# past_values & co. are tensors prepared beforehand; their shapes are
# listed under "Expected behavior" below.
outputs = model(
    past_values=past_values,
    past_time_features=past_time_features,
    past_observed_mask=past_observed_mask,
    # static_categorical_features=static_categorical_features,
    # static_real_features=static_real_features,
    future_values=future_values,
    future_time_features=future_time_features,
)
```
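For context, the history length in the error is derived from the model config: roughly `context_length + max(config.lags_sequence)` past time steps are required, so `past_values` must be at least that long. A hedged sketch of checking this (attribute names follow TimeSeriesTransformerConfig; the local checkpoint path is taken from the repro above):
```python
# Hedged sketch: inspect how much history the checkpoint expects.
from transformers import TimeSeriesTransformerConfig

config = TimeSeriesTransformerConfig.from_pretrained(
    "models/time-series-transformer-tourism-monthly"
)
required_past_length = config.context_length + max(config.lags_sequence)
print(config.context_length, max(config.lags_sequence), required_past_length)
```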
### Expected behavior
The sizes of the arguments passed to the model are:
- `past_values`: torch.Size([502, 8])
- `past_time_features`: torch.Size([502, 8, 1])
- `past_observed_mask`: torch.Size([502, 8])
- `future_values`: torch.Size([502, 8])
- `future_time_features`: torch.Size([502, 8, 1]) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29113/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29112/comments | https://api.github.com/repos/huggingface/transformers/issues/29112/events | https://github.com/huggingface/transformers/pull/29112 | 2,142,709,791 | PR_kwDOCUB6oc5nTn1C | 29,112 | Remove static pretrained maps from the library's internals | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Before this ungodly PR gets merged, I need to check that every checkpoint referenced here behaves the same once its pretrained map has been removed. \r\n\r\nI'll link the PRs open as a result in this comment.\r\n\r\n🟣: merged\r\n🟢: open\r\n🔴: closed\r\n🟡: not open yet\r\n\r\n## Repos with PRs opened\r\n\r\n### ALBERT\r\n🟣 [albert/albert-base-v1#2](https://huggingface.co/albert/albert-base-v1/discussions/2)\r\n🟣 [albert/albert-base-v2#6](https://huggingface.co/albert/albert-base-v2/discussions/6)\r\n🟣 [albert/albert-large-v1#2](https://huggingface.co/albert/albert-large-v1/discussions/2)\r\n🟣 [albert/albert-large-v2#3](https://huggingface.co/albert/albert-large-v2/discussions/3)\r\n🟣 [albert/albert-xlarge-v1#2](https://huggingface.co/albert/albert-xlarge-v1/discussions/2)\r\n🟣 [albert/albert-xlarge-v2#2](https://huggingface.co/albert/albert-xlarge-v2/discussions/2)\r\n🟣 [albert/albert-xxlarge-v1#3](https://huggingface.co/albert/albert-xxlarge-v1/discussions/3)\r\n🟣 [albert/albert-xxlarge-v2#3](https://huggingface.co/albert/albert-xxlarge-v2/discussions/3)\r\n\r\n### BERT\r\n\r\n🟣 [google-bert/bert-base-cased#10](https://huggingface.co/google-bert/bert-base-cased/discussions/10)\r\n🟣 [google-bert/bert-base-cased-finetuned-mrpc#2](https://huggingface.co/google-bert/bert-base-cased-finetuned-mrpc/discussions/2)\r\n🟣 [google-bert/bert-base-chinese#16](https://huggingface.co/google-bert/bert-base-chinese/discussions/16)\r\n🟣 [google-bert/bert-base-german-cased#5](https://huggingface.co/google-bert/bert-base-german-cased/discussions/5)\r\n🟣 [google-bert/bert-base-german-dbmdz-cased#4](https://huggingface.co/google-bert/bert-base-german-dbmdz-cased/discussions/4)\r\n🟣 [google-bert/bert-base-german-dbmdz-uncased#4](https://huggingface.co/google-bert/bert-base-german-dbmdz-uncased/discussions/4)\r\n🟣 [google-bert/bert-base-multilingual-cased#5](https://huggingface.co/google-bert/bert-base-multilingual-cased/discussions/5)\r\n🟣 [google-bert/bert-base-multilingual-uncased#5](https://huggingface.co/google-bert/bert-base-multilingual-uncased/discussions/5)\r\n🟣 [google-bert/bert-base-uncased#62](https://huggingface.co/google-bert/bert-base-uncased/discussions/62)\r\n🟣 [google-bert/bert-large-cased#3](https://huggingface.co/google-bert/bert-large-cased/discussions/3)\r\n🟣 [google-bert/bert-large-cased-whole-word-masking#2](https://huggingface.co/google-bert/bert-large-cased-whole-word-masking/discussions/2)\r\n🟣 [google-bert/bert-large-cased-whole-word-masking-finetuned-squad#2](https://huggingface.co/google-bert/bert-large-cased-whole-word-masking-finetuned-squad/discussions/2)\r\n🟣 [google-bert/bert-large-uncased#3](https://huggingface.co/google-bert/bert-large-uncased/discussions/3)\r\n🟣 [google-bert/bert-large-uncased-whole-word-masking#3](https://huggingface.co/google-bert/bert-large-uncased-whole-word-masking/discussions/3)\r\n🟣 [google-bert/bert-large-uncased-whole-word-masking-finetuned-squad#4](https://huggingface.co/google-bert/bert-large-uncased-whole-word-masking-finetuned-squad/discussions/4)\r\n\r\n### CamemBERT\r\n\r\n🟢 [almanach/camembert-base#7](https://huggingface.co/almanach/camembert-base/discussions/7)\r\n\r\n### CTRL\r\n\r\n🟣 [Salesforce/ctrl#4](https://huggingface.co/Salesforce/ctrl/discussions/4)\r\n\r\n### DistilBERT\r\n\r\n🟣 [distilbert/distilgpt2#11](https://huggingface.co/distilbert/distilgpt2/discussions/11)\r\n🟣 [distilbert/distilroberta-base#4](https://huggingface.co/distilbert/distilroberta-base/discussions/4)\r\n\r\n### GPT-2\r\n\r\n🟣 
[openai-community/gpt2#80](https://huggingface.co/openai-community/gpt2/discussions/80)\r\n🟣 [openai-community/gpt2-large#7](https://huggingface.co/openai-community/gpt2-large/discussions/7)\r\n🟣 [openai-community/gpt2-medium#13](https://huggingface.co/openai-community/gpt2-medium/discussions/13)\r\n🟣 [openai-community/gpt2-xl#9](https://huggingface.co/openai-community/gpt2-xl/discussions/9)\r\n\r\n### OpenAI GPT\r\n\r\n🟣 [openai-community/openai-gpt#6](https://huggingface.co/openai-community/openai-gpt/discussions/6)\r\n\r\n### RoBERTa\r\n\r\n🟣 [FacebookAI/roberta-base#12](https://huggingface.co/FacebookAI/roberta-base/discussions/12)\r\n🟣 [openai-community/roberta-base-openai-detector#17](https://huggingface.co/openai-community/roberta-base-openai-detector/discussions/17)\r\n🟣 [FacebookAI/roberta-large#6](https://huggingface.co/FacebookAI/roberta-large/discussions/6)\r\n🟣 [FacebookAI/roberta-large-mnli#8](https://huggingface.co/FacebookAI/roberta-large-mnli/discussions/8)\r\n🟣 [openai-community/roberta-large-openai-detector#5](https://huggingface.co/openai-community/roberta-large-openai-detector/discussions/5)\r\n🟣 [FacebookAI/xlm-roberta-base#27](https://huggingface.co/FacebookAI/xlm-roberta-base/discussions/27)\r\n🟣 [FacebookAI/xlm-roberta-large#17](https://huggingface.co/FacebookAI/xlm-roberta-large/discussions/17)\r\n🟣 [FacebookAI/xlm-roberta-large-finetuned-conll02-dutch#3](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-dutch/discussions/3)\r\n🟣 [FacebookAI/xlm-roberta-large-finetuned-conll02-spanish#3](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-spanish/discussions/3)\r\n🟣 [FacebookAI/xlm-roberta-large-finetuned-conll03-english#11](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-english/discussions/11)\r\n🟣 [FacebookAI/xlm-roberta-large-finetuned-conll03-german#4](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-german/discussions/4)\r\n\r\n## Non-canonical models\r\n\r\n🟢 [facebook/m2m100_418M#16](https://huggingface.co/facebook/m2m100_418M/discussions/16)\r\n🟢 [openai/clip-vit-base-patch32#13](https://huggingface.co/openai/clip-vit-base-patch32/discussions/13)\r\n🟢 [google/bigbird-roberta-large#2](https://huggingface.co/google/bigbird-roberta-large/discussions/2)\r\n🟢 [google/bigbird-base-trivia-itc#2](https://huggingface.co/google/bigbird-base-trivia-itc/discussions/2)\r\n🟢 [google/rembert#2](https://huggingface.co/google/rembert/discussions/2)\r\n🟢 [YituTech/conv-bert-base#4](https://huggingface.co/YituTech/conv-bert-base/discussions/4)\r\n🟢 [YituTech/conv-bert-medium-small#2](https://huggingface.co/YituTech/conv-bert-medium-small/discussions/2)\r\n🟢 [YituTech/conv-bert-small#2](https://huggingface.co/YituTech/conv-bert-small/discussions/2)\r\n🔴 [facebook/wav2vec2-lv-60-espeak-cv-ft#5](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft/discussions/5)\r\n🟢 [facebook/blenderbot_small-90M#5](https://huggingface.co/facebook/blenderbot_small-90M/discussions/5)\r\n🟢 [funnel-transformer/small#2](https://huggingface.co/funnel-transformer/small/discussions/2)\r\n🟢 [funnel-transformer/small-base#2](https://huggingface.co/funnel-transformer/small-base/discussions/2)\r\n🟢 [funnel-transformer/medium#2](https://huggingface.co/funnel-transformer/medium/discussions/2)\r\n🟢 [funnel-transformer/medium-base#1](https://huggingface.co/funnel-transformer/medium-base/discussions/1)\r\n🟢 [funnel-transformer/intermediate#2](https://huggingface.co/funnel-transformer/intermediate/discussions/2)\r\n🟢 
[funnel-transformer/intermediate-base#2](https://huggingface.co/funnel-transformer/intermediate-base/discussions/2)\r\n🟢 [funnel-transformer/large#3](https://huggingface.co/funnel-transformer/large/discussions/3)\r\n🟢 [funnel-transformer/large-base#1](https://huggingface.co/funnel-transformer/large-base/discussions/1)\r\n🟢 [funnel-transformer/xlarge#2](https://huggingface.co/funnel-transformer/xlarge/discussions/2)\r\n🟢 [funnel-transformer/xlarge-base#2](https://huggingface.co/funnel-transformer/xlarge-base/discussions/2)\r\n🟡 [flaubert/flaubert_small_cased#2](https://huggingface.co/flaubert/flaubert_small_cased/discussions/2)\r\n🟡 [flaubert/flaubert_base_uncased#2](https://huggingface.co/flaubert/flaubert_base_uncased/discussions/2)\r\n🟡 [flaubert/flaubert_base_cased#2](https://huggingface.co/flaubert/flaubert_base_cased/discussions/2)\r\n🟡 [flaubert/flaubert_large_cased#2](https://huggingface.co/flaubert/flaubert_large_cased/discussions/2)\r\n🟢 [google/realm-cc-news-pretrained-embedder#1](https://huggingface.co/google/realm-cc-news-pretrained-embedder/discussions/1)\r\n🟢 [google/realm-cc-news-pretrained-encoder#1](https://huggingface.co/google/realm-cc-news-pretrained-encoder/discussions/1)\r\n🟢 [google/realm-cc-news-pretrained-scorer#1](https://huggingface.co/google/realm-cc-news-pretrained-scorer/discussions/1)\r\n🟢 [google/realm-cc-news-pretrained-openqa#1](https://huggingface.co/google/realm-cc-news-pretrained-openqa/discussions/1)\r\n🟢 [google/realm-orqa-nq-openqa#1](https://huggingface.co/google/realm-orqa-nq-openqa/discussions/1)\r\n🟢 [google/realm-orqa-nq-reader#1](https://huggingface.co/google/realm-orqa-nq-reader/discussions/1)\r\n🟢 [google/realm-orqa-wq-openqa#1](https://huggingface.co/google/realm-orqa-wq-openqa/discussions/1)\r\n🟢 [google/realm-orqa-wq-reader#1](https://huggingface.co/google/realm-orqa-wq-reader/discussions/1)\r\n🟢 [google/fnet-base#1](https://huggingface.co/google/fnet-base/discussions/1)\r\n🟢 [google/fnet-large#1](https://huggingface.co/google/fnet-large/discussions/1)\r\n🟢 [microsoft/mpnet-base#4](https://huggingface.co/microsoft/mpnet-base/discussions/4)\r\n🟢 [google/reformer-crime-and-punishment#2](https://huggingface.co/google/reformer-crime-and-punishment/discussions/2)\r\n🟢 [facebook/s2t-wav2vec2-large-en-de#3](https://huggingface.co/facebook/s2t-wav2vec2-large-en-de/discussions/3)\r\n🟢 [allenai/longformer-base-4096#6](https://huggingface.co/allenai/longformer-base-4096/discussions/6)\r\n🟢 [allenai/longformer-large-4096#3](https://huggingface.co/allenai/longformer-large-4096/discussions/3)\r\n🟢 [allenai/longformer-large-4096-finetuned-triviaqa#4](https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa/discussions/4)\r\n🟢 [allenai/longformer-base-4096-extra.pos.embd.only#2](https://huggingface.co/allenai/longformer-base-4096-extra.pos.embd.only/discussions/2)\r\n🟢 [allenai/longformer-large-4096-extra.pos.embd.only#2](https://huggingface.co/allenai/longformer-large-4096-extra.pos.embd.only/discussions/2)\r\n🟢 [cl-tohoku/bert-base-japanese#2](https://huggingface.co/tohoku-nlp/bert-base-japanese/discussions/2)\r\n🟢 [cl-tohoku/bert-base-japanese-whole-word-masking#3](https://huggingface.co/tohoku-nlp/bert-base-japanese-whole-word-masking/discussions/3)\r\n🟢 [cl-tohoku/bert-base-japanese-char#2](https://huggingface.co/tohoku-nlp/bert-base-japanese-char/discussions/2)\r\n🟢 [cl-tohoku/bert-base-japanese-char-whole-word-masking#2](https://huggingface.co/tohoku-nlp/bert-base-japanese-char-whole-word-masking/discussions/2)\r\n🟢 
[google/electra-small-generator#4](https://huggingface.co/google/electra-small-generator/discussions/4)\r\n🟢 [google/electra-base-generator#2](https://huggingface.co/google/electra-base-generator/discussions/2)\r\n🟢 [google/electra-large-generator#2](https://huggingface.co/google/electra-large-generator/discussions/2)\r\n🟢 [google/electra-small-discriminator#1](https://huggingface.co/google/electra-small-discriminator/discussions/1)\r\n🟢 [google/electra-base-discriminator#3](https://huggingface.co/google/electra-base-discriminator/discussions/3)\r\n🟢 [google/electra-large-discriminator#1](https://huggingface.co/google/electra-large-discriminator/discussions/1)\r\n🟢 [microsoft/layoutlmv2-base-uncased#5](https://huggingface.co/microsoft/layoutlmv2-base-uncased/discussions/5)\r\n🟢 [microsoft/layoutlmv2-large-uncased#3](https://huggingface.co/microsoft/layoutlmv2-large-uncased/discussions/3)\r\n🟢 [microsoft/deberta-v2-xlarge#3](https://huggingface.co/microsoft/deberta-v2-xlarge/discussions/3)\r\n🟢 [microsoft/deberta-v2-xxlarge#4](https://huggingface.co/microsoft/deberta-v2-xxlarge/discussions/4)\r\n🟣 [microsoft/deberta-v2-xlarge-mnli#2](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli/discussions/2)\r\n🟢 [microsoft/deberta-v2-xxlarge-mnli#2](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/discussions/2)\r\n🟢 [vinai/bartpho-syllable#3](https://huggingface.co/vinai/bartpho-syllable/discussions/3)\r\n🟢 [Helsinki-NLP/opus-mt-en-de#6](https://huggingface.co/Helsinki-NLP/opus-mt-en-de/discussions/6)\r\n🟢 [facebook/bart-base#4](https://huggingface.co/facebook/bart-base/discussions/4)\r\n🟢 [facebook/bart-large-cnn#71](https://huggingface.co/facebook/bart-large-cnn/discussions/71)\r\n🟢 [yjernite/bart_eli5#2](https://huggingface.co/yjernite/bart_eli5/discussions/2)\r\n🟢 [microsoft/layoutlm-base-uncased#3](https://huggingface.co/microsoft/layoutlm-base-uncased/discussions/3)\r\n🟢 [microsoft/layoutlm-large-uncased#1](https://huggingface.co/microsoft/layoutlm-large-uncased/discussions/1)\r\n🟢 [junnyu/roformer_chinese_char_small#1](https://huggingface.co/junnyu/roformer_chinese_char_small/discussions/1)\r\n🟢 [junnyu/roformer_chinese_char_base#1](https://huggingface.co/junnyu/roformer_chinese_char_base/discussions/1)\r\n🟢 [junnyu/roformer_small_discriminator#2](https://huggingface.co/junnyu/roformer_small_discriminator/discussions/2)\r\n🟢 [junnyu/roformer_small_generator#2](https://huggingface.co/junnyu/roformer_small_generator/discussions/2)\r\n🟢 [uclanlp/plbart-base#6](https://huggingface.co/uclanlp/plbart-base/discussions/6)\r\n🟢 [uclanlp/plbart-c-cpp-defect-detection#4](https://huggingface.co/uclanlp/plbart-c-cpp-defect-detection/discussions/4)\r\n🟢 [uclanlp/plbart-cs-java#1](https://huggingface.co/uclanlp/plbart-cs-java/discussions/1)\r\n🟢 [uclanlp/plbart-en_XX-java#2](https://huggingface.co/uclanlp/plbart-en_XX-java/discussions/2)\r\n🟢 [uclanlp/plbart-go-en_XX#1](https://huggingface.co/uclanlp/plbart-go-en_XX/discussions/1)\r\n🟢 [uclanlp/plbart-java-clone-detection#2](https://huggingface.co/uclanlp/plbart-java-clone-detection/discussions/2)\r\n🟢 [uclanlp/plbart-java-cs#3](https://huggingface.co/uclanlp/plbart-java-cs/discussions/3)\r\n🟢 [uclanlp/plbart-java-en_XX#2](https://huggingface.co/uclanlp/plbart-java-en_XX/discussions/2)\r\n🟢 [uclanlp/plbart-javascript-en_XX#2](https://huggingface.co/uclanlp/plbart-javascript-en_XX/discussions/2)\r\n🟢 [uclanlp/plbart-php-en_XX#2](https://huggingface.co/uclanlp/plbart-php-en_XX/discussions/2)\r\n🟢 
[uclanlp/plbart-python-en_XX#2](https://huggingface.co/uclanlp/plbart-python-en_XX/discussions/2)\r\n🟢 [uclanlp/plbart-refine-java-medium#1](https://huggingface.co/uclanlp/plbart-refine-java-medium/discussions/1)\r\n🟢 [uclanlp/plbart-refine-java-small#2](https://huggingface.co/uclanlp/plbart-refine-java-small/discussions/2)\r\n🟢 [uclanlp/plbart-ruby-en_XX#1](https://huggingface.co/uclanlp/plbart-ruby-en_XX/discussions/1)\r\n🟢 [openbmb/cpm-ant-10b#2](https://huggingface.co/openbmb/cpm-ant-10b/discussions/2)\r\n🟢 [microsoft/deberta-base#5](https://huggingface.co/microsoft/deberta-base/discussions/5)\r\n🟢 [microsoft/deberta-large#4](https://huggingface.co/microsoft/deberta-large/discussions/4)\r\n🟢 [microsoft/deberta-xlarge#2](https://huggingface.co/microsoft/deberta-xlarge/discussions/2)\r\n🟢 [microsoft/deberta-base-mnli#1](https://huggingface.co/microsoft/deberta-base-mnli/discussions/1)\r\n🟢 [microsoft/deberta-large-mnli#1](https://huggingface.co/microsoft/deberta-large-mnli/discussions/1)\r\n🟢 [microsoft/deberta-xlarge-mnli#5](https://huggingface.co/microsoft/deberta-xlarge-mnli/discussions/5)\r\n🟢 [vinai/bertweet-base#5](https://huggingface.co/vinai/bertweet-base/discussions/5)\r\n🟢 [AI-Sweden-Models/gpt-sw3-126m#5](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/discussions/5)\r\n🟢 [AI-Sweden-Models/gpt-sw3-356m#4](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/discussions/4)\r\n🟢 [AI-Sweden-Models/gpt-sw3-1.3b#4](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/discussions/4)\r\n🟢 [AI-Sweden-Models/gpt-sw3-6.7b#5](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/discussions/5)\r\n🟢 [AI-Sweden-Models/gpt-sw3-6.7b-v2#4](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/discussions/4)\r\n🟢 [AI-Sweden-Models/gpt-sw3-20b#6](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/discussions/6)\r\n🟣 [AI-Sweden-Models/gpt-sw3-40b#4](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/discussions/4)\r\n🟢 [facebook/mbart-large-50-one-to-many-mmt#7](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt/discussions/7)\r\n🟢 [facebook/xglm-564M#7](https://huggingface.co/facebook/xglm-564M/discussions/7)\r\n🟢 [facebook/blenderbot-3B#7](https://huggingface.co/facebook/blenderbot-3B/discussions/7)\r\n🟢 [microsoft/prophetnet-large-uncased#2](https://huggingface.co/microsoft/prophetnet-large-uncased/discussions/2)\r\n🟢 [microsoft/xprophetnet-large-wiki100-cased#1](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased/discussions/1)\r\n🟢 [squeezebert/squeezebert-uncased#3](https://huggingface.co/squeezebert/squeezebert-uncased/discussions/3)\r\n🟢 [squeezebert/squeezebert-mnli#2](https://huggingface.co/squeezebert/squeezebert-mnli/discussions/2)\r\n🟢 [squeezebert/squeezebert-mnli-headless#2](https://huggingface.co/squeezebert/squeezebert-mnli-headless/discussions/2)\r\n🟢 [abeja/gpt-neox-japanese-2.7b#5](https://huggingface.co/abeja/gpt-neox-japanese-2.7b/discussions/5)\r\n🟢 [microsoft/biogpt#26](https://huggingface.co/microsoft/biogpt/discussions/26)\r\n🔴 [facebook/wav2vec2-base-960h#11](https://huggingface.co/facebook/wav2vec2-base-960h/discussions/11)\r\n🟣 [moussaKam/mbarthez#2](https://huggingface.co/moussaKam/mbarthez/discussions/2)\r\n🟣 [moussaKam/barthez#3](https://huggingface.co/moussaKam/barthez/discussions/3)\r\n🟣 [moussaKam/barthez-orangesum-title#2](https://huggingface.co/moussaKam/barthez-orangesum-title/discussions/2)\r\n🟢 [vinai/phobert-base#5](https://huggingface.co/vinai/phobert-base/discussions/5)\r\n🟢 
[vinai/phobert-large#2](https://huggingface.co/vinai/phobert-large/discussions/2)\r\n🟢 [facebook/mbart-large-en-ro#3](https://huggingface.co/facebook/mbart-large-en-ro/discussions/3)\r\n🟢 [facebook/mbart-large-cc25#5](https://huggingface.co/facebook/mbart-large-cc25/discussions/5)\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29112). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29112/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29112",
"html_url": "https://github.com/huggingface/transformers/pull/29112",
"diff_url": "https://github.com/huggingface/transformers/pull/29112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29112.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29111/comments | https://api.github.com/repos/huggingface/transformers/issues/29111/events | https://github.com/huggingface/transformers/issues/29111 | 2,142,682,356 | I_kwDOCUB6oc5_trz0 | 29,111 | RWKV5 tokenizer truncation | {
"login": "sedrick-keh-tri",
"id": 133716510,
"node_id": "U_kgDOB_haHg",
"avatar_url": "https://avatars.githubusercontent.com/u/133716510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sedrick-keh-tri",
"html_url": "https://github.com/sedrick-keh-tri",
"followers_url": "https://api.github.com/users/sedrick-keh-tri/followers",
"following_url": "https://api.github.com/users/sedrick-keh-tri/following{/other_user}",
"gists_url": "https://api.github.com/users/sedrick-keh-tri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sedrick-keh-tri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedrick-keh-tri/subscriptions",
"organizations_url": "https://api.github.com/users/sedrick-keh-tri/orgs",
"repos_url": "https://api.github.com/users/sedrick-keh-tri/repos",
"events_url": "https://api.github.com/users/sedrick-keh-tri/events{/privacy}",
"received_events_url": "https://api.github.com/users/sedrick-keh-tri/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ArthurZucker ",
"Hey! THanks for opening an issue. The problem is that you are using `trust_remote_code=True` and thus the `https://huggingface.co/RWKV/v5-Eagle-7B-HF/blob/main/tokenization_rwkv_world.py` file is used. The code is not on transformers yet! \r\n\r\nUse the tokenizer from #29095 should fix this already 🤗 "
] | 1,708 | 1,708 | null | NONE | null | ### System Info
Python 3.10. Transformers 4.37.2
### Who can help?
@arthur
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("RWKV/HF_v5-Eagle-7B", trust_remote_code=True)
long_str = ""
for i in range(10000):
long_str = long_str + " " + str(i)
print(len(long_str)) # 48890
long_str_tokenized = tokenizer(long_str, truncation=True, max_length=2048)
print(len(long_str_tokenized['input_ids'])) # 19900
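Until truncation works, a manual workaround is possible. A minimal sketch, assuming `input_ids` comes back as a plain Python list (this is a hypothetical stopgap, not the proper fix referenced in #29095):

```python
# Slice the ids by hand after tokenizing without truncation.
truncated_ids = long_str_tokenized["input_ids"][:2048]
print(len(truncated_ids))  # 2048
```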
### Expected behavior
Proper truncation to max_length | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29111/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29110/comments | https://api.github.com/repos/huggingface/transformers/issues/29110/events | https://github.com/huggingface/transformers/pull/29110 | 2,142,647,107 | PR_kwDOCUB6oc5nTaYs | 29,110 | Skip failing test_save_load_low_cpu_mem_usage tests | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29110). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I also in favor of reverting the original PR #28948 - if that PR is not something urgent to be in `main`. Adding this many of skip is already a heavy burden, then clean it up later is another one.\r\n\r\n(The test fetcher is indeed not ideal, but there is a trade-off and we can decide how to improve it.)\r\n\r\nHowever, I am OK for this PR to be merged if we decide so - the job is already done (the time is spent) anyway.",
"@ydshieh @ArthurZucker I agree, let's revert for now. \r\n\r\nThe main reason is I don't trust the testing suite at the moment. @hackyon opened another, more complete PR to address the failing models #29118 which believe were retrieved from explicitly testing `test_save_load_low_cpu_mem_usage` for all models. Importantly, there were more models covered there than this PR addresses - highlighting lack of coverage and failing tests not being run. \r\n\r\nHowever, because we're just about to release, let's err on the side of caution. "
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Related to #29043 and #28948
Fixes more failing model tests on main which weren't picked up by the test fetcher cc @ydshieh cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29110",
"html_url": "https://github.com/huggingface/transformers/pull/29110",
"diff_url": "https://github.com/huggingface/transformers/pull/29110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29110.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29109/comments | https://api.github.com/repos/huggingface/transformers/issues/29109/events | https://github.com/huggingface/transformers/pull/29109 | 2,142,417,003 | PR_kwDOCUB6oc5nSns8 | 29,109 | Llama: fix batched generation | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29109). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'll have to run the benchmark on the A100 to make sure everything is alright but otherwise should be good ",
"Alright, no significant slow downs so 🟢 but I can't do naive Dynamic generation with the same script as before: \r\nProbably because I gave ` position_ids = torch.arange(seq_length, device=device)` and they are not unsqueezed\r\n\r\n```python3\r\n File \"/home/arthur/transformers/../static-kv-cache/clean_bench.py\", line 147, in <module>\r\n outputs = model(input_ids, past_key_values=past_key_values,position_ids=position_ids,cache_position=cache_position, return_dict=False, use_cache = True)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1536, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1545, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/arthur/transformers/src/transformers/models/llama/modeling_llama.py\", line 1155, in forward\r\n outputs = self.model(\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1536, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1545, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/arthur/transformers/src/transformers/models/llama/modeling_llama.py\", line 995, in forward\r\n layer_outputs = decoder_layer(\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1536, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1545, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/arthur/transformers/src/transformers/models/llama/modeling_llama.py\", line 721, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1536, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1545, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/arthur/transformers/src/transformers/models/llama/modeling_llama.py\", line 628, in forward\r\n cos, sin = self.rotary_emb(value_states, position_ids, seq_len=None)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1536, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/arthur/miniconda3/envs/py310/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1545, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/arthur/transformers/src/transformers/models/llama/modeling_llama.py\", line 107, in forward\r\n position_ids[:, None, :].float()\r\nIndexError: too many indices for tensor of dimension 1\r\n```",
"@ArthurZucker regarding the benchmark error: position ids should be a 2D tensor, just like the input ids :D I also had to adapt it on my end",
"Alright if passing a 1d before was erroring out! ",
"@gante thanks a lot for this"
] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
Fixes batched inference on llama, after the static cache changes were added. For instance, `RUN_SLOW=1 py.test tests/test_cache_utils.py::CacheIntegrationTest::test_dynamic_cache_beam_search` now passes.
### What was wrong?
`position_ids` has shape `[bsz, seq_len]`. The line computing `freqs` was correct for batch size = 1, but incorrect for larger batch sizes: it was summing the values for the different batch members. Therefore, we need to create another dimension to prevent this sum from happening, which is what this PR does.
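A shape-level sketch of the fix (tensor names are illustrative, not the exact rotary-embedding code from the diff):

```python
import torch

bsz, seq_len, dim = 2, 5, 8
position_ids = torch.arange(seq_len).repeat(bsz, 1)          # [bsz, seq_len]
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim))  # [dim // 2]

# Collapsing over the batch is what went wrong; an explicit batch axis lets
# broadcasting give every sequence its own frequency table instead:
freqs = inv_freq[None, :, None] * position_ids[:, None, :].float()  # [bsz, dim // 2, seq_len]
freqs = freqs.transpose(1, 2)                                       # [bsz, seq_len, dim // 2]
print(freqs.shape)  # torch.Size([2, 5, 4])
```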
### Throughput impact of changes
None 🙌 [Measured on my end, RTX3090 + `TinyLlama/TinyLlama-1.1B-Chat-v1.0`]
Before this PR
![Screenshot 2024-02-19 at 13 10 54](https://github.com/huggingface/transformers/assets/12240844/5b622062-a01d-4408-b81d-6e492e8f74e7)
After this PR
![Screenshot 2024-02-19 at 13 43 29](https://github.com/huggingface/transformers/assets/12240844/2bdc9c25-fba7-43ae-affc-751467962b14)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29109",
"html_url": "https://github.com/huggingface/transformers/pull/29109",
"diff_url": "https://github.com/huggingface/transformers/pull/29109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29109.patch",
"merged_at": 1708424597000
} |
https://api.github.com/repos/huggingface/transformers/issues/29108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29108/comments | https://api.github.com/repos/huggingface/transformers/issues/29108/events | https://github.com/huggingface/transformers/pull/29108 | 2,142,388,605 | PR_kwDOCUB6oc5nShdP | 29,108 | [Phi] Add support for sdpa | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @gugarosa @ArthurZucker @younesbelkada 👋\r\n\r\nI'm looking for more places to add support for SDPA and figured Phi-2 could be a good one. \r\n\r\nBeen reading up on the issues regarding attention overflow for Phi-2 (#28673, #28488), and I think SPDA would probably be affected by it as well (if it chooses the FA kernels). So, this issue is dependent on #28673.\r\n\r\nI think we should at least issue a warning in SDPA attention if the flash attention is available and dtype == float16 or autocast dtype == float16 (not sure if SDPA will try to autocast to fp16 under the hood). ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29108). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the review!\r\n\r\nSounds good, marking this for ready. Let me know if/when there's any follow-up work and I'd happy to take a stab at it.",
"Yup, all the common model and integration tests pass:\r\n\r\n```\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiModelTest::test_eager_matches_sdpa_generate\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiModelTest::test_eager_matches_sdpa_inference_0_float16\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiModelTest::test_eager_matches_sdpa_inference_1_bfloat16\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiModelTest::test_eager_matches_sdpa_inference_2_float32\r\n...\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiIntegrationTest::test_model_phi_1_5_logits\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiIntegrationTest::test_model_phi_1_logits\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiIntegrationTest::test_model_phi_2_logits\r\nPASSED tests/models/phi/test_modeling_phi.py::PhiIntegrationTest::test_phi_2_generation\r\n```",
"Just ran `RUN_SLOW=1 pytest tests/models/phi`:\r\n```python \r\n_______________________________________________ PhiModelTest.test_eager_matches_sdpa_inference_1_bfloat16 ________________________________________________\r\n\r\na = (<tests.models.phi.test_modeling_phi.PhiModelTest testMethod=test_eager_matches_sdpa_inference_1_bfloat16>,), kw = {}\r\n\r\n @wraps(func)\r\n def standalone_func(*a, **kw):\r\n> return func(*(a + p.args), **p.kwargs, **kw)\r\n\r\n../miniconda3/envs/py39/lib/python3.9/site-packages/parameterized/parameterized.py:620: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:3698: in test_eager_matches_sdpa_inference\r\n self.assertTrue(len(fail_cases) == 0, \"\\n\".join(fail_cases))\r\nE AssertionError: False is not true : padding_side=left, use_mask=False, batch_size=5, enable_kernels=True: mean relative difference: 1.154e-02, torch atol = 0.01, torch rtol = 0.03\r\nE padding_side=left, use_mask=True, batch_size=5, enable_kernels=True: mean relative difference: 1.129e-02, torch atol = 0.01, torch rtol = 0.03\r\nE padding_side=right, use_mask=False, batch_size=5, enable_kernels=True: mean relative difference: 1.154e-02, torch atol = 0.01, torch rtol = 0.03\r\nE padding_side=right, use_mask=True, batch_size=5, enable_kernels=True: mean relative difference: 1.129e-02, torch atol = 0.01, torch rtol = 0.03\r\n````\r\nI think it is acceptable",
"Thanks @hackyon 😉 ",
"Thanks!\r\n\r\nI just ran the PhiModelTest again on my server again and it still passes, so it seems like a config issue :/ I'm running these tests/benchmarks on Paperspace ml-in-a-box instance with A100 GPUs. \r\n\r\nLet me know if you have recommendations on any better setup/config to use.",
"No worries might be flaky as well 1e-2 is alright I think I have torch nightly as well "
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
Adding support for SDPA to Phi (See #28005)
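Once merged, SDPA can be requested at load time. A minimal usage sketch (checkpoint name taken from the integration tests; the dtype choice reflects the fp16 overflow caveat discussed in the comments):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.bfloat16,  # fp16 can overflow on Phi-2, per the thread above
    attn_implementation="sdpa",
)
```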
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@fxmarty @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29108",
"html_url": "https://github.com/huggingface/transformers/pull/29108",
"diff_url": "https://github.com/huggingface/transformers/pull/29108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29108.patch",
"merged_at": 1708435992000
} |
https://api.github.com/repos/huggingface/transformers/issues/29107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29107/comments | https://api.github.com/repos/huggingface/transformers/issues/29107/events | https://github.com/huggingface/transformers/issues/29107 | 2,142,291,862 | I_kwDOCUB6oc5_sMeW | 29,107 | Cannot use time series transformer as encoder and gpt model as decoder using encoder decoder architecture from hugging face | {
"login": "nikhilajoshy",
"id": 37141775,
"node_id": "MDQ6VXNlcjM3MTQxNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/37141775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhilajoshy",
"html_url": "https://github.com/nikhilajoshy",
"followers_url": "https://api.github.com/users/nikhilajoshy/followers",
"following_url": "https://api.github.com/users/nikhilajoshy/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhilajoshy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhilajoshy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhilajoshy/subscriptions",
"organizations_url": "https://api.github.com/users/nikhilajoshy/orgs",
"repos_url": "https://api.github.com/users/nikhilajoshy/repos",
"events_url": "https://api.github.com/users/nikhilajoshy/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhilajoshy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @nikhilajoshy, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports."
] | 1,708 | 1,708 | null | NONE | null | ### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
EncoderDecoderModel.from_pretrained()
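Presumably the intended construction is something like the sketch below, which is expected to fail: `EncoderDecoderModel` assumes a text encoder fed with tokenizer-produced `input_ids`, while the time series model consumes `past_values` tensors.

```python
from transformers import EncoderDecoderModel

# Hypothetical reconstruction of the attempted call, not a working recipe.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "huggingface/time-series-transformer-tourism-monthly",
    "openai-community/gpt2",
)
```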
### Expected behavior
I want to use EncoderDecoderModel.from_pretrained() where huggingface/time-series-transformer-tourism-monthly is the encoder and openai-community/gpt2 is the decoder, but as mentioned in https://huggingface.co/blog/warm-starting-encoder-decoder, the TS transformer does not have a tokenizer, and hence I am not sure what to use as encoder input ids | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29107/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29106/comments | https://api.github.com/repos/huggingface/transformers/issues/29106/events | https://github.com/huggingface/transformers/pull/29106 | 2,142,285,524 | PR_kwDOCUB6oc5nSKnH | 29,106 | support SDPA Attention in stablelm | {
"login": "eaidova",
"id": 29454499,
"node_id": "MDQ6VXNlcjI5NDU0NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29454499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eaidova",
"html_url": "https://github.com/eaidova",
"followers_url": "https://api.github.com/users/eaidova/followers",
"following_url": "https://api.github.com/users/eaidova/following{/other_user}",
"gists_url": "https://api.github.com/users/eaidova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eaidova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eaidova/subscriptions",
"organizations_url": "https://api.github.com/users/eaidova/orgs",
"repos_url": "https://api.github.com/users/eaidova/repos",
"events_url": "https://api.github.com/users/eaidova/events{/privacy}",
"received_events_url": "https://api.github.com/users/eaidova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @fxmarty ",
"> Looks alright but we need to add an integration test iMO :)\r\n\r\nadded",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
enable SDPA attention in stablelm architecture
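A sketch of the eager-vs-SDPA equivalence check such a change calls for (checkpoint and tolerance are illustrative, not the exact integration test that was added):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-3b-4e1t"  # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
inputs = tok("My favourite condiment is", return_tensors="pt")

eager = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="eager")
sdpa = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa")

with torch.no_grad():
    logits_eager = eager(**inputs).logits
    logits_sdpa = sdpa(**inputs).logits
print(torch.allclose(logits_eager, logits_sdpa, atol=1e-3))
```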
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29106/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29106",
"html_url": "https://github.com/huggingface/transformers/pull/29106",
"diff_url": "https://github.com/huggingface/transformers/pull/29106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29106.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29105/comments | https://api.github.com/repos/huggingface/transformers/issues/29105/events | https://github.com/huggingface/transformers/pull/29105 | 2,142,229,012 | PR_kwDOCUB6oc5nR-Lw | 29,105 | Fix the `bert-base-cased` tokenizer configuration test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | MEMBER | null | In the process of updating the tokenizer configurations on the Hub, this test needs to be updated to reflect the new value of the configuration. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29105",
"html_url": "https://github.com/huggingface/transformers/pull/29105",
"diff_url": "https://github.com/huggingface/transformers/pull/29105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29105.patch",
"merged_at": 1708345405000
} |
https://api.github.com/repos/huggingface/transformers/issues/29104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29104/comments | https://api.github.com/repos/huggingface/transformers/issues/29104/events | https://github.com/huggingface/transformers/pull/29104 | 2,142,219,632 | PR_kwDOCUB6oc5nR8G6 | 29,104 | Added image_captioning version in es and included in toctree file | {
"login": "gisturiz",
"id": 48292332,
"node_id": "MDQ6VXNlcjQ4MjkyMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/48292332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gisturiz",
"html_url": "https://github.com/gisturiz",
"followers_url": "https://api.github.com/users/gisturiz/followers",
"following_url": "https://api.github.com/users/gisturiz/following{/other_user}",
"gists_url": "https://api.github.com/users/gisturiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gisturiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gisturiz/subscriptions",
"organizations_url": "https://api.github.com/users/gisturiz/orgs",
"repos_url": "https://api.github.com/users/gisturiz/repos",
"events_url": "https://api.github.com/users/gisturiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/gisturiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
Translated image_captioning from en to es, from issue https://github.com/huggingface/transformers/issues/28936 begun by @stevhliu. I will continue to go through the documentation and make further translations.
(closed the previous PR and reopened it due to a rebasing issue)
Fixes # https://github.com/huggingface/transformers/issues/28936
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu @aaronjimv
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29104",
"html_url": "https://github.com/huggingface/transformers/pull/29104",
"diff_url": "https://github.com/huggingface/transformers/pull/29104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29104.patch",
"merged_at": 1708449195000
} |
https://api.github.com/repos/huggingface/transformers/issues/29103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29103/comments | https://api.github.com/repos/huggingface/transformers/issues/29103/events | https://github.com/huggingface/transformers/issues/29103 | 2,142,189,413 | I_kwDOCUB6oc5_rzdl | 29,103 | Request to add FLMR | {
"login": "LinWeizheDragon",
"id": 33350454,
"node_id": "MDQ6VXNlcjMzMzUwNDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/33350454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LinWeizheDragon",
"html_url": "https://github.com/LinWeizheDragon",
"followers_url": "https://api.github.com/users/LinWeizheDragon/followers",
"following_url": "https://api.github.com/users/LinWeizheDragon/following{/other_user}",
"gists_url": "https://api.github.com/users/LinWeizheDragon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LinWeizheDragon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinWeizheDragon/subscriptions",
"organizations_url": "https://api.github.com/users/LinWeizheDragon/orgs",
"repos_url": "https://api.github.com/users/LinWeizheDragon/repos",
"events_url": "https://api.github.com/users/LinWeizheDragon/events{/privacy}",
"received_events_url": "https://api.github.com/users/LinWeizheDragon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### Model description
## Basic Information
This issue requests adding Fine-grained Late-interaction Multi-modal Retriever (FLMR).
The model leverages late interaction (as originally proposed by Stanford [ColBERT](https://github.com/stanford-futuredata/ColBERT)) to compute token-level similarity between every query token and document token, which enables more accurate retrieval relative to DPR-like systems.
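For intuition, late-interaction (MaxSim) scoring reduces to a few lines. A sketch with illustrative tensor names, not FLMR's actual API:

```python
import torch

def late_interaction_score(q: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # q: [num_query_tokens, dim], d: [num_doc_tokens, dim], rows L2-normalized
    sim = q @ d.T                        # token-level cosine similarities
    return sim.max(dim=-1).values.sum()  # best doc token per query token, summed

q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(180, 128), dim=-1)
print(late_interaction_score(q, d))
```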
The model was proposed in [here](https://openreview.net/forum?id=IWWWulAX7g) (NeurIPS 2023) and [here](https://arxiv.org/abs/2402.08327) (a follow-up version that was pre-trained on more than ten million of multi-modal retrieval data).
## Resources
- Project page [here](https://preflmr.github.io/)
- Official codebase [here](https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering)
- The pre-trained checkpoints are [here](https://huggingface.co/models?search=PreFLMR).
## Why adding this model
1. This work has gained attention from researchers across the world. We received many requests during NeurIPS 2023 to provide an easy implementation of this model.
2. The work has been recognized by the authors of ColBERT [twitter post](https://twitter.com/lateinteraction/status/1757652639503007893)
3. There exists no implementation of late-interaction retrieval models in hf-transformers, even though they have been extensively researched in recent years.
4. There are many requests in the original codebase [an example issue](https://github.com/LinWeizheDragon/Retrieval-Augmented-Visual-Question-Answering/issues/24)
5. As original authors, we have finished 99% of the model, including an example for indexing and searching. Limited work is required to have it on huggingface-transformers! #29062
### Open source status
- [x] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The PR is already here:
#29062 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29103/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29102/comments | https://api.github.com/repos/huggingface/transformers/issues/29102/events | https://github.com/huggingface/transformers/pull/29102 | 2,141,970,710 | PR_kwDOCUB6oc5nRFh9 | 29,102 | Fix two tiny typos in `pipelines/base.py::Pipeline::_sanitize_parameters()`'s docstring | {
"login": "sadra-barikbin",
"id": 22097587,
"node_id": "MDQ6VXNlcjIyMDk3NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/22097587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadra-barikbin",
"html_url": "https://github.com/sadra-barikbin",
"followers_url": "https://api.github.com/users/sadra-barikbin/followers",
"following_url": "https://api.github.com/users/sadra-barikbin/following{/other_user}",
"gists_url": "https://api.github.com/users/sadra-barikbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadra-barikbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadra-barikbin/subscriptions",
"organizations_url": "https://api.github.com/users/sadra-barikbin/orgs",
"repos_url": "https://api.github.com/users/sadra-barikbin/repos",
"events_url": "https://api.github.com/users/sadra-barikbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadra-barikbin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One more thing. We have `False` for default value of `clean_up_tokenization_spaces` argument in `TextGenerationPipeline`'s docstring:\r\nhttps://github.com/huggingface/transformers/blob/593230f0a1150ea9c0477b9d859f25daf73c8c33/src/transformers/pipelines/text_generation.py#L207-L208\r\n\r\nbut its default value in `TextGenerationPipeline::postprocess` is `True`:\r\nhttps://github.com/huggingface/transformers/blob/593230f0a1150ea9c0477b9d859f25daf73c8c33/src/transformers/pipelines/text_generation.py#L336",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29102). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@sadra-barikbin Good spot! Could you add a fix for the docstring in this PR too?",
"The pipelines in `text2text_generation.py` class have `clean_up_tokenization_spaces=False` as default. This has been the desired behavior?",
"@sadra-barikbin Yes, I believe so. Changing the default behaviour would be a breaking change, so we should keep as-is"
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Hi there!
To fix two tiny typos in `pipelines/base.py::Pipeline::_sanitize_parameters()`'s docstring.
@Narsil
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29102",
"html_url": "https://github.com/huggingface/transformers/pull/29102",
"diff_url": "https://github.com/huggingface/transformers/pull/29102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29102.patch",
"merged_at": 1708368629000
} |
https://api.github.com/repos/huggingface/transformers/issues/29101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29101/comments | https://api.github.com/repos/huggingface/transformers/issues/29101/events | https://github.com/huggingface/transformers/issues/29101 | 2,141,799,722 | I_kwDOCUB6oc5_qUUq | 29,101 | Models with remote code are not loaded correctly when there's `.` in their name. | {
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 ",
"Hi @BlackSamorez, this issue has been reported already at #28919. We're working on a fix right now! I'm going to close this issue as a duplicate."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | ### System Info
```
- `transformers` version: 4.37.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.28.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.8.1 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
quantized_model = AutoModelForCausalLM.from_pretrained(
"BlackSamorez/Mixtral-8x7B-Instruct-v0.1-AQLM-2Bit-1x16-hf",
trust_remote_code=True, torch_dtype="auto", device_map="cuda"
)
```
produces:
```
ModuleNotFoundError: No module named 'transformers_modules.BlackSamorez.Mixtral-8x7B-Instruct-v0'
```
It's clear that `.` in the name breaks custom code imports.
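A small illustration of the mechanism: the repo id is mapped onto a Python module path, and every `.` becomes a package separator (the paths shown illustrate the failure mode, not the exact loader code):

```python
repo_id = "BlackSamorez/Mixtral-8x7B-Instruct-v0.1-AQLM-2Bit-1x16-hf"
module_path = "transformers_modules." + repo_id.replace("/", ".")
print(module_path)
# Python resolves each dot as a package boundary, so the import stops at the first
# component it cannot find: 'transformers_modules.BlackSamorez.Mixtral-8x7B-Instruct-v0'
```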
### Expected behavior
Expected that the model is loaded correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29101/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29100/comments | https://api.github.com/repos/huggingface/transformers/issues/29100/events | https://github.com/huggingface/transformers/issues/29100 | 2,141,751,170 | I_kwDOCUB6oc5_qIeC | 29,100 | Getting Assertion Error when calling neo4j chain for inference | {
"login": "KaifAhmad1",
"id": 98801504,
"node_id": "U_kgDOBeOXYA",
"avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaifAhmad1",
"html_url": "https://github.com/KaifAhmad1",
"followers_url": "https://api.github.com/users/KaifAhmad1/followers",
"following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}",
"gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions",
"organizations_url": "https://api.github.com/users/KaifAhmad1/orgs",
"repos_url": "https://api.github.com/users/KaifAhmad1/repos",
"events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaifAhmad1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @KaifAhmad1, thanks for opening an issue! \r\n\r\nPlease make sure to provide a minimal code reproducer and information about the bug encountered, including the full error traceback when reporting an issue.\r\n\r\nIf the error is coming from `bitsandbytes` there isn't anything the transformers team can do. ",
"Hey, @amyeroberts \r\nI have tagged this issue with bitsandbytes maintainers according to transformers documentation \r\n@SunMarc @younesbelkada \r\n\r\nbitsandbytes = 0.42.0\r\npip = 24.0\r\npython = 3.10.10\r\ncuda = 12.1\r\nOS = windows 11 x64 \r\n\r\n\r\n``` Python \r\nimport torch\r\nfrom torch import cuda, bfloat16\r\nimport transformers\r\nmodel_id = 'microsoft/phi-2'\r\ndevice = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\r\n\r\n# begin initializing HF items, you need an access token\r\nmodel_config = transformers.AutoConfig.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth,\r\n trust_remote_code=True\r\n)\r\n\r\n# BnB Configuration\r\nbnb_config = transformers.BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_quant_type='nf4',\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_compute_dtype=bfloat16\r\n)\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n quantization_config=bnb_config,\r\n low_cpu_mem_usage=True\r\n)\r\n\r\n# How model looks like:\r\nmodel.eval()\r\n\r\n\r\nfrom langchain.chains import GraphCypherQAChain\r\nfrom langchain.graphs import Neo4jGraph\r\n \r\n\r\nfrom langchain.chains.base import Chain\r\nfrom langchain.chains.llm import LLMChain\r\nfrom langchain.chat_models import ChatOpenAI\r\nfrom langchain.chains.question_answering.stuff_prompt import CHAT_PROMPT\r\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\r\nfrom typing import Any, Dict, List\r\nfrom pydantic import Field\r\n\r\n\r\n\r\nvector_search = \"\"\"\r\nWITH \r\nk, e) yield node, score\r\nRETURN node.text AS result\r\nORDER BY score DESC\r\nLIMIT 3\r\n\"\"\"\r\n\r\nprint(graph.schema)\r\n\r\nclass Neo4jVectorChain(Chain):\r\n graph: Neo4jGraph = Field(exclude=True)\r\n input_key: str = \"query\"\r\n output_key: str = \"result\"\r\n embeddings: HuggingFaceBgeEmbeddings = HuggingFaceBgeEmbeddings()\r\n qa_chain: LLMChain = LLMChain(llm=llm, prompt=CHAT_PROMPT)\r\n\r\n @property\r\n def input_keys(self) -> List[str]:\r\n return [self.input_key]\r\n\r\n @property\r\n def output_keys(self) -> List[str]:\r\n _output_keys = [self.output_key]\r\n return _output_keys\r\n\r\n def _call(self, inputs: Dict[str, str], run_manager, k=3) -> Dict[str, Any]:\r\n question = inputs[self.input_key]\r\n embedding = self.embeddings.embed_query(question)\r\n\r\n context = self.graph.query(vector_search, {'embedding': embedding, 'k': 3})\r\n context = [el['result'] for el in context]\r\n\r\n result = self.qa_chain({\"question\": question, \"context\": context})\r\n final_result = result[self.qa_chain.output_key]\r\n return {self.output_key: final_result}\r\n\r\nchain = Neo4jVectorChain(graph=graph, embeddings=embeddings, verbose=True)\r\n\r\ngraph_result = chain.run(\"How can we enhance the specificity and efficiency of CRISPR/Cas9 gene-editing technology to minimize off-target effects and increase its potential for therapeutic applications?\")\r\n\r\n\r\n\r\n```\r\n\r\n```\r\n> Entering new Neo4jVectorChain chain...\r\n/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\r\n warn_deprecated(\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:392: UserWarning: `do_sample` is set to `False`. 
However, `temperature` is set to `0.3` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.\r\n warnings.warn(\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nFP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n[<ipython-input-42-4ff3ab735a16>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 graph_result = chain.run(\"How can we enhance the specificity and efficiency of CRISPR/Cas9 gene-editing technology to minimize off-target effects and increase its potential for therapeutic applications?\")\r\n\r\n49 frames\r\n[/usr/local/lib/python3.10/dist-packages/bitsandbytes/autograd/_functions.py](https://localhost:8080/#) in matmul_4bit(A, B, quant_state, out, bias)\r\n 564 \r\n 565 def matmul_4bit(A: tensor, B: tensor, quant_state: F.QuantState, out: tensor = None, bias=None):\r\n--> 566 assert quant_state is not None\r\n 567 if A.numel() == A.shape[-1] and A.requires_grad == False:\r\n 568 if A.shape[-1] % quant_state.blocksize != 0:\r\n\r\nAssertionError:\r\n```",
"Hi @KaifAhmad1 \r\nThanks very much for the issue ! \r\nYou are using the trust_remote_code model that we don't maintain, can you try out phi-2 without `trust_remote_code` ? I think 4bit should work out of the box with the non-trust_remote_code model"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
langchain version = 0.1.7
bitsandbytes = 0.42.0
pip = 24.0
cuda = 12.1
OS = Windows 11 x64
### Who can help?
Hey, @SunMarc @younesbelkada please help me out.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've brought up this concern on LangChain, but Dosu-Bot indicated that it's actually related to bitsandbytes.
Here is the discussion link and issue: https://github.com/langchain-ai/langchain/discussions/17701
Also raised on the bitsandbytes repo, but did not get support there. Link: https://github.com/TimDettmers/bitsandbytes/issues/1067
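For reference, a minimal sketch of the workaround suggested in the comments above — loading phi-2 in 4-bit without `trust_remote_code`, reusing the quantization settings from the snippet. This is an illustration, not a verified fix:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# No trust_remote_code: use the in-library phi-2 implementation.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    quantization_config=bnb_config,
)
model.eval()
```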
### Expected behavior
It will give the answers without raising the exception. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29100/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29099/comments | https://api.github.com/repos/huggingface/transformers/issues/29099/events | https://github.com/huggingface/transformers/pull/29099 | 2,141,668,363 | PR_kwDOCUB6oc5nQDN9 | 29,099 | Fix the behavior of collecting 'num_input_tokens_seen' | {
"login": "YouliangHUANG",
"id": 56789071,
"node_id": "MDQ6VXNlcjU2Nzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/56789071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YouliangHUANG",
"html_url": "https://github.com/YouliangHUANG",
"followers_url": "https://api.github.com/users/YouliangHUANG/followers",
"following_url": "https://api.github.com/users/YouliangHUANG/following{/other_user}",
"gists_url": "https://api.github.com/users/YouliangHUANG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YouliangHUANG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YouliangHUANG/subscriptions",
"organizations_url": "https://api.github.com/users/YouliangHUANG/orgs",
"repos_url": "https://api.github.com/users/YouliangHUANG/repos",
"events_url": "https://api.github.com/users/YouliangHUANG/events{/privacy}",
"received_events_url": "https://api.github.com/users/YouliangHUANG/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
The length of "inputs[main_input_name]" is not guaranteed to be the same when using DDP, which may make the training process hang. Besides, in a distributed setup, it costs a lot to gather the WHOLE input tensors on different workers. It is better to call .numel() first and then .gather().
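A rough illustration of the idea (not the Trainer's actual code): reduce a per-rank scalar token count instead of gathering variable-length input tensors across workers.

```python
import torch
import torch.distributed as dist

def tokens_seen_this_step(input_ids: torch.Tensor) -> int:
    # Count locally first, then sum the scalar across ranks; this avoids
    # gathering whole (possibly differently-shaped) input tensors.
    count = torch.tensor(input_ids.numel(), device=input_ids.device, dtype=torch.long)
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(count, op=dist.ReduceOp.SUM)
    return int(count.item())
```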
Fixes https://github.com/huggingface/transformers/issues/28791
The modified code has already passed the relevant test `tests/trainer/test_trainer_distributed.py` with the additional argument `include_num_input_tokens_seen`.
## Who can review?
@pacman100
@muellerzr
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29099",
"html_url": "https://github.com/huggingface/transformers/pull/29099",
"diff_url": "https://github.com/huggingface/transformers/pull/29099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29099.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29098/comments | https://api.github.com/repos/huggingface/transformers/issues/29098/events | https://github.com/huggingface/transformers/issues/29098 | 2,141,628,529 | I_kwDOCUB6oc5_pqhx | 29,098 | Flashatten2 avaiable should handle if hardware support or not? | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @lucasjinreal, thanks for opening an issue! \r\n\r\nPlease make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide: \r\n* A minimal code reproducer\r\n* All relevant error information including the full error traceback\r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n\r\n"
] | 1,708 | 1,708 | null | NONE | null | On some Docker images flash-attn is installed even though the GPU (e.g. a V100) does not support it.
In that case the availability check returns True, but the model raises an error at runtime.
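A minimal sketch of the kind of hardware check being asked for — the flash-attn 2 kernels require an Ampere-class GPU (compute capability 8.0+), while a V100 reports 7.0, so checking only whether the package imports is not enough:

```python
import torch

def flash_attn_2_hardware_ok(device: int | None = None) -> bool:
    # The package being importable is not enough; the kernels also need
    # compute capability >= 8.0 (Ampere or newer). A V100 is (7, 0).
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability(device)
    return major >= 8
```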
Also, since PyTorch >= 2.2 already ships a FlashAttention-2 kernel internally, should there be a torch 2.2 check that falls back to the built-in implementation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29098/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29097/comments | https://api.github.com/repos/huggingface/transformers/issues/29097/events | https://github.com/huggingface/transformers/pull/29097 | 2,141,556,123 | PR_kwDOCUB6oc5nPqeA | 29,097 | change version | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29097). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"We still have failing for `test_save_load_low_cpu_mem_usage` which are already on `main`.\r\n\r\nOn this PR, the accelerate is `0.27.2` and on `main`, the CI uses `accelerate` main. So I don't think the change of this PR (on the `accelerate` part address the issue)\r\n\r\n@amyeroberts mentions she has already some work on solving `test_save_load_low_cpu_mem_usage` .\r\n\r\n@ArthurZucker mentions he don't want to use `accelerate main` on our CI.\r\n\r\nSo let's keep this PR as it is despite there is `test_save_load_low_cpu_mem_usage` failing, if @amyeroberts agrees.\r\n\r\ncc @muellerzr ",
"p.s. `test_beam_search_low_memory` also happens for which I don't see on `main` CI.",
"@ydshieh Do we know the original reasons for using accelerate main rather than the stable release in the CI? ",
"@ydshieh Just to confirm - changing the accelerate version here would mean that `test_save_load_low_cpu_mem_usage` starts to fail despite the fixes on `main`. Or that there's still failures on `main`? ",
"@amyeroberts I didn't know the fix of `test_save_load_low_cpu_mem_usage ` is already on `main`.\r\n\r\nHowever, that fix is merged 4 days ago, right? But last night's run \r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/84737/workflows/c2fc5312-c0bf-4620-b523-74c1a99b0d1b/jobs/1096153/tests\r\n\r\nhaving those failures.\r\n\r\nSo it's not this PR introducing this again. (i.e. those failures are still on `main` after the fix).\r\n\r\n\r\n",
"I propose to revert the changes regarding accelerate in this PR (as it didn't fix the issue).\r\n\r\nIf eventually everyone agrees with @ArthurZucker, we can open another PR to use `released accelerate` for CI - but this is not the scope of this PR (and I don't want to mix the 2 different stuff in a single PR).",
"@ydshieh OK, what I suggest for the moment is skipping all the models where this fails at the moment. \r\n\r\nSeparate from this PR, I think there's a issue with our test fetcher. There's been quite a few PRs recently where the whole CI has been green and then after merging into main, another PR which touches related (but different) code suddenly starts to have CI failures. These tests are one example. Two other cases would be: \r\n* Failures coming from the static cache PR\r\n* Failures in coming from [processing fix for detr](https://github.com/huggingface/transformers/issues/28530) which later triggered [errors in a different PR](https://github.com/huggingface/transformers/pull/28312).",
"> If eventually everyone agrees with @ArthurZucker, we can open another PR to use released accelerate for CI - but this is not the scope of this PR (and I don't want to mix the 2 different stuff in a single PR).\r\n\r\nI'm pro splitting and pro having stable accelerate for PR's CI and `main` accelerate on a nightly run",
"Will take a look over the test fetcher. But remember that test fetcher have some design decision (by great @sgugger ) to balance the coverage as well as the speed of CI, so the situation described is not surprising (at least to me since I faced it a few times already and some fix were already done).\r\n\r\n(One related PR but not exact is #28816)",
"Regarding skipping all the models where this fails at the moment, I will leave it to anyone that is so motivated to beat me, otherwise I can do it in about 9 hours.\r\n\r\nFor now, I will just wait the CI of this PR and merge it.",
"Thanks both 🤗 "
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Try to change cached version | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29097/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29097",
"html_url": "https://github.com/huggingface/transformers/pull/29097",
"diff_url": "https://github.com/huggingface/transformers/pull/29097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29097.patch",
"merged_at": 1708351724000
} |
https://api.github.com/repos/huggingface/transformers/issues/29096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29096/comments | https://api.github.com/repos/huggingface/transformers/issues/29096/events | https://github.com/huggingface/transformers/pull/29096 | 2,141,530,987 | PR_kwDOCUB6oc5nPk6E | 29,096 | Fix: Fixed the previous tracking URI setting logic to prevent clashes with original MLflow code. | {
"login": "seanswyi",
"id": 20367759,
"node_id": "MDQ6VXNlcjIwMzY3NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/20367759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanswyi",
"html_url": "https://github.com/seanswyi",
"followers_url": "https://api.github.com/users/seanswyi/followers",
"following_url": "https://api.github.com/users/seanswyi/following{/other_user}",
"gists_url": "https://api.github.com/users/seanswyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanswyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanswyi/subscriptions",
"organizations_url": "https://api.github.com/users/seanswyi/orgs",
"repos_url": "https://api.github.com/users/seanswyi/repos",
"events_url": "https://api.github.com/users/seanswyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanswyi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@seanswyi For the failing tests, these are unrelated and a known issue happening on our CI. We're currently working on a fix for it and will let you know asap once it's merged and you can rebase to get the CI green",
"@amyeroberts No worries, thanks for the heads up!",
"@seanswyi The failing tests should now be resolved. Could you try rebasing on `main`? ",
"@amyeroberts Just checked that the test pass. Thanks again for the heads up!",
"@seanswyi I think something funny has happened with the history in this PR - the diff is now showing changes coming from other unrelated commits, which are also now in the commit history. Did you force push after rebasing? This is necessary as rebasing is effectively rewriting the history",
"@amyeroberts Ah, you're right. And no, I don't believe I did a force push, just a regular push. I'll rebase and fix this now."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
The previous code called the `mlflow.set_tracking_uri` function regardless of whether the environment variable `MLFLOW_TRACKING_URI` was even set. This led to clashes with MLflow's own URI resolution, so the logic was changed so that the function is only called when the environment variable is explicitly set.
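A sketch of the guarded behaviour described above (illustrative, not the callback's exact code):

```python
import os
import mlflow

tracking_uri = os.environ.get("MLFLOW_TRACKING_URI")
if tracking_uri is not None:
    # Only override the tracking URI when the user explicitly asked for it...
    mlflow.set_tracking_uri(tracking_uri)
# ...otherwise leave whatever URI mlflow resolved on its own untouched.
```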
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
~~Fixes # (issue)~~ Doesn't fix a specific issue but addresses a problem that was brought up as a comment in the aforementioned PR: https://github.com/huggingface/transformers/pull/29032#issuecomment-1951595909
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/omnious/model-classifier/blob/885efa563cba97e58ed5c5874a2c9d0f32daded2/tests/unit/test_model.py#L1185
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Maintainers: @amyeroberts @muellerzr @pacman100
Original author of MLflowCallback: @noise-field
Author of comment and MLflow maintainer: @harupy
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29096/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29096",
"html_url": "https://github.com/huggingface/transformers/pull/29096",
"diff_url": "https://github.com/huggingface/transformers/pull/29096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29096.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29095/comments | https://api.github.com/repos/huggingface/transformers/issues/29095/events | https://github.com/huggingface/transformers/pull/29095 | 2,141,352,273 | PR_kwDOCUB6oc5nO96x | 29,095 | [`RWKV5`] Add support for RWKV5 model | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29095). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Not sure if this is the appropriate place to post this, but I've been running into some issues when using the tokenizer `AutoTokenizer.from_pretrained(\"RWKV/v5-Eagle-7B-HF\")` and I was told to forward my message here. Wondering if this is a feature or a bug? \r\n\r\nOriginal issue https://github.com/BlinkDL/RWKV-LM/issues/224\r\n",
"😅 could you open a seperate issue and ping me there?! 🤗 ",
"Gotcha, starting a separate issue makes sense. Thanks! https://github.com/huggingface/transformers/issues/29111"
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Adds RWKV5; supersedes #26963 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29095/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29095",
"html_url": "https://github.com/huggingface/transformers/pull/29095",
"diff_url": "https://github.com/huggingface/transformers/pull/29095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29095.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29094/comments | https://api.github.com/repos/huggingface/transformers/issues/29094/events | https://github.com/huggingface/transformers/issues/29094 | 2,141,335,890 | I_kwDOCUB6oc5_ojFS | 29,094 | altclip can not be traced by fx? | {
"login": "TXacs",
"id": 60869411,
"node_id": "MDQ6VXNlcjYwODY5NDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TXacs",
"html_url": "https://github.com/TXacs",
"followers_url": "https://api.github.com/users/TXacs/followers",
"following_url": "https://api.github.com/users/TXacs/following{/other_user}",
"gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TXacs/subscriptions",
"organizations_url": "https://api.github.com/users/TXacs/orgs",
"repos_url": "https://api.github.com/users/TXacs/repos",
"events_url": "https://api.github.com/users/TXacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/TXacs/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @TXacs, thanks for raising this issue! \r\n\r\nYou need to pass in `input_names` to `symbolic_trace`:\r\n\r\n```py\r\nfrom config import load_config\r\n\r\nfrom transformers import (\r\n AltCLIPModel,\r\n AltCLIPConfig,\r\n)\r\nfrom transformers.utils.fx import symbolic_trace\r\n\r\ndef main():\r\n config_kwarg = load_config(\"altclip\")\r\n config = AltCLIPConfig(\r\n **config_kwarg\r\n )\r\n\r\n # create model\r\n model = AltCLIPModel(config)\r\n\r\n traced_model = symbolic_trace(model, ['input_ids', 'pixel_values'])\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"> Hi @TXacs, thanks for raising this issue!\r\n> \r\n> You need to pass in `input_names` to `symbolic_trace`:\r\n> \r\n> ```python\r\n> from config import load_config\r\n> \r\n> from transformers import (\r\n> AltCLIPModel,\r\n> AltCLIPConfig,\r\n> )\r\n> from transformers.utils.fx import symbolic_trace\r\n> \r\n> def main():\r\n> config_kwarg = load_config(\"altclip\")\r\n> config = AltCLIPConfig(\r\n> **config_kwarg\r\n> )\r\n> \r\n> # create model\r\n> model = AltCLIPModel(config)\r\n> \r\n> traced_model = symbolic_trace(model, ['input_ids', 'pixel_values'])\r\n> \r\n> if __name__ == \"__main__\":\r\n> main()\r\n> ```\r\n\r\nThanks a lot! It solved."
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.1
- Platform: Linux-5.4.0-47-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from config import load_config
from transformers import (
AltCLIPModel,
AltCLIPConfig,
)
from transformers.utils.fx import symbolic_trace
def main():
config_kwarg = load_config("altclip")
config = AltCLIPConfig(
**config_kwarg
)
# create model
model = AltCLIPModel(config)
traced_model = symbolic_trace(model)
if __name__ == "__main__":
main()
```
### Expected behavior
When I run the above code, it reports an error: `ValueError: You have to specify pixel_values.`
Can somebody help me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29094/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29093/comments | https://api.github.com/repos/huggingface/transformers/issues/29093/events | https://github.com/huggingface/transformers/issues/29093 | 2,141,302,021 | I_kwDOCUB6oc5_oa0F | 29,093 | Generation doesn't work as expected with input_embeds | {
"login": "dipta007",
"id": 13894030,
"node_id": "MDQ6VXNlcjEzODk0MDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13894030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipta007",
"html_url": "https://github.com/dipta007",
"followers_url": "https://api.github.com/users/dipta007/followers",
"following_url": "https://api.github.com/users/dipta007/following{/other_user}",
"gists_url": "https://api.github.com/users/dipta007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dipta007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipta007/subscriptions",
"organizations_url": "https://api.github.com/users/dipta007/orgs",
"repos_url": "https://api.github.com/users/dipta007/repos",
"events_url": "https://api.github.com/users/dipta007/events{/privacy}",
"received_events_url": "https://api.github.com/users/dipta007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@dipta007 ,hi! This [PR](https://github.com/huggingface/transformers/pull/28994) fixed it",
"@dipta007 as @zucchini-nlp wrote, if you add `!pip install --upgrade git+https://github.com/huggingface/transformers.git` on top of your notebook it will work 🤗 ",
"Thanks"
] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.1 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Colab: https://colab.research.google.com/drive/1tIgcJVofkzTic9ChhG8rNAQmFjrE1HYd?usp=sharing
### Expected behavior
Output should be generated both when using `input_ids` and when using `inputs_embeds`.
But when `inputs_embeds` is given, the input length is not counted toward the `max_length` stopping criterion, so an exception is raised when the output exceeds `model.max_length`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29092/comments | https://api.github.com/repos/huggingface/transformers/issues/29092/events | https://github.com/huggingface/transformers/pull/29092 | 2,141,298,539 | PR_kwDOCUB6oc5nOyRk | 29,092 | FIX [`bnb` / `tests`]: Fix currently failing bnb tests | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29092). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
https://github.com/huggingface/transformers/pull/29001 changed the logic of how linear layers are retrieved from the test models. In fact, the `model_type` should always stay `"gpt2"`, not `"openai-community/gpt2"`
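For illustration — `model_type` is the architecture family name, which is distinct from the hub repo id used to download the checkpoint:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("openai-community/gpt2")
print(config.model_type)  # "gpt2" — the family name, not the repo id
```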
cc @amyeroberts @Titus-von-Koeller | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29092",
"html_url": "https://github.com/huggingface/transformers/pull/29092",
"diff_url": "https://github.com/huggingface/transformers/pull/29092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29092.patch",
"merged_at": 1708335552000
} |
https://api.github.com/repos/huggingface/transformers/issues/29091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29091/comments | https://api.github.com/repos/huggingface/transformers/issues/29091/events | https://github.com/huggingface/transformers/pull/29091 | 2,141,252,840 | PR_kwDOCUB6oc5nOoYV | 29,091 | fix the post-processing link | {
"login": "davies-w",
"id": 6550854,
"node_id": "MDQ6VXNlcjY1NTA4NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6550854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davies-w",
"html_url": "https://github.com/davies-w",
"followers_url": "https://api.github.com/users/davies-w/followers",
"following_url": "https://api.github.com/users/davies-w/following{/other_user}",
"gists_url": "https://api.github.com/users/davies-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davies-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davies-w/subscriptions",
"organizations_url": "https://api.github.com/users/davies-w/orgs",
"repos_url": "https://api.github.com/users/davies-w/repos",
"events_url": "https://api.github.com/users/davies-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/davies-w/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29091). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | The link in evaluation was missing a hyphen between post and processing. I fixed this, for English only. Someone with the ability to do a global search/replace should fix the other languages (if indeed they have this issue).
# What does this PR do?
Fixes a broken link in the documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29091/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29091",
"html_url": "https://github.com/huggingface/transformers/pull/29091",
"diff_url": "https://github.com/huggingface/transformers/pull/29091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29091.patch",
"merged_at": 1708337758000
} |
https://api.github.com/repos/huggingface/transformers/issues/29090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29090/comments | https://api.github.com/repos/huggingface/transformers/issues/29090/events | https://github.com/huggingface/transformers/pull/29090 | 2,141,198,507 | PR_kwDOCUB6oc5nOdgh | 29,090 | Do not use pooling for squad conversions when thread == 1 | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Testing this on CI tool. Improvement seems to happen only for certain machines.
This significantly improves the time it takes for the qa pipeline test. For example, the bert torch tests went from ~90s to ~15s locally.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29090/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29090",
"html_url": "https://github.com/huggingface/transformers/pull/29090",
"diff_url": "https://github.com/huggingface/transformers/pull/29090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29090.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29089/comments | https://api.github.com/repos/huggingface/transformers/issues/29089/events | https://github.com/huggingface/transformers/issues/29089 | 2,141,162,788 | I_kwDOCUB6oc5_n40k | 29,089 | Caching image prototype embeddings for image-guided object detection using OWL-ViT | {
"login": "jakubhejhal",
"id": 97042178,
"node_id": "U_kgDOBci_Ag",
"avatar_url": "https://avatars.githubusercontent.com/u/97042178?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakubhejhal",
"html_url": "https://github.com/jakubhejhal",
"followers_url": "https://api.github.com/users/jakubhejhal/followers",
"following_url": "https://api.github.com/users/jakubhejhal/following{/other_user}",
"gists_url": "https://api.github.com/users/jakubhejhal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakubhejhal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakubhejhal/subscriptions",
"organizations_url": "https://api.github.com/users/jakubhejhal/orgs",
"repos_url": "https://api.github.com/users/jakubhejhal/repos",
"events_url": "https://api.github.com/users/jakubhejhal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakubhejhal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports."
] | 1,708 | 1,708 | 1,708 | NONE | null | ### Feature request
The [OWL-ViT](https://arxiv.org/abs/2205.06230) model currently supports image-guided one-shot object detection by using reference image embeddings as the input to the classification head instead of the text embedding. This is implemented by the [image_guided_detection](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/owlvit#transformers.OwlViTForObjectDetection.image_guided_detection) method.
There are two problems:
- it doesn't support passing multiple reference images as input
- the reference image is passed through the image encoder every time
In practice I'd like to use the model's `image_guided_detection` for inference on a larger dataset, and computing the reference-image query embedding anew for every target image is clearly wasteful, since the query embeddings do not depend on the target image.
1) Is there a way to cache the query image embeddings? (See the rough sketch after these questions.)
2) And is there a way to use multiple query images for one target image?
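A rough sketch of the kind of caching and pooling being requested. `embed_query_image` is a hypothetical stand-in for OWL-ViT's internal query-embedding computation (not a public transformers API), included only so the snippet runs:

```python
import torch
from functools import lru_cache

def embed_query_image(image_path: str) -> torch.Tensor:
    # Hypothetical: replace with however the query-image class embedding
    # is actually computed inside image_guided_detection.
    return torch.randn(768)

@lru_cache(maxsize=None)
def cached_query_embedding(image_path: str) -> torch.Tensor:
    # Query embeddings don't depend on the target image, so compute once.
    return embed_query_image(image_path)

def class_prototype(image_paths: tuple[str, ...]) -> torch.Tensor:
    # Few-shot extension: pool several cached reference embeddings into
    # one prototype (mean pooling is one simple heuristic).
    return torch.stack([cached_query_embedding(p) for p in image_paths]).mean(dim=0)
```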
### Motivation
In practice, one-shot learning is an extreme case of few-shot learning, and it's usually very hard or impossible to represent a whole class with only one reference image.
Therefore a natural extension is to use multiple prototypical images capturing the detected object in various situations, lighting conditions, etc.
But as of now, the running time of OWL-ViT scales linearly with the number of query images, which makes it impractical for real-world usage.
### Your contribution
I'm unfortunately not very experienced with Hugging Face, and I don't even know where to start if I tried to implement this myself. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29089/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29088/comments | https://api.github.com/repos/huggingface/transformers/issues/29088/events | https://github.com/huggingface/transformers/pull/29088 | 2,141,149,538 | PR_kwDOCUB6oc5nOT2h | 29,088 | Remove misleading model disclaimers in docs for gpt2 and gpt neo QA. | {
"login": "Whenning42",
"id": 8920171,
"node_id": "MDQ6VXNlcjg5MjAxNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8920171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Whenning42",
"html_url": "https://github.com/Whenning42",
"followers_url": "https://api.github.com/users/Whenning42/followers",
"following_url": "https://api.github.com/users/Whenning42/following{/other_user}",
"gists_url": "https://api.github.com/users/Whenning42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Whenning42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Whenning42/subscriptions",
"organizations_url": "https://api.github.com/users/Whenning42/orgs",
"repos_url": "https://api.github.com/users/Whenning42/repos",
"events_url": "https://api.github.com/users/Whenning42/events{/privacy}",
"received_events_url": "https://api.github.com/users/Whenning42/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @Whenning42, thanks for opening this PR! \r\n\r\nCould you share what the effect is of removing the `real_checkpoint` argument? Does the disclaimer disappear? "
] | 1,708 | 1,708 | null | NONE | null | The removed disclaimers suggest the real checkpoints aren't correct and that they need to be replaced by themselves.
GPT 2 disclaimer:
"This example uses a random model as the real ones are all very big. To get proper results, you should use gpt2 instead of gpt2. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call."
and
GPT Neo disclaimer:
"This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-neo-1.3B instead of EleutherAI/gpt-neo-1.3B. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call."
# What does this PR do?
Remove misleading model disclaimers in docs for gpt2 and gpt neo QA.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu or @MKhalusova for documentation review. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29088/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29088",
"html_url": "https://github.com/huggingface/transformers/pull/29088",
"diff_url": "https://github.com/huggingface/transformers/pull/29088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29088.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29087/comments | https://api.github.com/repos/huggingface/transformers/issues/29087/events | https://github.com/huggingface/transformers/issues/29087 | 2,141,128,881 | I_kwDOCUB6oc5_nwix | 29,087 | Mixtral inference breaks when `output_router_logits=True` | {
"login": "LeonardoEmili",
"id": 36575651,
"node_id": "MDQ6VXNlcjM2NTc1NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/36575651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonardoEmili",
"html_url": "https://github.com/LeonardoEmili",
"followers_url": "https://api.github.com/users/LeonardoEmili/followers",
"following_url": "https://api.github.com/users/LeonardoEmili/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonardoEmili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonardoEmili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonardoEmili/subscriptions",
"organizations_url": "https://api.github.com/users/LeonardoEmili/orgs",
"repos_url": "https://api.github.com/users/LeonardoEmili/repos",
"events_url": "https://api.github.com/users/LeonardoEmili/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonardoEmili/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"When running inference you should set `model .config.output_router_logits=False`",
"Thanks @ArthurZucker, I believe it is a bit hard to spot the correct behaviour from the [docs](https://huggingface.co/docs/transformers/main/model_doc/mixtral#transformers.MixtralModel.forward.output_router_logits) so I was wondering if it is always the case that inference requires turning off the config and if so maybe it should be enforced when `model.eval()` is called?",
"Actually this should be enforced when call `prepare_inputs_for_generation`! Would you like to open a PR for mixtral ? ",
"Sounds good, I'll happily take care of it @ArthurZucker.\r\n\r\nJust to make sure do you think it's better raising an assertion when mixtral is used in inference with that configuration or rather raising a warning and ignoring it (even if the user set it to True)? I believe at that stage the first option should be preferred and the second scenario should be handled earlier (maybe when setting the model in inference mode?).",
"No I think we should always set it, this is the expected api for `output_attention` for example. 🤗 "
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.15.0-1038-oracle-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, 8x A100 80GBs
- Using distributed or parallel set-up in script?: yes, using `device_map="auto"`
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The snippet
```python3
from transformers import AutoTokenizer, MixtralForCausalLM
import torch
model = MixtralForCausalLM.from_pretrained(<path_to_finetuned_Mixtral-8x7B-v0.1>)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(<path_to_finetuned_Mixtral-8x7B-v0.1>)
prompts = ['Pu', 'Av', 'Il', 'Please', 'access']
batch = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**batch, max_new_tokens=400, do_sample=True, top_p=0.9, temperature=0.1, min_length=None, use_cache=True, top_k=50,
bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id,
)
```
produces
```
Traceback (most recent call last):
File "/job_workspace/axolotl/scripts/custom_modules/checkpoint_selection/evaluate_model_checkpoint.py", line 283, in main
hypothesis = llm.batch_translate(batch["prompts"], batch["tl_names"])
File "/job_workspace/axolotl/scripts/custom_modules/checkpoint_selection/evaluate_model_checkpoint.py", line 140, in batch_translate
outputs = self._model.generate(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 1525, in generate
return self.sample(
File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2598, in sample
outputs = self(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/mixtral/modeling_mixtral.py", line 1392, in forward
aux_loss = load_balancing_loss_func(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/mixtral/modeling_mixtral.py", line 132, in load_balancing_loss_func
tokens_per_expert = torch.sum(expert_mask.float() * expert_attention_mask, dim=0) / torch.sum(
RuntimeError: The size of tensor a (160) must match the size of tensor b (0) at non-singleton dimension 0
```
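A hedged workaround sketch (not the proposed library fix), continuing the snippet above: disable the flag on the loaded config before calling `generate`, so the load-balancing-loss path is skipped entirely:

```python
# Assumption: output_router_logits=True was baked into the fine-tuned
# checkpoint's config during training, as described below.
model.config.output_router_logits = False

with torch.no_grad():
    outputs = model.generate(**batch, max_new_tokens=400, do_sample=True, top_p=0.9)
```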
**Important details**:
- the model is a QLoRA fine-tuned version of `Mixtral-8x7B-v0.1` trained with [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl); the weights differ but the shapes are the same
- the task is instruction following, where the model learns to translate the English prompt into Italian
- the issue only arises for some inputs, likely very short ones (see the snippet above)
### Expected behavior
- **Doubt/clarification**: it seems that Mixtral should not output `output_router_logits` at inference time (see the [official docs](https://huggingface.co/docs/transformers/main/model_doc/mixtral#transformers.MixtralModel.forward.output_router_logits)), and its usage should be limited to training (as described [here](https://152334h.github.io/blog/mixtral-vs-oss/)). I believe the flag was set during training and then stored in the checkpoint; disabling it in the config produces the expected results.
- **Proposal**: shall we always override this configuration to [False](https://github.com/huggingface/transformers/blob/161fe425c9c87af3b22b382f28239b0504d91d37/src/transformers/models/mixtral/modeling_mixtral.py#L1391) when `model.eval()` is called?
- **Expected outcome**: completions are consistently returned for variable prompts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29087/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29086/comments | https://api.github.com/repos/huggingface/transformers/issues/29086/events | https://github.com/huggingface/transformers/pull/29086 | 2,141,128,404 | PR_kwDOCUB6oc5nOPwa | 29,086 | 🌐 [i18n-KO] Translated generation_strategies.md to Korean | {
"login": "AI4Harmony",
"id": 160417616,
"node_id": "U_kgDOCY_HUA",
"avatar_url": "https://avatars.githubusercontent.com/u/160417616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI4Harmony",
"html_url": "https://github.com/AI4Harmony",
"followers_url": "https://api.github.com/users/AI4Harmony/followers",
"following_url": "https://api.github.com/users/AI4Harmony/following{/other_user}",
"gists_url": "https://api.github.com/users/AI4Harmony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI4Harmony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI4Harmony/subscriptions",
"organizations_url": "https://api.github.com/users/AI4Harmony/orgs",
"repos_url": "https://api.github.com/users/AI4Harmony/repos",
"events_url": "https://api.github.com/users/AI4Harmony/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI4Harmony/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29086). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
Translated the generation_strategies.md file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. [[lowercased-header]])
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review?
Following the guide, I would like to request a review from @ArthurZucker, @sgugger, and @eunseojo.
I am also sending the review request to Team PseudoLab, just in case: @0525hhgus, @kihoon71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29086/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29086",
"html_url": "https://github.com/huggingface/transformers/pull/29086",
"diff_url": "https://github.com/huggingface/transformers/pull/29086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29086.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29085/comments | https://api.github.com/repos/huggingface/transformers/issues/29085/events | https://github.com/huggingface/transformers/pull/29085 | 2,141,125,268 | PR_kwDOCUB6oc5nOPKB | 29,085 | [WIP] Update legacy Repository usage in `examples/pytorch/text-classification/run_glue_no_trainer.py` | {
"login": "Hvanderwilk",
"id": 15908112,
"node_id": "MDQ6VXNlcjE1OTA4MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15908112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hvanderwilk",
"html_url": "https://github.com/Hvanderwilk",
"followers_url": "https://api.github.com/users/Hvanderwilk/followers",
"following_url": "https://api.github.com/users/Hvanderwilk/following{/other_user}",
"gists_url": "https://api.github.com/users/Hvanderwilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hvanderwilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hvanderwilk/subscriptions",
"organizations_url": "https://api.github.com/users/Hvanderwilk/orgs",
"repos_url": "https://api.github.com/users/Hvanderwilk/repos",
"events_url": "https://api.github.com/users/Hvanderwilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hvanderwilk/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
The `Repository` usage in this example is marked for deprecation (see https://huggingface.co/docs/huggingface_hub/guides/upload#legacy-upload-files-with-git-lfs); this PR switches it to the newer recommended upload methods.
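A hedged sketch of the direction of the change (the repo id and paths below are illustrative, not the exact diff): the legacy `Repository(...).push_to_hub(...)` pattern is replaced by the HTTP-based `huggingface_hub` methods.

```python
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("username/run-glue-demo", exist_ok=True)  # hypothetical repo id
api.upload_folder(
    repo_id="username/run-glue-demo",
    folder_path="output_dir",  # the trainer's output directory
    commit_message="End of training",
)
```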
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29085",
"html_url": "https://github.com/huggingface/transformers/pull/29085",
"diff_url": "https://github.com/huggingface/transformers/pull/29085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29085.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29084/comments | https://api.github.com/repos/huggingface/transformers/issues/29084/events | https://github.com/huggingface/transformers/pull/29084 | 2,141,100,964 | PR_kwDOCUB6oc5nOKKF | 29,084 | [Mistral, Mixtral] Improve docs | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29084). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR improves the docs of Mistral and Mixtral by adding:
- an explanation of the difference between base models and instruction-tuned ones, including the newer [v2 checkpoint](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) of Mistral-7B
- use of chat templates
- quantization tips and tricks
- memory requirements
- resources such as the [Alignment Handbook](https://github.com/huggingface/alignment-handbook) by HF (a short usage sketch follows below)
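A hedged sketch of the chat-template usage the new docs will cover (the checkpoint id matches the instruct model linked above; the generation settings are illustrative, not prescriptive):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
 | {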
"url": "https://api.github.com/repos/huggingface/transformers/issues/29084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29084/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29084",
"html_url": "https://github.com/huggingface/transformers/pull/29084",
"diff_url": "https://github.com/huggingface/transformers/pull/29084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29084.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29083/comments | https://api.github.com/repos/huggingface/transformers/issues/29083/events | https://github.com/huggingface/transformers/pull/29083 | 2,141,091,613 | PR_kwDOCUB6oc5nOIOL | 29,083 | Allow repo_id--module.classname config definition even if loading from path | {
"login": "rl337",
"id": 387895,
"node_id": "MDQ6VXNlcjM4Nzg5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/387895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rl337",
"html_url": "https://github.com/rl337",
"followers_url": "https://api.github.com/users/rl337/followers",
"following_url": "https://api.github.com/users/rl337/following{/other_user}",
"gists_url": "https://api.github.com/users/rl337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rl337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rl337/subscriptions",
"organizations_url": "https://api.github.com/users/rl337/orgs",
"repos_url": "https://api.github.com/users/rl337/repos",
"events_url": "https://api.github.com/users/rl337/events{/privacy}",
"received_events_url": "https://api.github.com/users/rl337/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"c @Rocketknight1 "
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | When a model lives in a path that is not exactly the repo_id relative to the current directory, and its config has an AutoConfig entry of the form model_id--module.classname, you cannot load the model using its path: resolving module.classname ends up being relative to the repo_id defined in the config.
Let me lay this out.
```
path/to/large/storage/
models/
model_a/
config.json
tokenizer_config.json
model_config.py
model_impl.py
model_b/
...
path/to/sourcecode/
my_module/
__init__.py
__main__.py
```
What I want to do is load my model with `AutoModel.from_pretrained()` from my `__main__.py`, so I pass `path/to/large/storage/models/model_a` as `model_id_or_path` with `local_files_only=True`, because I only want to use the specific model that I have on my filesystem.
When you try to do this, you end up with an exception that looks like this:
```
Could not locate the model_config.py inside rl337/cicero-ffn.
Traceback (most recent call last):
File "/Users/rlee/dev/singularity/.venv/lib/python3.9/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/Users/rlee/dev/singularity/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/rlee/dev/singularity/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1363, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
```
Here, even though I'm trying to load the model from a path, the library tries to resolve the code relative to the repo_id. This is problematic because if I made changes to the model code and I'm not restricting downloads, I may download and use out-of-date code from the hub. If I don't notice that the code was downloaded, it would be very confusing to debug.
The workaround is to edit the config.json to remove the repo_id-- part of the definition, which is annoying because if you later want to push changes back to the hub, you need to remember to add the repo_id-- back in.
I think the root problem is that when model_id_or_path is specified as a path, it effectively acts only as a path to the config.json: the directory the config.json is loaded from is not treated as a self-contained model. Instead, things defined in the config.json are resolved relative to the current directory or to the model hub/cache. Concretely, it requires a directory structure that looks more like this:
```
path/to/sourcecode/
my_module/
__init__.py
__main__.py
username/model_id/
config.json
tokenizer_config.json
model_config.py
model_impl.py
```
When I run my `__main__.py` from the directory designated by `path/to/sourcecode`, everything seems to work because resolution of the model_id `username/model_id` happens relative to the current directory.
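To restate the failing scenario concretely (paths are from the first tree above; the repo id comes from the traceback), a hedged sketch of the call:

```python
from transformers import AutoModel

# config.json's auto_map points at "rl337/cicero-ffn--model_config.<Class>",
# so module resolution chases the repo_id instead of the local directory.
model = AutoModel.from_pretrained(
    "path/to/large/storage/models/model_a",
    trust_remote_code=True,
    local_files_only=True,  # fails: the code file is looked up in the hub cache
)
```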
# What does this PR do?
This PR adds a check for whether the given repo id or path is a local path containing the module file to load. If it is, the code is loaded from that path instead of being resolved via the repo_id; i.e., `module.classname` is loaded from the path rather than the repo_id when dynamically loading classes.
At this point, the config.json has already been loaded from the path, so the path is likely safe to load code from, especially since trust_remote_code must be true to reach this point.
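A hedged sketch of the resolution logic being added (the helper name and structure are illustrative, not the actual diff):

```python
import os

def pick_code_source(class_reference: str, repo_id_or_path: str) -> str:
    """Hypothetical helper: decide where '<repo_id>--module.classname' code loads from."""
    repo_id, module_and_class = class_reference.split("--")
    module_file = module_and_class.split(".")[0] + ".py"
    if os.path.isdir(repo_id_or_path) and os.path.isfile(os.path.join(repo_id_or_path, module_file)):
        # The config.json already came from this directory, and trust_remote_code
        # is necessarily True here, so prefer the local code.
        return repo_id_or_path
    return repo_id  # fall back to hub-style resolution against the embedded repo_id
```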
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29083",
"html_url": "https://github.com/huggingface/transformers/pull/29083",
"diff_url": "https://github.com/huggingface/transformers/pull/29083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29083.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29082/comments | https://api.github.com/repos/huggingface/transformers/issues/29082/events | https://github.com/huggingface/transformers/pull/29082 | 2,140,990,281 | PR_kwDOCUB6oc5nN0GE | 29,082 | FEAT [`Trainer` / `bnb`]: Add RMSProp from `bitsandbytes` to HF `Trainer` | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
As requested by the community, this PR adds support for the bnb RMSProp optimizers to the HF Trainer!
`RMSProp` has existed in bitsandbytes since its first commit: https://github.com/TimDettmers/bitsandbytes/commit/7439924891496025edf60c9da6a782f362a50c70#diff-8384af03566f84c3055f3fee7b1516696a1546d130d63935714af17781d6202b, so this PR is compatible with all versions of bnb.
I also added tests, which all pass on my machine.
Fixes: https://github.com/huggingface/trl/issues/1336
cc @amyeroberts and @Titus-von-Koeller FYI !
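A hedged usage sketch: once merged, the optimizer should be selectable through `TrainingArguments`. The exact `optim` string values below follow the naming convention of the existing bnb optimizers and are an assumption on my part:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    # assumed names, mirroring e.g. "adamw_bnb_8bit":
    optim="rmsprop_bnb_8bit",  # or "rmsprop_bnb" / "rmsprop_bnb_32bit"
)
```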
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29082/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29082",
"html_url": "https://github.com/huggingface/transformers/pull/29082",
"diff_url": "https://github.com/huggingface/transformers/pull/29082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29082.patch",
"merged_at": 1708393382000
} |
https://api.github.com/repos/huggingface/transformers/issues/29081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29081/comments | https://api.github.com/repos/huggingface/transformers/issues/29081/events | https://github.com/huggingface/transformers/pull/29081 | 2,140,736,966 | PR_kwDOCUB6oc5nM-Px | 29,081 | token healing impl | {
"login": "Ayenem",
"id": 50707385,
"node_id": "MDQ6VXNlcjUwNzA3Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/50707385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ayenem",
"html_url": "https://github.com/Ayenem",
"followers_url": "https://api.github.com/users/Ayenem/followers",
"following_url": "https://api.github.com/users/Ayenem/following{/other_user}",
"gists_url": "https://api.github.com/users/Ayenem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ayenem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ayenem/subscriptions",
"organizations_url": "https://api.github.com/users/Ayenem/orgs",
"repos_url": "https://api.github.com/users/Ayenem/repos",
"events_url": "https://api.github.com/users/Ayenem/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ayenem/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI is failing due to an automatic update in the pytest package, we are tracking it. Will let you know when it is sorted -- it will need a rebase ",
"> CI is failing due to an automatic update in the pytest package, we are tracking it. Will let you know when it is sorted -- it will need a rebase\r\n\r\nThanks for the follow-up!",
"@Ayenem `main` is fixed, rebasing should make CI green except if there are PR-specific issues :)",
"![image](https://github.com/huggingface/transformers/assets/50707385/96063985-5688-4225-8e88-f6a8d55a84c7)\r\nIn case it's relevant, here are (some) listed remotes with `git branch -r`:\r\n```\r\n origin/HEAD -> origin/main\r\n origin/heal_tokens\r\n origin/main\r\n origin/token_healing\r\n upstream/'delete-delete-doc'\r\n upstream/BritneyMuller-housekeeping-patch\r\n upstream/_dummy_fix_weight_only_usage\r\n upstream/_dummy_fix_weight_only_usage_2\r\n upstream/add-chat-glm\r\n upstream/add-deci-lm\r\n upstream/add-encode-special-tokens\r\n upstream/add-flash-decoding\r\n upstream/add-mamba\r\n upstream/add-prefix-space\r\n upstream/add-quantization-workflow\r\n ```"
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
Token healing rectifies the token boundary bias in greedy tokenization. It does this by trimming and regrowing the prompt to better align with the model's tokenizer, thus enhancing generation quality. The improvement is clearest with completion models.
Token boundary bias is a silent performance killer that doesn't seem very well known. It has a clear impact on completion quality.
A more thorough explanation of the problem: [The Art of Prompt Design: Prompt Boundaries and Token Healing | by Scott Lundberg](https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38).
### Motivation
Given a completion prompt with a partial URL ending with `:`, the model might have seen the expected completion `://` as a _single_ token in training. However, the prompt's tail token `:` tells it that the next token is not `//`, and so it generates a wrong completion. Such errors compound in auto-regressive language models.
Fixes [#28346](https://github.com/huggingface/transformers/issues/28346)
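A hedged illustration of the bias itself (not this PR's implementation), using GPT-2's BPE:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# "://" typically merges into a single BPE token, so the two encodings
# below usually differ in their tail token.
print(tokenizer.encode("http://"))
print(tokenizer.encode("http:"))
# With a prompt ending in ":", greedy decoding is biased away from "//";
# token healing trims the trailing ":" and lets the model regrow it.
```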
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
- @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29081/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29081",
"html_url": "https://github.com/huggingface/transformers/pull/29081",
"diff_url": "https://github.com/huggingface/transformers/pull/29081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29081.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29080/comments | https://api.github.com/repos/huggingface/transformers/issues/29080/events | https://github.com/huggingface/transformers/issues/29080 | 2,140,668,237 | I_kwDOCUB6oc5_mAFN | 29,080 | repetition_penalty not being applied | {
"login": "adenhaus",
"id": 57678819,
"node_id": "MDQ6VXNlcjU3Njc4ODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/57678819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adenhaus",
"html_url": "https://github.com/adenhaus",
"followers_url": "https://api.github.com/users/adenhaus/followers",
"following_url": "https://api.github.com/users/adenhaus/following{/other_user}",
"gists_url": "https://api.github.com/users/adenhaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adenhaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adenhaus/subscriptions",
"organizations_url": "https://api.github.com/users/adenhaus/orgs",
"repos_url": "https://api.github.com/users/adenhaus/repos",
"events_url": "https://api.github.com/users/adenhaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/adenhaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just noticed the same issue. I think it would be a useful feature.",
"`repetition_penalty` is working on my end:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\ninputs = tokenizer([\"The quick brown\"], return_tensors=\"pt\")\r\ngen_out = model.generate(**inputs, do_sample=False, max_new_tokens=100)\r\ndecoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)\r\ngen_out_repetition_penalty = model.generate(**inputs, do_sample=False, max_new_tokens=100, repetition_penalty=1.5)\r\ndecoded_repetition_penalty = tokenizer.batch_decode(gen_out_repetition_penalty, skip_special_tokens=True)\r\nprint(decoded == decoded_repetition_penalty)\r\n# False\r\n```\r\n\r\nIf you're still seeing this issue, I will need a reproducer to figure out what's wrong 🤗 ",
"It’s related to flax models only.\r\nNot sure if that’s what @adenhaus meant.",
"@borisdayma no I noticed it on an mt5 finetuned model, not a flax model. Happy to provide code to reproduce but the day after I posted this it was working again.",
"> `repetition_penalty` is working on my end:\n> \n> \n> \n> ```py\n> \n> from transformers import AutoModelForCausalLM, AutoTokenizer\n> \n> \n> \n> tokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\n> \n> model = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\n> \n> \n> \n> inputs = tokenizer([\"The quick brown\"], return_tensors=\"pt\")\n> \n> gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=100)\n> \n> decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)\n> \n> gen_out_repetition_penalty = model.generate(**inputs, do_sample=False, max_new_tokens=100, repetition_penalty=1.5)\n> \n> decoded_repetition_penalty = tokenizer.batch_decode(gen_out_repetition_penalty, skip_special_tokens=True)\n> \n> print(decoded == decoded_repetition_penalty)\n> \n> # False\n> \n> ```\n> \n> \n> \n> If you're still seeing this issue, I will need a reproducer to figure out what's wrong 🤗 \n\n@gante i set repetition_penalty in the generation_config.json on the hub, didn't pass it to model.generate. Not sure if that makes a difference",
"Then maybe I should create a separate issue for flax models which don't seem to support this option.",
"@borisdayma correct, the option is not supported on flax 🤗 (if you open the issue, please mention that unsupported flags on a given framework should raise a warning, we are not raising them atm)",
"@adenhaus \r\n\r\n> i set repetition_penalty in the generation_config.json on the hub, didn't pass it to model.generate. Not sure if that makes a difference\r\n\r\nThis should be fine 🤗 \r\n\r\n> I noticed it on an mt5 finetuned model\r\n\r\nThis may be the cause! mt5 is an encoder-decoder model, have you tried the `encoder_repetition_penalty` flag instead? Or maybe both `encoder_repetition_penalty` and `repetition_penalty`, depending on your use case (the former acts on the input text of mt5, the later acts on the generated text).",
"@gante I want it to act on the generated text. And now I am noticing the issue again. I change the `repetition_penalty` in the `generation_config` file on the hub from 1.0 to 1.9 but see no difference in the outputs.\r\n\r\nHere are steps to reproduce:\r\n\r\nI am using [this](https://huggingface.co/adenhaus/mt5-small-eng-tata-blueprints/blob/main/generation_config.json) model from the hub.\r\n\r\nWith this code:\r\n\r\n```\r\nimport torch\r\nimport transformers\r\nimport pandas as pd\r\n\r\nfrom transformers import (\r\n AutoTokenizer,\r\n AutoModelForSeq2SeqLM,\r\n)\r\n\r\n# Model\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\nmodel_name = 'adenhaus/mt5-small-eng-tata-blueprints'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)\r\n\r\ndef split_verbalisation(text):\r\n split_text = text.split(\"Verbalisation: \")\r\n\r\n if len(split_text) > 1:\r\n return split_text[1]\r\n else:\r\n return text\r\n\r\n\r\n# Predict function\r\ndef generate_verbalisation(model, tokenizer, example):\r\n input_ids = tokenizer(example)[\"input_ids\"]\r\n input_ids = torch.LongTensor(input_ids).view(1, -1).to(device)\r\n generated_ids = model.generate(input_ids, max_new_tokens=200)\r\n prediction = tokenizer.decode(generated_ids[0], skip_special_tokens=True)\r\n\r\n prediction = split_verbalisation(prediction)\r\n\r\n print(prediction)\r\n return prediction\r\n\r\n# Load test set\r\ndf = pd.read_csv(\"/csv_path\", sep='\\t')\r\n\r\n# Generate output dataset for evaluation\r\nout_df = pd.DataFrame(columns=['preds', 'refs'])\r\nout_df['refs'] = df['target']\r\nout_df['preds'] = df['linearized_input'].apply(lambda x: generate_verbalisation(model, tokenizer, x))\r\nout_df['linearized_input'] = df['linearized_input']\r\n\r\nout_df.to_csv('small-eng-blueprints-preds.csv', sep='\\t', index=False)\r\n```\r\n\r\nAnd here is a sample of the (tab separated) csv I'm loading:\r\n\r\n\r\n```\r\nlinearized_input\ttarget\r\nMedian ages at first sex, first marriage, and birth of first child among men age 30-34 by residence | Age (years) | (First sex, Urban, 18.4) (First marriage, Urban, 25.6) (Birth of first child, Urban, 26.9) (First sex, Rural, 18.5) (First marriage, Rural, 24.1) (Birth of first child, Rural, 25.1) (First sex, Urban, 21.3) (First marriage, Urban, 27.7) (Birth of first child, Urban, 28.2) (First sex, Rural, 20.8) (First marriage, Rural, 25) (Birth of first child, Rural, 25.6) (First sex, Urban, 21.5) (First marriage, Urban, 27.4) (Birth of first child, Urban, 29.2) (First sex, Rural, 22.1) (First marriage, Rural, 25.5) (Birth of first child, Rural, 26.8) (First sex, Urban, 21.7) (First marriage, Urban, 27) (Birth of first child, Urban, 29.4) (First sex, Rural, 21) (First marriage, Rural, 22.8) (Birth of first child, Rural, 24.9) (First sex, Urban, 22.6) (First marriage, Urban, 27.7) (Birth of first child, Urban, 27.8) (First sex, Rural, 22.5) (First marriage, Rural, 25.1) (Birth of first child, Rural, 26) (First sex, Urban, 18.6) (First marriage, Urban, 25.5) (Birth of first child, Urban, 26.4) (First sex, Rural, 18.4) (First marriage, Rural, 22.5) (Birth of first child, Rural, 23.6) (First sex, Urban, 25.7) (First marriage, Urban, 26.1) (Birth of first child, Urban, 28.3) (First sex, Rural, 23.8) (First marriage, Rural, 23.8) (Birth of first child, Rural, 26.2) (First sex, Urban, 20.9) (First marriage, Urban, 22.2) (Birth of first child, Urban, 24.5) (First sex, Rural, 20.1) (First marriage, Rural, 20.7) 
(Birth of first child, Rural, 23.7)\tA number of the differences between rural and urban areas are common across most of the countries.\r\nFamily formation trajectories among men age 30-34, Benin | (Timing of first sex, Earlier, 0.32) (Timing of first marriage, Earlier, 0.29) (Timing of birth of first child, Earlier, 0.3) (Timing of first sex, Earlier, 278) (Timing of first marriage, Earlier, 256) (Timing of birth of first child, Earlier, 266) (Timing of first sex, Typical, 0.5) (Timing of first marriage, Typical, 0.38) (Timing of birth of first child, Typical, 0.39) (Timing of first sex, Typical, 443) (Timing of first marriage, Typical, 335) (Timing of birth of first child, Typical, 344) (Timing of first sex, Later, 0.18) (Timing of first marriage, Later, 0.33) (Timing of birth of first child, Later, 0.31) (Timing of first sex, Later, 160) (Timing of first marriage, Later, 290) (Timing of birth of first child, Later, 271)\tIn the first Sankey diagram for Benin (Figure 6), 32% of men experienced first sexual intercourse earlier-than-typical, 50% at typical timing, and 18% later-than-typical.\r\nFamily formation trajectories among men age 30-34, Benin | (Timing of first sex, Earlier, 0.32) (To Earlier, Earlier, 0.12) (To Typical, Earlier, 0.1) (To Later, Earlier, 0.1) (Timing of first marriage, Earlier, 0.29) (To Earlier, Earlier, 0.09) (Timing of birth of first child, Earlier, 0.3) (Timing of first sex, Earlier, 278) (Timing of first marriage, Earlier, 256) (Timing of birth of first child, Earlier, 266) (Timing of first sex, Typical, 0.5) (To Earlier, Typical, 0.15) (To Typical, Typical, 0.21) (To Later, Typical, 0.15) (Timing of first marriage, Typical, 0.38) (To Earlier, Typical, 0.11) (To Typical, Typical, 0.15) (To Later, Typical, 0.11) (Timing of birth of first child, Typical, 0.39) (Timing of first sex, Typical, 443) (Timing of first marriage, Typical, 335) (Timing of birth of first child, Typical, 344) (Timing of first sex, Later, 0.18) (To Earlier, Later, 0.02) (To Typical, Later, 0.07) (To Later, Later, 0.09) (Timing of first marriage, Later, 0.33) (To Typical, Later, 0.06) (To Later, Later, 0.08) (Timing of birth of first child, Later, 0.31) (Timing of first sex, Later, 160) (Timing of first marriage, Later, 290) (Timing of birth of first child, Later, 271)\tIn the first Sankey diagram for Benin (Figure 6), 32% of men experienced first sexual intercourse earlier-than-typical, 50% at typical timing, and 18% later-than-typical.\r\nYouth empowerment (pooled terciles) among women age 15-29 by country | (High, Mali, 13.2) (Medium, Mali, 17.9) (Low, Mali, 68.9) (High, Ethiopia, 14.7) (Medium, Ethiopia, 22.7) (Low, Ethiopia, 62.6) (High, Malawi, 15.8) (Medium, Malawi, 55) (Low, Malawi, 29.2) (High, Uganda, 25.4) (Medium, Uganda, 32.1) (Low, Uganda, 42.4) (High, Zambia, 27.4) (Medium, Zambia, 28.3) (Low, Zambia, 44.3) (High, Nigeria, 34.2) (Medium, Nigeria, 35.5) (Low, Nigeria, 30.3) (High, Haiti, 42.8) (Medium, Haiti, 40.1) (Low, Haiti, 17.1) (High, Nepal, 43.5) (Medium, Nepal, 37) (Low, Nepal, 19.5) (High, Senegal, 43.9) (Medium, Senegal, 21.5) (Low, Senegal, 34.6) (High, Philippines, 80.6) (Medium, Philippines, 13.7) (Low, Philippines, 5.7)\tA mere 13% of young women are in the high empowerment tercile in Mali.\r\nOwnership of House and Land | Percent of women and men age 15-49 who: | (Women, Own a home alone or jointly, 18) (Men, Own a home alone or jointly, 40) (Women, Own land alone or jointly, 15) (Men, Own land alone or jointly, 34)\tOnly 18% of women own a house, either alone or 
jointly, and only 15% own land.\r\nAttitudes toward Wife Beating | Percent of women and men age15-49 who believe that a husband is justified in beating his wife for the following reasons: | (Women, Wife burns the food, 14) (Men, Wife burns the food, 8) (Women, Wife argues with him, 21) (Men, Wife argues with him, 13) (Women, Wife goes out without telling him, 25) (Men, Wife goes out without telling him, 13) (Women, Wife neglects children, 25) (Men, Wife neglects children, 14) (Women, Wife refuses to have sex with him, 19) (Men, Wife refuses to have sex with him, 11) (Women, Any of these, 35) (Men, Any of these, 25)\tMore than one-third of women (35%) and one-quarter of men agree that a husband is justified in beating his wife for at least one of these reasons: if she burns the food, argues with him, goes out without telling him, neglects the children, or refuses to have sex with him.\r\nEmployment Characteristics among Working Women | Percent | (All year, 62) (Seasona, 32) (Occasional, 6) (Employed by family member, 9) (Employed by non-family member, 30) (Self-employed, 62) (Cash only, 65) (Cash and in-kind, 11) (In-kind only, 2) (Not paid, 23)\tthe percent distribution of employed women age 15-49 by type of earnings and employer characteristics, according to type of employment (agricultural or non-agricultural).\r\nTrends in Total Fertility Rate, Kenya 1975-2008* | Total Fertility Rate | (1975-78, 8.1) (1984-88, 6.7) (1990-92, 5.4) (1995-97, 4.7) (2000-02-01 00:00:00, 4.9) (2006-08-01 00:00:00, 4.6)\tThe data indicate that the TFR declined during the 1980s and 1990s, changing from a high of 8.1 children per woman in the late 1970s to 6.7 in the late 1980s, and dropping to 4.7 during the last half of the 1990s.\r\nTrends in Contraceptive Use, Kenya 1978-2008 (percentage of currently married women using any method) | Percent | (1978, 7) (1984, 17) (1989, 27) (1993, 33) (1998, 39) (2003, 39) (2008-09-01 00:00:00, 46)\tthere has been a substantial increase in contraceptive use since the late 1970s, from 7 percent of married women in 1978 to 46 percent in 2008-09.\r\nPercentage of Currently Married Women Whose Husbands Have At Least One Other Wife | Percent | (Urban, 7) (Rural, 15) (Nairobi, 2) (Central, 3) (Coast, 15) (Eastern, 5) (Nyanza, 21) (Rift Valley, 15) (Western, 23) (North Eastern, 36) (No education, 33) (Primary incomplete, 17) (Primary complete, 8) (Secondary+, 8)\tThirteen percent of currently married women live in polygynous unions (i.e., they have one or more co-wives).\r\nPlanning Status of Births | Percent | (Wanted then, 0.57) (Unwanted, 0.17) (Wanted later, 0.26)\t17 percent of births in Kenya are unwanted, and 26 percent are mistimed (wanted later).\r\nTrends in Receipt of Antenatal Care from a Skilled Medical Provider, Kenya 2003-2008 | Percentage of women with live birth in the past 5 years | (2003, 88) (2008-09-01 00:00:00, 92)\tThe 2008-09 data indicate a rise since 2003 in medical antenatal care coverage\r\nPercentage of Children Age 12-23 Months with Specific Vaccinations | Percent | (BCG, 96) (DPT 1 - HepB - Hib, 96) (DPT 2 - HepB - Hib, 93) (DPT 3 - HepB - Hib, 86) (Polio 1, 96) (Polio 2, 94) (Polio 3, 88) (Measles, 85) (All basic vaccinations, 77) (No vaccinations, 3)\t77 percent of children age 12-23 months are fully vaccinated at any time before the survey.\r\nInfant and Young Child Feeding (IYCF) Practices | Percent | (Breastfed, 44) (Non-breastfed, 16) (Total, 39)\tBreastfed children are much more likely to be fed in accordance with IYCF practices than 
non-breastfed children\r\nNumber of Decisions in Which Women Participate | Percent of women | (Number of decisions, Percent of women) (0, 3) (1, 6) (2, 9) (3, 13) (4, 19) (5, 50)\tthe distribution of currently married women according to the number of decisions in which they participate.\r\nInfant mortality rate in the 10 years preceding the survey by selected demographic characteristics | Deaths per 1,000 live births | (<20, 95) (20-29, 71) (30-39, 73) (40-49, 100) (1, 83) (2021-02-03 00:00:00, 65) (2021-04-06 00:00:00, 72) (7+, 103) (<2 years, 122) (2 years, 67) (3 years, 46) (4+ years, 45) (Male, 84) (Female, 70)\tThis is true for all categories of mortality. With the exception of mothers in the 40-49 age group, infant mortality is higher for mothers under age 20 than for older mothers.\r\nMother’s duration of stay in the health facility after giving birth | Percentage | (<6 hours, Vaginal birth, 29) (6-11 hours, Vaginal birth, 14) (12-23 hours, Vaginal birth, 6) (1-2 days, Vaginal birth, 41) (3+ days, Vaginal birth, 10) (<6 hours, Caesarean birth, 7) (6-11 hours, Caesarean birth, 1) (12-23 hours, Caesarean birth, 1) (1-2 days, Caesarean birth, 9) (3+ days, Caesarean birth, 81)\tthe percent distribution of women who gave birth in a health facility in the five years preceding the survey by duration of stay in the facility and type of delivery.\r\nPercentage of children age 12-23 months with specific vaccinations | Percentage vaccinated at any time before the survey | (BCG, 0.51) (DPT 1, 0.51) (DPT 2, 0.46) (DPT 3, 0.38) (Polio 0, 0.47) (Polio 1, 77) (Polio 2, 70) (Polio 3, 54) (Measles, 42) (All bascic vaccinations, 25) (No vaccinations, 21)\tthe percentage of children age 12-23 months who have received the various vaccinations by source of information (vaccination card or mother’s report).\r\nTrends in vaccination coverage among children age 12-23 months, 2003-2013 | Percentage | (2003 NDHS, BCG, 48) (2008 NDHS, BCG, 50) (2013 NDHS, BCG, 51) (2003 NDHS, DPT 3, 21) (2008 NDHS, DPT 3, 35) (2013 NDHS, DPT 3, 38) (2003 NDHS, Polio 3, 29) (2008 NDHS, Polio 3, 39) (2013 NDHS, Polio 3, 54) (2003 NDHS, Measles, 36) (2008 NDHS, Measles, 41) (2013 NDHS, Measles, 42) (2003 NDHS, All basic vaccinations, 13) (2008 NDHS, All basic vaccinations, 23) (2013 NDHS, All basic vaccinations, 25) (2003 NDHS, No vaccinations, 27) (2008 NDHS, No vaccinations, 29) (2013 NDHS, No vaccinations, 21)\tThe proportion of children who received none of the six basic vaccinations declined marginally by 6 percentage points from the 2003 level\r\nIYCF indicators on breastfeeding status | Percentage of children | (Exclusive breastfeeding under age 6 months, 17) (Exclusive breastfeeding at age 4-5 months, 10) (Continued breastfeeding at 1 year, 84) (Introduction of solid, semisolid, or soft foods (6-8 months), 67) (Continued breastfeeding at 2 years, 35) (Age-appropriate breastfeeding (0-23 months), 52) (Predominant breastfeeding (0-5 months), 69) (Bottle feeding (0-23 months), 13)\tthe 2013 NDHS results for key IYCF breastfeeding practices among children under age 2 who are living with their mothers.\r\nOwnership of, access to, and use of ITNs | Percent | (Percent of households with at least one ITN, 50) (Percent of households with at least one ITN for every two persons who stayed in the household the night before the interview, 22) (Percent of the household population with access to an ITN within their household, 36) (Percent of the household population who slept under an ITN, 13)\t50 percent of households have at least one 
ITN.\r\nTrends in age of first sexual intercourse | Percent | (2008 NDHS, Percentage of women 15-19 who had sexual intercourse before exact age 15, 15) (2013 NDHS, Percentage of women 15-19 who had sexual intercourse before exact age 15, 16) (2008 NDHS, Percentage of men 15-19 who had sexual intercourse before exact age 15, 6) (2013 NDHS, Percentage of men 15-19 who had sexual intercourse before exact age 15, 3) (2008 NDHS, Percentage of women 18-19 who had sexual intercourse before exact age 18, 53) (2013 NDHS, Percentage of women 18-19 who had sexual intercourse before exact age 18, 53) (2008 NDHS, Percentage of men 18-19 who had sexual intercourse before exact age 18, 26) (2013 NDHS, Percentage of men 18-19 who had sexual intercourse before exact age 18, 21)\tAmong young women, there was practically no change in the proportion who had sexual intercourse before age 15 or age 18 in the five-year period between the two surveys.\r\nCurrent use by method | (Not currently using) (Any traditional method) (Other modern method) (Female sterilization) (Injectable) (Pill) (IUD) (0.415) (0.016) (0.011) (0.012) (0.085) (0.16) (0.301)\tThe 2014 EDHS findings revealed that 59 percent of currently married women in Egypt are currently using a contraceptive method.\r\nTreatment practices among children ill with diarrhea | Percent | (Health provider consulted) (Given oral rehydration salt/home solution) (Given antibiotic) (55) (30) (37)\tTable 11.11 shows that, for the majority of children ill with diarrhea, feeding practices did not conform to the recommended practices.\r\n```",
"@gante was having this issue on my cluster. Now not seeing the issue in a colab notebook. So weird. Maybe a caching thing?"
] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.1 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
### Who can help?
@gante @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Download a model and run inference, then change `repetition_penalty` (in my case via the `generation_config.json` on the Hub). You will see that the output does not change.
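A minimal comparison sketch (distilgpt2 stands in for the reporter's fine-tuned mT5 checkpoint, which is an assumption here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
inputs = tokenizer(["The quick brown"], return_tensors="pt")

base = model.generate(**inputs, do_sample=False, max_new_tokens=50)
penalized = model.generate(**inputs, do_sample=False, max_new_tokens=50, repetition_penalty=1.5)

# If the penalty is applied, the two decoded strings should differ.
print(tokenizer.batch_decode(base) == tokenizer.batch_decode(penalized))
```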
### Expected behavior
The output should change when `repetition_penalty` is changed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29080/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29079/comments | https://api.github.com/repos/huggingface/transformers/issues/29079/events | https://github.com/huggingface/transformers/pull/29079 | 2,140,536,642 | PR_kwDOCUB6oc5nMT9Q | 29,079 | Quantization support for CUDA graph generation. | {
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @younesbelkada "
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
As of now, this PR fixes a small problem preventing one from using CUDA graph generation from #28937 with quantized models.
In the long run, it would be great to have compiled generation actually working for GPTQ/AQLM/other quantization methods.
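For context, a rough sketch of the usage this unblocks, following the static-cache + `torch.compile` pattern from #28937 (the AQLM checkpoint id is illustrative, and the exact cache API was still settling at the time):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf"  # illustrative AQLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

# Static KV cache + compiled forward is what enables CUDA-graph generation
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```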
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29079/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29079",
"html_url": "https://github.com/huggingface/transformers/pull/29079",
"diff_url": "https://github.com/huggingface/transformers/pull/29079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29079.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29078/comments | https://api.github.com/repos/huggingface/transformers/issues/29078/events | https://github.com/huggingface/transformers/issues/29078 | 2,140,402,242 | I_kwDOCUB6oc5_k_JC | 29,078 | Error while using load_best_model_at_end with LoRA adapters inside Trainer and SFTTrainer | {
"login": "deshwalmahesh",
"id": 50293852,
"node_id": "MDQ6VXNlcjUwMjkzODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/50293852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deshwalmahesh",
"html_url": "https://github.com/deshwalmahesh",
"followers_url": "https://api.github.com/users/deshwalmahesh/followers",
"following_url": "https://api.github.com/users/deshwalmahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/deshwalmahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deshwalmahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deshwalmahesh/subscriptions",
"organizations_url": "https://api.github.com/users/deshwalmahesh/orgs",
"repos_url": "https://api.github.com/users/deshwalmahesh/repos",
"events_url": "https://api.github.com/users/deshwalmahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/deshwalmahesh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @younesbelkada ",
"Hi @deshwalmahesh \r\nThanks for the issue! \r\nJust to confirm, what PEFT version do you have ? Can oyou try out with the latest PEFT ? `pip install -U peft`"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
I'm using version `4.37`, and when you use any of the models such as `AutoModelForCausalLM` or `AutoModelForSequenceClassification` together with LoRA adapters, you get an error after training finishes. I used `load_best_model_at_end` in `TrainingArguments` with both of the above-mentioned model classes, with both `Trainer` and `SFTTrainer`. I tried it with both classification and SFT.
### Who can help?
@muellerz @pacman100 @ArthurZucker @younes
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Model:
```python
model = PhiForSequenceClassification.from_pretrained(model_name, num_labels = 1, trust_remote_code=True, torch_dtype=torch.bfloat16)
model.config.pad_token_id = model.config.eos_token_id
model.gradient_checkpointing_enable() #gradient checkpointing to save memory
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)
lora_config = LoraConfig(
r=256,
lora_alpha=512,
    target_modules=['q_proj', 'k_proj', 'v_proj', 'dense', 'fc1', 'fc2'],  # print(model) shows the module names to target
bias="none",
lora_dropout=0.05,
task_type=TaskType.SEQ_CLS)
model = get_peft_model(model, lora_config) # LORA
```
2. Args:
```python
training_args = TrainingArguments(
    output_dir="./output",            # required argument
    evaluation_strategy="epoch",      # must match save_strategy for load_best_model_at_end
    save_strategy="epoch",
    metric_for_best_model="eval_loss",
    load_best_model_at_end=True,      # True is what triggers the error reported below
)
trainer = Trainer(
model = model,
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_val_dataset,
args=training_args,
data_collator = DataCollatorWithPadding(tokenizer=tokenizer),
compute_metrics = compute_metrics_for_regression,
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
)
trainer.train()
```
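As a point of comparison, a hedged sketch of manually reloading the best adapter after training instead of relying on `load_best_model_at_end` (standard PEFT API assumed; the checkpoint path comes from the trainer state):
```python
from peft import PeftModel

# Reload the best adapter checkpoint by hand once training has finished
best_ckpt = trainer.state.best_model_checkpoint  # e.g. "./output/checkpoint-500"
if best_ckpt is not None:
    model = PeftModel.from_pretrained(model.get_base_model(), best_ckpt)
```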
### Expected behavior
It should not raise the following error at the end of training:
```
RuntimeError: Error(s) in loading state_dict for PeftModelForSequenceClassification:
Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight", "base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.0.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.0.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.0.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.0.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.0.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.0.self_attn.dense.base_layer.weight", "base_model.model.model.layers.0.self_attn.dense.base_layer.bias", "base_model.model.model.layers.0.mlp.fc1.base_layer.weight", "base_model.model.model.layers.0.mlp.fc1.base_layer.bias", "base_model.model.model.layers.0.mlp.fc2.base_layer.weight", "base_model.model.model.layers.0.mlp.fc2.base_layer.bias", "base_model.model.model.layers.0.input_layernorm.weight", "base_model.model.model.layers.0.input_layernorm.bias", "base_model.model.model.layers.1.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.1.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.1.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.1.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.1.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.1.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.1.self_attn.dense.base_layer.weight", "base_model.model.model.layers.1.self_attn.dense.base_layer.bias", "base_model.model.model.layers.1.mlp.fc1.base_layer.weight", "base_model.model.model.layers.1.mlp.fc1.base_layer.bias", "base_model.model.model.layers.1.mlp.fc2.base_layer.weight", "base_model.model.model.layers.1.mlp.fc2.base_layer.bias", "base_model.model.model.layers.1.input_layernorm.weight", "base_model.model.model.layers.1.input_layernorm.bias", "base_model.model.model.layers.2.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.2.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.2.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.2.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.2.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.2.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.2.self_attn.dense.base_layer.weight", "base_model.model.model.layers.2.self_attn.dense.base_layer.bias", "base_model.model.model.layers.2.mlp.fc1.base_layer.weight", "base_model.model.model.layers.2.mlp.fc1.base_layer.bias", "base_model.model.model.layers.2.mlp.fc2.base_layer.weight", "base_model.model.model.layers.2.mlp.fc2.base_layer.bias", "base_model.model.model.layers.2.input_layernorm.weight", "base_model.model.model.layers.2.input_layernorm.bias", "base_model.model.model.layers.3.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.3.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.3.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.3.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.3.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.3.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.3.self_attn.dense.base_layer.weight", "base_model.model.model.layers.3.self_attn.dense.base_layer.bias", "base_model.model.model.layers.3.mlp.fc1.base_layer.weight", "base_model.model.model.layers.3.mlp.fc1.base_layer.bias", "base_model.model.model.layers.3.mlp.fc2.base_layer.weight", 
"base_model.model.model.layers.3.mlp.fc2.base_layer.bias", "base_model.model.model.layers.3.input_layernorm.weight", "base_model.model.model.layers.3.input_layernorm.bias", "base_model.model.model.layers.4.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.4.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.4.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.4.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.4.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.4.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.4.self_attn.dense.base_layer.weight", "base_model.model.model.layers.4.self_attn.dense.base_layer.bias", "base_model.model.model.layers.4.mlp.fc1.base_layer.weight", "base_model.model.model.layers.4.mlp.fc1.base_layer.bias", "base_model.model.model.layers.4.mlp.fc2.base_layer.weight", "base_model.model.model.layers.4.mlp.fc2.base_layer.bias", "base_model.model.model.layers.4.input_layernorm.weight", "base_model.model.model.layers.4.input_layernorm.bias", "base_model.model.model.layers.5.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.5.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.5.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.5.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.5.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.5.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.5.self_attn.dense.base_layer.weight", "base_model.model.model.layers.5.self_attn.dense.base_layer.bias", "base_model.model.model.layers.5.mlp.fc1.base_layer.weight", "base_model.model.model.layers.5.mlp.fc1.base_layer.bias", "base_model.model.model.layers.5.mlp.fc2.base_layer.weight", "base_model.model.model.layers.5.mlp.fc2.base_layer.bias", "base_model.model.model.layers.5.input_layernorm.weight", "base_model.model.model.layers.5.input_layernorm.bias", "base_model.model.model.layers.6.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.6.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.6.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.6.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.6.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.6.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.6.self_attn.dense.base_layer.weight", "base_model.model.model.layers.6.self_attn.dense.base_layer.bias", "base_model.model.model.layers.6.mlp.fc1.base_layer.weight", "base_model.model.model.layers.6.mlp.fc1.base_layer.bias", "base_model.model.model.layers.6.mlp.fc2.base_layer.weight", "base_model.model.model.layers.6.mlp.fc2.base_layer.bias", "base_model.model.model.layers.6.input_layernorm.weight", "base_model.model.model.layers.6.input_layernorm.bias", "base_model.model.model.layers.7.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.7.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.7.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.7.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.7.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.7.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.7.self_attn.dense.base_layer.weight", "base_model.model.model.layers.7.self_attn.dense.base_layer.bias", "base_model.model.model.layers.7.mlp.fc1.base_layer.weight", "base_model.model.model.layers.7.mlp.fc1.base_layer.bias", 
"base_model.model.model.layers.7.mlp.fc2.base_layer.weight", "base_model.model.model.layers.7.mlp.fc2.base_layer.bias", "base_model.model.model.layers.7.input_layernorm.weight", "base_model.model.model.layers.7.input_layernorm.bias", "base_model.model.model.layers.8.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.8.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.8.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.8.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.8.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.8.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.8.self_attn.dense.base_layer.weight", "base_model.model.model.layers.8.self_attn.dense.base_layer.bias", "base_model.model.model.layers.8.mlp.fc1.base_layer.weight", "base_model.model.model.layers.8.mlp.fc1.base_layer.bias", "base_model.model.model.layers.8.mlp.fc2.base_layer.weight", "base_model.model.model.layers.8.mlp.fc2.base_layer.bias", "base_model.model.model.layers.8.input_layernorm.weight", "base_model.model.model.layers.8.input_layernorm.bias", "base_model.model.model.layers.9.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.9.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.9.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.9.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.9.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.9.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.9.self_attn.dense.base_layer.weight", "base_model.model.model.layers.9.self_attn.dense.base_layer.bias", "base_model.model.model.layers.9.mlp.fc1.base_layer.weight", "base_model.model.model.layers.9.mlp.fc1.base_layer.bias", "base_model.model.model.layers.9.mlp.fc2.base_layer.weight", "base_model.model.model.layers.9.mlp.fc2.base_layer.bias", "base_model.model.model.layers.9.input_layernorm.weight", "base_model.model.model.layers.9.input_layernorm.bias", "base_model.model.model.layers.10.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.10.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.10.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.10.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.10.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.10.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.10.self_attn.dense.base_layer.weight", "base_model.model.model.layers.10.self_attn.dense.base_layer.bias", "base_model.model.model.layers.10.mlp.fc1.base_layer.weight", "base_model.model.model.layers.10.mlp.fc1.base_layer.bias", "base_model.model.model.layers.10.mlp.fc2.base_layer.weight", "base_model.model.model.layers.10.mlp.fc2.base_layer.bias", "base_model.model.model.layers.10.input_layernorm.weight", "base_model.model.model.layers.10.input_layernorm.bias", "base_model.model.model.layers.11.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.11.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.11.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.11.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.11.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.11.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.11.self_attn.dense.base_layer.weight", "base_model.model.model.layers.11.self_attn.dense.base_layer.bias", 
"base_model.model.model.layers.11.mlp.fc1.base_layer.weight", "base_model.model.model.layers.11.mlp.fc1.base_layer.bias", "base_model.model.model.layers.11.mlp.fc2.base_layer.weight", "base_model.model.model.layers.11.mlp.fc2.base_layer.bias", "base_model.model.model.layers.11.input_layernorm.weight", "base_model.model.model.layers.11.input_layernorm.bias", "base_model.model.model.layers.12.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.12.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.12.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.12.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.12.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.12.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.12.self_attn.dense.base_layer.weight", "base_model.model.model.layers.12.self_attn.dense.base_layer.bias", "base_model.model.model.layers.12.mlp.fc1.base_layer.weight", "base_model.model.model.layers.12.mlp.fc1.base_layer.bias", "base_model.model.model.layers.12.mlp.fc2.base_layer.weight", "base_model.model.model.layers.12.mlp.fc2.base_layer.bias", "base_model.model.model.layers.12.input_layernorm.weight", "base_model.model.model.layers.12.input_layernorm.bias", "base_model.model.model.layers.13.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.13.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.13.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.13.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.13.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.13.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.13.self_attn.dense.base_layer.weight", "base_model.model.model.layers.13.self_attn.dense.base_layer.bias", "base_model.model.model.layers.13.mlp.fc1.base_layer.weight", "base_model.model.model.layers.13.mlp.fc1.base_layer.bias", "base_model.model.model.layers.13.mlp.fc2.base_layer.weight", "base_model.model.model.layers.13.mlp.fc2.base_layer.bias", "base_model.model.model.layers.13.input_layernorm.weight", "base_model.model.model.layers.13.input_layernorm.bias", "base_model.model.model.layers.14.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.14.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.14.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.14.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.14.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.14.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.14.self_attn.dense.base_layer.weight", "base_model.model.model.layers.14.self_attn.dense.base_layer.bias", "base_model.model.model.layers.14.mlp.fc1.base_layer.weight", "base_model.model.model.layers.14.mlp.fc1.base_layer.bias", "base_model.model.model.layers.14.mlp.fc2.base_layer.weight", "base_model.model.model.layers.14.mlp.fc2.base_layer.bias", "base_model.model.model.layers.14.input_layernorm.weight", "base_model.model.model.layers.14.input_layernorm.bias", "base_model.model.model.layers.15.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.15.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.15.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.15.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.15.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.15.self_attn.v_proj.base_layer.bias", 
"base_model.model.model.layers.15.self_attn.dense.base_layer.weight", "base_model.model.model.layers.15.self_attn.dense.base_layer.bias", "base_model.model.model.layers.15.mlp.fc1.base_layer.weight", "base_model.model.model.layers.15.mlp.fc1.base_layer.bias", "base_model.model.model.layers.15.mlp.fc2.base_layer.weight", "base_model.model.model.layers.15.mlp.fc2.base_layer.bias", "base_model.model.model.layers.15.input_layernorm.weight", "base_model.model.model.layers.15.input_layernorm.bias", "base_model.model.model.layers.16.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.16.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.16.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.16.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.16.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.16.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.16.self_attn.dense.base_layer.weight", "base_model.model.model.layers.16.self_attn.dense.base_layer.bias", "base_model.model.model.layers.16.mlp.fc1.base_layer.weight", "base_model.model.model.layers.16.mlp.fc1.base_layer.bias", "base_model.model.model.layers.16.mlp.fc2.base_layer.weight", "base_model.model.model.layers.16.mlp.fc2.base_layer.bias", "base_model.model.model.layers.16.input_layernorm.weight", "base_model.model.model.layers.16.input_layernorm.bias", "base_model.model.model.layers.17.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.17.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.17.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.17.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.17.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.17.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.17.self_attn.dense.base_layer.weight", "base_model.model.model.layers.17.self_attn.dense.base_layer.bias", "base_model.model.model.layers.17.mlp.fc1.base_layer.weight", "base_model.model.model.layers.17.mlp.fc1.base_layer.bias", "base_model.model.model.layers.17.mlp.fc2.base_layer.weight", "base_model.model.model.layers.17.mlp.fc2.base_layer.bias", "base_model.model.model.layers.17.input_layernorm.weight", "base_model.model.model.layers.17.input_layernorm.bias", "base_model.model.model.layers.18.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.18.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.18.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.18.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.18.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.18.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.18.self_attn.dense.base_layer.weight", "base_model.model.model.layers.18.self_attn.dense.base_layer.bias", "base_model.model.model.layers.18.mlp.fc1.base_layer.weight", "base_model.model.model.layers.18.mlp.fc1.base_layer.bias", "base_model.model.model.layers.18.mlp.fc2.base_layer.weight", "base_model.model.model.layers.18.mlp.fc2.base_layer.bias", "base_model.model.model.layers.18.input_layernorm.weight", "base_model.model.model.layers.18.input_layernorm.bias", "base_model.model.model.layers.19.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.19.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.19.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.19.self_attn.k_proj.base_layer.bias", 
"base_model.model.model.layers.19.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.19.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.19.self_attn.dense.base_layer.weight", "base_model.model.model.layers.19.self_attn.dense.base_layer.bias", "base_model.model.model.layers.19.mlp.fc1.base_layer.weight", "base_model.model.model.layers.19.mlp.fc1.base_layer.bias", "base_model.model.model.layers.19.mlp.fc2.base_layer.weight", "base_model.model.model.layers.19.mlp.fc2.base_layer.bias", "base_model.model.model.layers.19.input_layernorm.weight", "base_model.model.model.layers.19.input_layernorm.bias", "base_model.model.model.layers.20.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.20.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.20.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.20.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.20.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.20.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.20.self_attn.dense.base_layer.weight", "base_model.model.model.layers.20.self_attn.dense.base_layer.bias", "base_model.model.model.layers.20.mlp.fc1.base_layer.weight", "base_model.model.model.layers.20.mlp.fc1.base_layer.bias", "base_model.model.model.layers.20.mlp.fc2.base_layer.weight", "base_model.model.model.layers.20.mlp.fc2.base_layer.bias", "base_model.model.model.layers.20.input_layernorm.weight", "base_model.model.model.layers.20.input_layernorm.bias", "base_model.model.model.layers.21.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.21.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.21.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.21.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.21.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.21.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.21.self_attn.dense.base_layer.weight", "base_model.model.model.layers.21.self_attn.dense.base_layer.bias", "base_model.model.model.layers.21.mlp.fc1.base_layer.weight", "base_model.model.model.layers.21.mlp.fc1.base_layer.bias", "base_model.model.model.layers.21.mlp.fc2.base_layer.weight", "base_model.model.model.layers.21.mlp.fc2.base_layer.bias", "base_model.model.model.layers.21.input_layernorm.weight", "base_model.model.model.layers.21.input_layernorm.bias", "base_model.model.model.layers.22.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.22.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.22.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.22.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.22.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.22.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.22.self_attn.dense.base_layer.weight", "base_model.model.model.layers.22.self_attn.dense.base_layer.bias", "base_model.model.model.layers.22.mlp.fc1.base_layer.weight", "base_model.model.model.layers.22.mlp.fc1.base_layer.bias", "base_model.model.model.layers.22.mlp.fc2.base_layer.weight", "base_model.model.model.layers.22.mlp.fc2.base_layer.bias", "base_model.model.model.layers.22.input_layernorm.weight", "base_model.model.model.layers.22.input_layernorm.bias", "base_model.model.model.layers.23.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.23.self_attn.q_proj.base_layer.bias", 
"base_model.model.model.layers.23.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.23.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.23.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.23.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.23.self_attn.dense.base_layer.weight", "base_model.model.model.layers.23.self_attn.dense.base_layer.bias", "base_model.model.model.layers.23.mlp.fc1.base_layer.weight", "base_model.model.model.layers.23.mlp.fc1.base_layer.bias", "base_model.model.model.layers.23.mlp.fc2.base_layer.weight", "base_model.model.model.layers.23.mlp.fc2.base_layer.bias", "base_model.model.model.layers.23.input_layernorm.weight", "base_model.model.model.layers.23.input_layernorm.bias", "base_model.model.model.layers.24.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.24.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.24.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.24.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.24.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.24.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.24.self_attn.dense.base_layer.weight", "base_model.model.model.layers.24.self_attn.dense.base_layer.bias", "base_model.model.model.layers.24.mlp.fc1.base_layer.weight", "base_model.model.model.layers.24.mlp.fc1.base_layer.bias", "base_model.model.model.layers.24.mlp.fc2.base_layer.weight", "base_model.model.model.layers.24.mlp.fc2.base_layer.bias", "base_model.model.model.layers.24.input_layernorm.weight", "base_model.model.model.layers.24.input_layernorm.bias", "base_model.model.model.layers.25.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.25.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.25.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.25.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.25.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.25.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.25.self_attn.dense.base_layer.weight", "base_model.model.model.layers.25.self_attn.dense.base_layer.bias", "base_model.model.model.layers.25.mlp.fc1.base_layer.weight", "base_model.model.model.layers.25.mlp.fc1.base_layer.bias", "base_model.model.model.layers.25.mlp.fc2.base_layer.weight", "base_model.model.model.layers.25.mlp.fc2.base_layer.bias", "base_model.model.model.layers.25.input_layernorm.weight", "base_model.model.model.layers.25.input_layernorm.bias", "base_model.model.model.layers.26.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.26.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.26.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.26.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.26.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.26.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.26.self_attn.dense.base_layer.weight", "base_model.model.model.layers.26.self_attn.dense.base_layer.bias", "base_model.model.model.layers.26.mlp.fc1.base_layer.weight", "base_model.model.model.layers.26.mlp.fc1.base_layer.bias", "base_model.model.model.layers.26.mlp.fc2.base_layer.weight", "base_model.model.model.layers.26.mlp.fc2.base_layer.bias", "base_model.model.model.layers.26.input_layernorm.weight", "base_model.model.model.layers.26.input_layernorm.bias", 
"base_model.model.model.layers.27.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.27.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.27.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.27.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.27.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.27.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.27.self_attn.dense.base_layer.weight", "base_model.model.model.layers.27.self_attn.dense.base_layer.bias", "base_model.model.model.layers.27.mlp.fc1.base_layer.weight", "base_model.model.model.layers.27.mlp.fc1.base_layer.bias", "base_model.model.model.layers.27.mlp.fc2.base_layer.weight", "base_model.model.model.layers.27.mlp.fc2.base_layer.bias", "base_model.model.model.layers.27.input_layernorm.weight", "base_model.model.model.layers.27.input_layernorm.bias", "base_model.model.model.layers.28.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.28.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.28.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.28.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.28.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.28.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.28.self_attn.dense.base_layer.weight", "base_model.model.model.layers.28.self_attn.dense.base_layer.bias", "base_model.model.model.layers.28.mlp.fc1.base_layer.weight", "base_model.model.model.layers.28.mlp.fc1.base_layer.bias", "base_model.model.model.layers.28.mlp.fc2.base_layer.weight", "base_model.model.model.layers.28.mlp.fc2.base_layer.bias", "base_model.model.model.layers.28.input_layernorm.weight", "base_model.model.model.layers.28.input_layernorm.bias", "base_model.model.model.layers.29.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.29.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.29.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.29.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.29.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.29.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.29.self_attn.dense.base_layer.weight", "base_model.model.model.layers.29.self_attn.dense.base_layer.bias", "base_model.model.model.layers.29.mlp.fc1.base_layer.weight", "base_model.model.model.layers.29.mlp.fc1.base_layer.bias", "base_model.model.model.layers.29.mlp.fc2.base_layer.weight", "base_model.model.model.layers.29.mlp.fc2.base_layer.bias", "base_model.model.model.layers.29.input_layernorm.weight", "base_model.model.model.layers.29.input_layernorm.bias", "base_model.model.model.layers.30.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.30.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.30.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.30.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.30.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.30.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.30.self_attn.dense.base_layer.weight", "base_model.model.model.layers.30.self_attn.dense.base_layer.bias", "base_model.model.model.layers.30.mlp.fc1.base_layer.weight", "base_model.model.model.layers.30.mlp.fc1.base_layer.bias", "base_model.model.model.layers.30.mlp.fc2.base_layer.weight", 
"base_model.model.model.layers.30.mlp.fc2.base_layer.bias", "base_model.model.model.layers.30.input_layernorm.weight", "base_model.model.model.layers.30.input_layernorm.bias", "base_model.model.model.layers.31.self_attn.q_proj.base_layer.weight", "base_model.model.model.layers.31.self_attn.q_proj.base_layer.bias", "base_model.model.model.layers.31.self_attn.k_proj.base_layer.weight", "base_model.model.model.layers.31.self_attn.k_proj.base_layer.bias", "base_model.model.model.layers.31.self_attn.v_proj.base_layer.weight", "base_model.model.model.layers.31.self_attn.v_proj.base_layer.bias", "base_model.model.model.layers.31.self_attn.dense.base_layer.weight", "base_model.model.model.layers.31.self_attn.dense.base_layer.bias", "base_model.model.model.layers.31.mlp.fc1.base_layer.weight", "base_model.model.model.layers.31.mlp.fc1.base_layer.bias", "base_model.model.model.layers.31.mlp.fc2.base_layer.weight", "base_model.model.model.layers.31.mlp.fc2.base_layer.bias", "base_model.model.model.layers.31.input_layernorm.weight", "base_model.model.model.layers.31.input_layernorm.bias", "base_model.model.model.final_layernorm.weight", "base_model.model.model.final_layernorm.bias", "base_model.model.score.original_module.weight"
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29078/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29077/comments | https://api.github.com/repos/huggingface/transformers/issues/29077/events | https://github.com/huggingface/transformers/pull/29077 | 2,139,899,107 | PR_kwDOCUB6oc5nKCnG | 29,077 | New model support RTDETR | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Looking good @SangbumChoi! Let us know when the PR is ready for review 🤗 "
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This is the new model for RTDETR that is complete version from https://github.com/huggingface/transformers/pull/27247
There are several TO DOs
- [X] reslove conflicts
- [ ] weight files for other 7 RTDETR
- [ ] Edit testing script
- [ ] (optional) enable training
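A tentative usage sketch once the port lands (the class, auto-mapping, and checkpoint names below are assumptions about the final API, not confirmed):
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "sbchoi/rtdetr_r50vd"  # hypothetical checkpoint id for the converted weights
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Standard detection post-processing, as in the other DETR-family models
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)
```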
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29077/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29077",
"html_url": "https://github.com/huggingface/transformers/pull/29077",
"diff_url": "https://github.com/huggingface/transformers/pull/29077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29077.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29076/comments | https://api.github.com/repos/huggingface/transformers/issues/29076/events | https://github.com/huggingface/transformers/issues/29076 | 2,139,855,433 | I_kwDOCUB6oc5_i5pJ | 29,076 | RingAttention Support | {
"login": "Hambaobao",
"id": 48345096,
"node_id": "MDQ6VXNlcjQ4MzQ1MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/48345096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hambaobao",
"html_url": "https://github.com/Hambaobao",
"followers_url": "https://api.github.com/users/Hambaobao/followers",
"following_url": "https://api.github.com/users/Hambaobao/following{/other_user}",
"gists_url": "https://api.github.com/users/Hambaobao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hambaobao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hambaobao/subscriptions",
"organizations_url": "https://api.github.com/users/Hambaobao/orgs",
"repos_url": "https://api.github.com/users/Hambaobao/repos",
"events_url": "https://api.github.com/users/Hambaobao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hambaobao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Hello,
I would like to inquire about the potential inclusion of [RingAttention](https://github.com/lhao499/ring-attention) in `Transformers`, which could enable training with longer sequences.
### Motivation
The incorporation of `RingAttention` would significantly enhance the capabilities for users engaged in advanced projects involving `LLMs` and `LMMs`. I believe this feature would make `Transformers` even more versatile and valuable for the research and development community.
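For reference, a very rough single-process sketch of the ring-attention idea (pure illustration; it ignores causal masking, scaling, and the online-softmax/communication overlap that make the real implementation memory-efficient):
```python
import torch

def ring_attention(q_blocks, k_blocks, v_blocks):
    """Each query block accumulates attention over key/value blocks as they
    'rotate' around the ring, so no device ever holds the full sequence."""
    n = len(q_blocks)
    outputs = []
    for i, q in enumerate(q_blocks):
        scores, values = [], []
        for step in range(n):
            j = (i + step) % n  # block arriving from the ring at this step
            scores.append(q @ k_blocks[j].T)
            values.append(v_blocks[j])
        attn = torch.softmax(torch.cat(scores, dim=-1), dim=-1)
        outputs.append(attn @ torch.cat(values, dim=0))
    return torch.cat(outputs, dim=0)
```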
### Your contribution
References:
- [RingAttention](https://github.com/lhao499/ring-attention)
- [LWM](https://github.com/LargeWorldModel/LWM) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29076/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29075/comments | https://api.github.com/repos/huggingface/transformers/issues/29075/events | https://github.com/huggingface/transformers/issues/29075 | 2,139,821,427 | I_kwDOCUB6oc5_ixVz | 29,075 | Inputs left-padded passed to Instruct-Mistral-7B, with FlashAttention-2, causes garbage outputs for the padded sequences | {
"login": "millicentli",
"id": 20379204,
"node_id": "MDQ6VXNlcjIwMzc5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/20379204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/millicentli",
"html_url": "https://github.com/millicentli",
"followers_url": "https://api.github.com/users/millicentli/followers",
"following_url": "https://api.github.com/users/millicentli/following{/other_user}",
"gists_url": "https://api.github.com/users/millicentli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/millicentli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/millicentli/subscriptions",
"organizations_url": "https://api.github.com/users/millicentli/orgs",
"repos_url": "https://api.github.com/users/millicentli/repos",
"events_url": "https://api.github.com/users/millicentli/events{/privacy}",
"received_events_url": "https://api.github.com/users/millicentli/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Update: so downgrading to `transformers`: 4.34.0 fixed this issue. See: https://discuss.huggingface.co/t/fine-tuned-mistral-7b-inference-issue-for-4k-context-length-token-with-transformer-4-35/65295\r\n\r\nThis is still a problem though with the `transformers` version noted though, so would like a fix if possible for the most recent one (so I'll keep this open).",
"Having a look right now, but the padding and the attention should not be manually changed, the tokenizer is supposed to take care of that ",
"Yes I know of course, this is just an example to replicate what's happening and to visualize the bug for you guys, but in my actual code, the tokenizer takes care of the padding and attention. (in my own code, my batch size is > 1, but this is one example that I've scoped down to showcase the issue. Generally, the trend is, in any batched input, the only sample that has coherent output is the one without padding)."
] | 1,708 | 1,708 | null | NONE | null | ### System Info
transformers version: 4.36.2
Pytorch version: 2.2.0
Platform: Rocky Linux release 8.8 (Green Obsidian), 4.18.0-477.27.1.el8_8.x86_64
Python version: Python 3.9.18
Accelerate version: 0.26.1
FlashAttention-2 version: 2.5.3
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Inference on Mistral-7B varies wildly between left-padded and unpadded inputs when FlashAttention-2 is enabled.
The FA-2 behavior also seems to depend on the complexity of the task -- in my case I'm doing multi-document summarization, and my example is a multi-document input. I didn't try too hard to find a simpler example because simple input text didn't seem to exhibit the same issue.
For the reproduction, I've also attached the text I use (the examples below read it in for debugging).
[text.txt](https://github.com/huggingface/transformers/files/14317626/text.txt)
Example (minimal reproduction):
With FlashAttention-2
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
model_name = "mistralai/Mistral-7B-Instruct-v0.1"
torch_dtype = torch.bfloat16
tokenizer_kwargs = {
"add_bos_token": False,
"add_eos_token": False,
"padding_side": "left"
}
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
config=config,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2",
device_map="balanced"
)
tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.sep_token = "[END DOCUMENT]"
f = open("text.txt", "r")
inputs = f.readlines()
inputs = tokenizer(inputs, return_tensors="pt")
# Zero out the attention mask at positions where input_ids == 2 (eos, used here as pad)
pad_indices = (inputs['input_ids'] == 2).nonzero()
inputs['attention_mask'][:, pad_indices] = 0
inputs = {k: inputs[k].cuda() for k in inputs}
outputs = model.generate(
**inputs,
num_beams=2,
no_repeat_ngram_size=3,
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id
)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:][0].reshape(1, -1)))
```
The output:
> ['The</s>']
Without FlashAttention-2
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
model_name = "mistralai/Mistral-7B-Instruct-v0.1"
torch_dtype = torch.bfloat16
tokenizer_kwargs = {
"add_bos_token": False,
"add_eos_token": False,
"padding_side": "left"
}
config = AutoConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
config=config,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
device_map="balanced"
)
tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.sep_token = "[END DOCUMENT]"
f = open("text.txt", "r")
inputs = f.readlines()
inputs = tokenizer(inputs, return_tensors="pt")
# Zero out the attention mask at positions where input_ids == 2 (eos, used here as pad)
pad_indices = (inputs['input_ids'] == 2).nonzero()
inputs['attention_mask'][:, pad_indices] = 0
inputs = {k: inputs[k].cuda() for k in inputs}
outputs = model.generate(
**inputs,
num_beams=2,
no_repeat_ngram_size=3,
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id
)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:][0].reshape(1, -1)))
```
The output:
> ['The above documents discuss various studies and research related to the effects of breast- feeding on the health and development of newborn babies. Some studies suggest that breast- fed babies have a lower risk of certain health problems, such as infections and allergies, while others find no significant differences between breast- and formula-fed babies.\n\nOne study found that exclusive breast -feeding for at least six months was associated with a reduced risk of SIDS (Sudden Infant Death Syndrome) in infants aged 6-12 months. Another study found no significant association between breast - feeding and SIDS risk in infancy, but did find that breast - fed babies had a lower incidence and severity of respitory infections in the early months of life compared to formula- fed infancy.\nIn addition, some studies have found that breast feeding may have a positive effect on the cognitive and emotional development of babies. For example, one study found a positive correlation between breast feeding duration and cognitive development in infancies.\nOverall, the evidence suggests that breastfeeds is beneficial for newborn health and well-being, but more research is needed to fully understand the effects and to identify the optimal duration and frequency of feeding for different babies']
I discovered this issue by removing the padding from the beginning of the sequence while debugging: once the padding is gone, the behavior with and without FA-2 matches. Other debugging attempts: I upgraded FA-2 to the latest version and torch to 2.2.0 (the PyTorch upgrade was prompted by https://github.com/pytorch/pytorch/issues/112577), but neither fixed the problem; upgrading transformers to 4.37.2 also did not help.
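A quick sketch of that check, reusing the objects from the scripts above (batch size 1 assumed):
```python
# Drop the left padding and the matching attention-mask zeros, then generate
# again -- with FA-2 this yields coherent output, matching the no-FA-2 run.
keep = inputs["attention_mask"][0].bool()
unpadded = {k: v[:, keep] for k, v in inputs.items()}
outputs = model.generate(
    **unpadded, num_beams=2, no_repeat_ngram_size=3,
    max_new_tokens=256, pad_token_id=tokenizer.pad_token_id,
)
```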
### Expected behavior
Inference with FA-2 should behave similarly to inference without FA-2, but it's wildly different. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29075/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29075/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29074/comments | https://api.github.com/repos/huggingface/transformers/issues/29074/events | https://github.com/huggingface/transformers/issues/29074 | 2,139,812,888 | I_kwDOCUB6oc5_ivQY | 29,074 | NotImplementedError | {
"login": "vincent507cpu",
"id": 29680509,
"node_id": "MDQ6VXNlcjI5NjgwNTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/29680509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vincent507cpu",
"html_url": "https://github.com/vincent507cpu",
"followers_url": "https://api.github.com/users/vincent507cpu/followers",
"following_url": "https://api.github.com/users/vincent507cpu/following{/other_user}",
"gists_url": "https://api.github.com/users/vincent507cpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vincent507cpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vincent507cpu/subscriptions",
"organizations_url": "https://api.github.com/users/vincent507cpu/orgs",
"repos_url": "https://api.github.com/users/vincent507cpu/repos",
"events_url": "https://api.github.com/users/vincent507cpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vincent507cpu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
I was trying to run **https://github.com/dvlab-research/LongLoRA/blob/main/fine-tune.py**, but first encountered `ValueError: Tokenizer class ChatGLMTokenizer does not exist or is not currently imported.` (solved by swapping the snippets at lines 135 and 141). Then I hit a new error (see below).
Could anyone help me? Thanks a lot!
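In case it helps others hitting the same trace: the remote ChatGLM modeling code apparently does not implement `set_input_embeddings`, which `resize_token_embeddings` needs. A heavily hedged monkey-patch sketch (the `embedding.word_embeddings` attribute path is a guess at the ChatGLM internals -- verify with `print(model)` first):
```python
# Assumed workaround, not verified against the ChatGLM source: give the base
# model a set_input_embeddings so resize_token_embeddings can complete.
def set_input_embeddings(self, value):
    self.embedding.word_embeddings = value  # guessed attribute path

type(model.transformer).set_input_embeddings = set_input_embeddings
model.resize_token_embeddings(len(tokenizer))
```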
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`python fine-tune.py --output_dir ./output_models/chatglm3-6b-32k`
### Expected behavior
```
/home/zwj/GitHub/LongLoRA-main/llama_attn_replace.py:464: UserWarning: Flash attention is only supported on A100 or H100 GPU during training due to head dim > 64 backward.ref: https://github.com/HazyResearch/flash-attention/issues/190#issuecomment-1523359593
warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████| 7/7 [00:02<00:00, 3.38it/s]
Traceback (most recent call last):
File "/home/zwj/GitHub/LongLoRA-main/fine-tune.py", line 217, in <module>
train()
File "/home/zwj/GitHub/LongLoRA-main/fine-tune.py", line 165, in train
smart_tokenizer_and_embedding_resize(
File "/home/zwj/GitHub/LongLoRA-main/fine-tune.py", line 81, in smart_tokenizer_and_embedding_resize
model.resize_token_embeddings(len(tokenizer))
File "/home/zwj/miniconda3/envs/longlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1811, in resize_token_embeddings
model_embeds = self._resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zwj/miniconda3/envs/longlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1832, in _resize_token_embeddings
self.set_input_embeddings(new_embeddings)
File "/home/zwj/miniconda3/envs/longlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1610, in set_input_embeddings
base_model.set_input_embeddings(value)
File "/home/zwj/miniconda3/envs/longlora/lib/python3.11/site-packages/transformers/modeling_utils.py", line 1612, in set_input_embeddings
raise NotImplementedError
NotImplementedError
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29073/comments | https://api.github.com/repos/huggingface/transformers/issues/29073/events | https://github.com/huggingface/transformers/pull/29073 | 2,139,700,801 | PR_kwDOCUB6oc5nJX8g | 29,073 | Bump cryptography from 42.0.0 to 42.0.2 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29073). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version\r\n",
"OK, I won't notify you about version 42.x.x again, unless you re-open this PR."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Bumps [cryptography](https://github.com/pyca/cryptography) from 42.0.0 to 42.0.2.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p>
<blockquote>
<p>42.0.2 - 2024-01-30</p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL 3.2.1.</li>
<li>Fixed an issue that prevented the use of Python buffer protocol objects in <code>sign</code> and <code>verify</code> methods on asymmetric keys.</li>
<li>Fixed an issue with incorrect keyword-argument naming in the <code>exchange</code> methods of <code>EllipticCurvePrivateKey</code>, <code>X25519PrivateKey</code>, <code>X448PrivateKey</code>, and <code>DHPrivateKey</code>.</li>
</ul>
<p>42.0.1 - 2024-01-24</p>
<ul>
<li>Fixed an issue with incorrect keyword-argument naming in <code>EllipticCurvePrivateKey.sign</code>.</li>
<li>Resolved a compatibility issue with loading certain RSA public keys in <code>load_pem_public_key</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pyca/cryptography/commit/2202123b50de1b8788f909a3e5afe350c56ad81e"><code>2202123</code></a> changelog and version bump 42.0.2 (<a href="https://redirect.github.com/pyca/cryptography/issues/10268">#10268</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/f7032bdd409838f67fc2b93343f897fb5f397d80"><code>f7032bd</code></a> bump openssl in CI (<a href="https://redirect.github.com/pyca/cryptography/issues/10298">#10298</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10299">#10299</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/002e886f16d8857151c09b11dc86b35f2ac9aec3"><code>002e886</code></a> Fixes <a href="https://redirect.github.com/pyca/cryptography/issues/10294">#10294</a> -- correct accidental change to exchange kwarg (<a href="https://redirect.github.com/pyca/cryptography/issues/10295">#10295</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10296">#10296</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/92fa9f2f606caea5d499c825e832be5bac6f0c23"><code>92fa9f2</code></a> support bytes-like consistently across our asym sign/verify APIs (<a href="https://redirect.github.com/pyca/cryptography/issues/10260">#10260</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/1">#1</a>...</li>
<li><a href="https://github.com/pyca/cryptography/commit/6478f7e28be54b51931277235de01b249ceabd96"><code>6478f7e</code></a> explicitly support bytes-like for signature/data in RSA sign/verify (<a href="https://redirect.github.com/pyca/cryptography/issues/10259">#10259</a>) ...</li>
<li><a href="https://github.com/pyca/cryptography/commit/4bb8596ae02d95bb054dbcf55e8771379dbe0c19"><code>4bb8596</code></a> fix the release script (<a href="https://redirect.github.com/pyca/cryptography/issues/10233">#10233</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10254">#10254</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/337437dc2e62772bde4ad5544f4b1db9ee7572d9"><code>337437d</code></a> 42.0.1 bump (<a href="https://redirect.github.com/pyca/cryptography/issues/10252">#10252</a>)</li>
<li><a href="https://github.com/pyca/cryptography/commit/56255de6b2d1a2d2e502b0275231ca81907f33f1"><code>56255de</code></a> allow SPKI RSA keys to be parsed even if they have an incorrect delimiter (<a href="https://redirect.github.com/pyca/cryptography/issues/1">#1</a>...</li>
<li><a href="https://github.com/pyca/cryptography/commit/12f038b38af76e36efe8cef09597010c97647e8f"><code>12f038b</code></a> fixes <a href="https://redirect.github.com/pyca/cryptography/issues/10237">#10237</a> -- correct EC sign parameter name (<a href="https://redirect.github.com/pyca/cryptography/issues/10239">#10239</a>) (<a href="https://redirect.github.com/pyca/cryptography/issues/10240">#10240</a>)</li>
<li>See full diff in <a href="https://github.com/pyca/cryptography/compare/42.0.0...42.0.2">compare view</a></li>
</ul>
</details>
<br />
<details>
<summary>Most Recent Ignore Conditions Applied to This Pull Request</summary>
| Dependency Name | Ignore Conditions |
| --- | --- |
| cryptography | [< 42, > 41.0.2] |
</details>
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=42.0.0&new-version=42.0.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29073",
"html_url": "https://github.com/huggingface/transformers/pull/29073",
"diff_url": "https://github.com/huggingface/transformers/pull/29073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29073.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29072/comments | https://api.github.com/repos/huggingface/transformers/issues/29072/events | https://github.com/huggingface/transformers/pull/29072 | 2,139,666,847 | PR_kwDOCUB6oc5nJQGo | 29,072 | Fix a typo in `examples/pytorch/text-classification/run_classification.py` | {
"login": "Ja1Zhou",
"id": 50169346,
"node_id": "MDQ6VXNlcjUwMTY5MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/50169346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ja1Zhou",
"html_url": "https://github.com/Ja1Zhou",
"followers_url": "https://api.github.com/users/Ja1Zhou/followers",
"following_url": "https://api.github.com/users/Ja1Zhou/following{/other_user}",
"gists_url": "https://api.github.com/users/Ja1Zhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ja1Zhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ja1Zhou/subscriptions",
"organizations_url": "https://api.github.com/users/Ja1Zhou/orgs",
"repos_url": "https://api.github.com/users/Ja1Zhou/repos",
"events_url": "https://api.github.com/users/Ja1Zhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ja1Zhou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29072). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Hi there, this commit fixes a tiny typo in the provided pytorch text classification pipeline. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29072/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29072",
"html_url": "https://github.com/huggingface/transformers/pull/29072",
"diff_url": "https://github.com/huggingface/transformers/pull/29072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29072.patch",
"merged_at": 1708347676000
} |
https://api.github.com/repos/huggingface/transformers/issues/29071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29071/comments | https://api.github.com/repos/huggingface/transformers/issues/29071/events | https://github.com/huggingface/transformers/pull/29071 | 2,139,637,479 | PR_kwDOCUB6oc5nJJty | 29,071 | Typo in `modeling_clip.ClipVisionTransformer` | {
"login": "AdityaKane2001",
"id": 64411306,
"node_id": "MDQ6VXNlcjY0NDExMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/64411306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaKane2001",
"html_url": "https://github.com/AdityaKane2001",
"followers_url": "https://api.github.com/users/AdityaKane2001/followers",
"following_url": "https://api.github.com/users/AdityaKane2001/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaKane2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaKane2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaKane2001/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaKane2001/orgs",
"repos_url": "https://api.github.com/users/AdityaKane2001/repos",
"events_url": "https://api.github.com/users/AdityaKane2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaKane2001/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@amyeroberts \r\n\r\nYes, I realized that when I tried to incorporate the change in my fork. Maybe the maintainers might have to do this, but what would be a solution in this case? Apart from the brute-force one, i.e. changing names in _all_ hosted clip weights.\r\n\r\n",
"Maybe overloading a try-catch mechanism somewhere such that loading the weights to the misspelled one also reflects in the correct one?",
"@AdityaKane2001 It's not something that we're going to try to fix. It's a bit annoying but effectively not consequential. The solution is to put up with the original spelling :) ",
"Gotcha. Closing the PR."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | Fixed a typo in `modeling_clip.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29071/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29071",
"html_url": "https://github.com/huggingface/transformers/pull/29071",
"diff_url": "https://github.com/huggingface/transformers/pull/29071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29071.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29070/comments | https://api.github.com/repos/huggingface/transformers/issues/29070/events | https://github.com/huggingface/transformers/pull/29070 | 2,139,577,470 | PR_kwDOCUB6oc5nI8e4 | 29,070 | Add support for fine-tuning CLIP-like models using contrastive-image-text example | {
"login": "tjs-intel",
"id": 74561858,
"node_id": "MDQ6VXNlcjc0NTYxODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/74561858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tjs-intel",
"html_url": "https://github.com/tjs-intel",
"followers_url": "https://api.github.com/users/tjs-intel/followers",
"following_url": "https://api.github.com/users/tjs-intel/following{/other_user}",
"gists_url": "https://api.github.com/users/tjs-intel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tjs-intel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tjs-intel/subscriptions",
"organizations_url": "https://api.github.com/users/tjs-intel/orgs",
"repos_url": "https://api.github.com/users/tjs-intel/repos",
"events_url": "https://api.github.com/users/tjs-intel/events{/privacy}",
"received_events_url": "https://api.github.com/users/tjs-intel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixing up this PR as per the contributor guidelines now",
"Happy to receive suggestions for any test candidates",
"This has been manually tested by replacing `openai/clip-vit-base-patch32` in the contrastive-image-text example with the following models:\r\n\r\n```\r\n\tOFA-Sys/chinese-clip-vit-base-patch16\r\n\tfacebook/metaclip-b32-400m\r\n\tgoogle/siglip-so400m-patch14-384\r\n\tlaion/CLIP-ViT-B-32-laion2B-s34B-b79K\r\n\tlaion/CLIP-ViT-H-14-laion2B-s32B-b79K\r\n\tlaion/CLIP-ViT-bigG-14-laion2B-39B-b160k\r\n\topenai/clip-vit-base-patch32\r\n\topenai/clip-vit-large-patch14\r\n\topenai/clip-vit-large-patch14-336\r\n\ttimm/ViT-SO400M-14-SigLIP-384\r\n```",
"Not sure what's going on here:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/84689/workflows/02d18e8c-af6e-465d-8625-fb3dc53bc03e/jobs/1095368/parallel-runs/0/steps/0-116\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/84689/workflows/02d18e8c-af6e-465d-8625-fb3dc53bc03e/jobs/1095369/parallel-runs/0/steps/0-115\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/84689/workflows/02d18e8c-af6e-465d-8625-fb3dc53bc03e/jobs/1095365/parallel-runs/0/steps/0-117",
"Hi @tjs-intel, thanks for adding this! For the failing tests, could you try rebasing onto main? There was some recent issues we had with compatible library versions which should now be resolved",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29070). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
The example [contrastive-image-text](https://github.com/huggingface/transformers/blob/f497f56/examples/pytorch/contrastive-image-text/README.md) works for fine-tuning models whose `model_type` is "clip", but for other models, such as "chinese_clip" and "siglip", the `VisionTextDualEncoderConfig` class is too CLIP-specific.
This PR adds support for fine-tuning [Chinese-CLIP](https://huggingface.co/docs/transformers/en/model_doc/chinese_clip) and [SigLIP](https://huggingface.co/docs/transformers/en/model_doc/siglip) vision models with the contrastive-image-text example.
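For illustration, a sketch of the pairing this enables, using one of the manually tested SigLIP checkpoints; the text encoder choice and output directory are assumptions for the example, not part of the PR:
```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# pair a SigLIP vision tower with a RoBERTa text encoder (illustrative choice)
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/siglip-so400m-patch14-384", "roberta-base"
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
image_processor = AutoImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# save the assembled dual encoder so the example script can fine-tune it
model.save_pretrained("siglip-roberta")
processor.save_pretrained("siglip-roberta")
```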
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @patil-suraj @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29070/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29070",
"html_url": "https://github.com/huggingface/transformers/pull/29070",
"diff_url": "https://github.com/huggingface/transformers/pull/29070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29070.patch",
"merged_at": 1708430912000
} |
https://api.github.com/repos/huggingface/transformers/issues/29069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29069/comments | https://api.github.com/repos/huggingface/transformers/issues/29069/events | https://github.com/huggingface/transformers/issues/29069 | 2,139,529,715 | I_kwDOCUB6oc5_hqHz | 29,069 | is_vision_available() is slow and called a lot | {
"login": "collosi",
"id": 138069,
"node_id": "MDQ6VXNlcjEzODA2OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/138069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/collosi",
"html_url": "https://github.com/collosi",
"followers_url": "https://api.github.com/users/collosi/followers",
"following_url": "https://api.github.com/users/collosi/following{/other_user}",
"gists_url": "https://api.github.com/users/collosi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/collosi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/collosi/subscriptions",
"organizations_url": "https://api.github.com/users/collosi/orgs",
"repos_url": "https://api.github.com/users/collosi/repos",
"events_url": "https://api.github.com/users/collosi/events{/privacy}",
"received_events_url": "https://api.github.com/users/collosi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"@collosi can you share a minimal code snippet to reproduce your issue?"
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Memoize the result of `is_vision_available()`, since it depends only on installed packages, which are unlikely to change while a process is running.
### Motivation
`is_vision_available()` in import_utils.py is slow, in that it results in many calls to `os.stat` and `os.listdir`, because it checks import metadata. Because it is used in places like `is_valid_image()`, it can be called A LOT when trying to bootstrap an image dataset. For example, in my use case of opening a PIL Image and immediately transforming it with a `transforms.Compose()`, the calls to stat and listdir account for more than half of the runtime.
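A minimal sketch of the proposed memoization, assuming the check ultimately reduces to probing for Pillow (a simplification of the real `import_utils` logic):
```python
from functools import lru_cache
import importlib.util


@lru_cache(maxsize=None)
def is_vision_available() -> bool:
    # installed packages effectively never change within a running process,
    # so the stat/listdir-heavy metadata probe only needs to run once
    return importlib.util.find_spec("PIL") is not None
```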
### Your contribution
Sorry, I'm not a great python developer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29069/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29068/comments | https://api.github.com/repos/huggingface/transformers/issues/29068/events | https://github.com/huggingface/transformers/issues/29068 | 2,139,498,780 | I_kwDOCUB6oc5_hikc | 29,068 | Add Support for Dataclasses to Trainer | {
"login": "ntenenz",
"id": 8411908,
"node_id": "MDQ6VXNlcjg0MTE5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8411908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntenenz",
"html_url": "https://github.com/ntenenz",
"followers_url": "https://api.github.com/users/ntenenz/followers",
"following_url": "https://api.github.com/users/ntenenz/following{/other_user}",
"gists_url": "https://api.github.com/users/ntenenz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ntenenz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntenenz/subscriptions",
"organizations_url": "https://api.github.com/users/ntenenz/orgs",
"repos_url": "https://api.github.com/users/ntenenz/repos",
"events_url": "https://api.github.com/users/ntenenz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ntenenz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @muellerzr "
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Update `Trainer._prepare_input` to natively support Python dataclasses, so that more structured objects are handled out of the box.
### Motivation
`_prepare_input` will seamlessly transfer the tensors contained in many datatypes (list, tuple, dict, etc.) to the appropriate device. However, it will not do so for dataclass objects.
Python dataclasses often provide better ergonomics than TypedDict, which is likely the closest supported alternative. Adding support appears to be a small change to the codebase with nice benefits to the user.
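A sketch of what the extension could look like, as a simplified stand-in for the real `Trainer._prepare_input` with a dataclass branch added:
```python
import dataclasses

import torch


def prepare_input(data, device):
    # simplified stand-in for Trainer._prepare_input, plus a dataclass branch
    if dataclasses.is_dataclass(data) and not isinstance(data, type):
        # rebuild the instance with every field moved to the target device
        # (fields declared with init=False would need extra handling)
        return dataclasses.replace(
            data,
            **{f.name: prepare_input(getattr(data, f.name), device)
               for f in dataclasses.fields(data)},
        )
    if isinstance(data, dict):
        return {k: prepare_input(v, device) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(prepare_input(v, device) for v in data)
    if isinstance(data, torch.Tensor):
        return data.to(device)
    return data
```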
### Your contribution
If the PR is as simple as it initially appears, I would be willing to submit a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29068/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29067/comments | https://api.github.com/repos/huggingface/transformers/issues/29067/events | https://github.com/huggingface/transformers/issues/29067 | 2,139,183,717 | I_kwDOCUB6oc5_gVpl | 29,067 | FalconAttention Doesn't use `alibi` when `alibi` is not None if `_use_sdpa==True` | {
"login": "SamanehSaadat",
"id": 1986164,
"node_id": "MDQ6VXNlcjE5ODYxNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1986164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamanehSaadat",
"html_url": "https://github.com/SamanehSaadat",
"followers_url": "https://api.github.com/users/SamanehSaadat/followers",
"following_url": "https://api.github.com/users/SamanehSaadat/following{/other_user}",
"gists_url": "https://api.github.com/users/SamanehSaadat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamanehSaadat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamanehSaadat/subscriptions",
"organizations_url": "https://api.github.com/users/SamanehSaadat/orgs",
"repos_url": "https://api.github.com/users/SamanehSaadat/repos",
"events_url": "https://api.github.com/users/SamanehSaadat/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamanehSaadat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"HEy! Pretty sure that is because it's not supported in `sdpa` nor is `flash_attention`. We should / could raise an error however. ",
"Hi @SamanehSaadat thank you for the notice, please refer to: https://github.com/huggingface/transformers/blob/2f1003be86f11c8d97d7c2e6a7739dbb6fa795f2/src/transformers/models/falcon/modeling_falcon.py#L1104-L1124. Alibi is merged into the attention bias. We do so at the top-level and not in every attention module.",
"Oh thanks @fxmarty for correcting me! "
] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-6.5.13-1rodete2-amd64-x86_64
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@fxmarty
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I found this issue while reading the Falcon code:
[This `else` branch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L477-L526) handles the case where `alibi is not None`:
* When `_use_sdpa==False`, `alibi` is added to `attention_scores` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L503).
* But when `_use_sdpa==True`, `alibi` is not used anywhere in [this part of the code](https://github.com/huggingface/transformers/blob/2f1003be86f11c8d97d7c2e6a7739dbb6fa795f2/src/transformers/models/falcon/modeling_falcon.py#L478-L490) (see the sketch after this list for the usual SDPA pattern).
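For reference, the usual way ALiBi is combined with SDPA is to fold it into the additive attention mask before the call, rather than adding it to the attention scores inside the module. A generic sketch, not Falcon's actual implementation:
```python
import torch.nn.functional as F


def sdpa_with_alibi(query, key, value, causal_mask, alibi):
    # generic pattern: merge the per-head ALiBi bias into the additive mask,
    # then let SDPA apply it (both broadcast to [batch, heads, q_len, kv_len])
    attn_bias = causal_mask + alibi
    return F.scaled_dot_product_attention(query, key, value, attn_mask=attn_bias)
```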
### Expected behavior
`alibi` should be used when `alibi` is not `None`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29066/comments | https://api.github.com/repos/huggingface/transformers/issues/29066/events | https://github.com/huggingface/transformers/pull/29066 | 2,139,025,735 | PR_kwDOCUB6oc5nHBPp | 29,066 | Bnb test fix for different hardwares | {
"login": "Titus-von-Koeller",
"id": 9048635,
"node_id": "MDQ6VXNlcjkwNDg2MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9048635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Titus-von-Koeller",
"html_url": "https://github.com/Titus-von-Koeller",
"followers_url": "https://api.github.com/users/Titus-von-Koeller/followers",
"following_url": "https://api.github.com/users/Titus-von-Koeller/following{/other_user}",
"gists_url": "https://api.github.com/users/Titus-von-Koeller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Titus-von-Koeller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Titus-von-Koeller/subscriptions",
"organizations_url": "https://api.github.com/users/Titus-von-Koeller/orgs",
"repos_url": "https://api.github.com/users/Titus-von-Koeller/repos",
"events_url": "https://api.github.com/users/Titus-von-Koeller/events{/privacy}",
"received_events_url": "https://api.github.com/users/Titus-von-Koeller/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29066). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Could one of you please merge? I don't have permission to do so."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | Just updating the acceptable generated text, as the output differs slightly across hardware from what I can tell. The first commit is based on what I observed on my dev VM with an A10G, and the second commit is based on what I saw [in the failing BNB integration pipeline](https://github.com/huggingface/peft/actions/runs/7925057159/job/21637597387#step:9:83).
In my eyes, both text strings seem similar enough. I already spoke about this with @younesbelkada regarding the first commit; the second seems to be a similar case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29066",
"html_url": "https://github.com/huggingface/transformers/pull/29066",
"diff_url": "https://github.com/huggingface/transformers/pull/29066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29066.patch",
"merged_at": 1708365885000
} |
https://api.github.com/repos/huggingface/transformers/issues/29065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29065/comments | https://api.github.com/repos/huggingface/transformers/issues/29065/events | https://github.com/huggingface/transformers/pull/29065 | 2,139,009,551 | PR_kwDOCUB6oc5nG9pQ | 29,065 | Fix WhisperNoSpeechDetection when input is full silence | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I want to add a test, but I realized most of the slow tests of Whisper were already failing, independently of this PR. \r\ncc @sanchit-gandhi ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29065). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
@cifkao found an edge case that occurs when the input to `Whisper.generate` is complete silence. This is a simple, tentative PR.
cc @sanchit-gandhi
Fixes #29036
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29065/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29065",
"html_url": "https://github.com/huggingface/transformers/pull/29065",
"diff_url": "https://github.com/huggingface/transformers/pull/29065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29065.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29064/comments | https://api.github.com/repos/huggingface/transformers/issues/29064/events | https://github.com/huggingface/transformers/issues/29064 | 2,138,935,677 | I_kwDOCUB6oc5_fZF9 | 29,064 | Swapping `tqdm` to `rich` | {
"login": "alexge233",
"id": 6159747,
"node_id": "MDQ6VXNlcjYxNTk3NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6159747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexge233",
"html_url": "https://github.com/alexge233",
"followers_url": "https://api.github.com/users/alexge233/followers",
"following_url": "https://api.github.com/users/alexge233/following{/other_user}",
"gists_url": "https://api.github.com/users/alexge233/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexge233/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexge233/subscriptions",
"organizations_url": "https://api.github.com/users/alexge233/orgs",
"repos_url": "https://api.github.com/users/alexge233/repos",
"events_url": "https://api.github.com/users/alexge233/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexge233/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"> I can see on github it's at `transformers/src/transformers/tokenization_utils_fast.py` and I can see in lines #790 and #791 that there's a further method `train_from_iterator` but at this point I can't find where the actual code is? Can anyone point me to the right direction?\r\n\r\n`train_from_iterator` is defined here: https://github.com/huggingface/transformers/blob/ee3af60be0d21044692211d97dfd858aa3e4b418/examples/flax/language-modeling/t5_tokenizer_model.py#L89",
"@geronimi73 This seems to imply it's for `t5_tokenizer_model` aka only for a T5? Isn't there a common base class?\r\n",
"yes, you're totally right, i'm sorry! can you show me entire code you're using?",
"Hi @alexge233, thanks for opening this feature request! \r\n\r\nThere's been previous discussions internally about adding rich, and the conclusion was to stick with tqdm for the moment as: \r\n* It has caused issues in the past with logs in accelerate. In particular with different running environments e.g. Jupyter notebooks c.f. these issues #26085 #1630\r\n* Issues with writing logs to file for large runs\r\n\r\nEnabling would mean adding in conditional logic, which would still fallback to `tqdm`. We want to keep our (optional) dependencies as small as possible, and adding `rich` would be going against that at the moment ",
"Hi,\r\n\r\nThanks for your reply and feedback. I’m a bit surprised because in my experience it’s fqdm that seems to have issues, especially in cloud environments where I see re-rendering of previous progress bars, incorrect width in the terminal, etc.\r\n\r\nWhich was my primary reason for wanting to implement this.\r\nI would have assumed rich was more mature than tqdm.\r\n\r\nFeel free to close this ticket then, I’ll have to deal with it internally. Could you please point me to where the tqdm implementation is done in the codebase? I’d really appreciate it.\r\n\r\nBest Regards",
"Hi @alexge233, \r\n\r\nOK, I'll close this issue. \r\n\r\n> Could you please point me to where the tqdm implementation is done in the codebase? I’d really appreciate it.\r\n\r\nThroughout most of transformers we just use either `from tqdm import tqdm` or `from tqdm.auto import tqdm`. There's a custom `tqdm` class defined [here in logging](https://github.com/huggingface/transformers/blob/fc37f38915372c15992b540dfcbbe00a916d4fc6/src/transformers/utils/logging.py#L359). "
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Hi, for `AutoTokenizer.train_new_from_iterator` there's a hardcoded `tqdm` progress bar that I want to swap to `rich`, and I'm happy to PR it back.
I can see on GitHub that it's at `transformers/src/transformers/tokenization_utils_fast.py`, and I can see in lines #790 and #791 that there's a further method `train_from_iterator`, but at this point I can't find where the actual code is. Can anyone point me in the right direction?
Also, is there any reason not to add `rich` as a dependency?
Where are the `tqdm`-specific bits of code, so I can go through them?
Thanks!
### Motivation
I'm not fond of `tqdm`; it seems to create issues when used on AWS, SageMaker, etc. Its progress bar spans a lot of the terminal and doesn't convey nearly as much information as `rich` can. I want to start by going over `AutoTokenizer` because it's where I first spotted it.
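For a sense of the swap, `rich.progress.track` is close to a drop-in for a plain `tqdm` loop; the training iterator below is a hypothetical placeholder:
```python
from rich.progress import track


def train_with_progress(batches):
    # rich's track() wraps an iterable much like tqdm(iterable) does
    for batch in track(batches, description="Training tokenizer"):
        ...  # hand the batch to the tokenizer trainer
```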
### Your contribution
Slowly work through bits of code which rely on `tqdm` and add the option to swap for `rich` instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29064/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29063/comments | https://api.github.com/repos/huggingface/transformers/issues/29063/events | https://github.com/huggingface/transformers/pull/29063 | 2,138,892,403 | PR_kwDOCUB6oc5nGj0m | 29,063 | Raise unused kwargs image processor | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Finishing up draft, removing this validation functionality from #28711 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29063). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Main concern I had was using `inspect`, so I moved it to the test suite instead of the actual validation function. That way we don't inspect at runtime, we just compare static lists. For new model additions, `self._valid_processor_keys` remain optional (if it's not there, nothing is run) but it'd be a recommendation.\r\nAs to what the validation raises, for now it's a `logger.info`, could be a warning instead but should not raise an exception as a good chunk of existing models /processors from the hub would break down"
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
This PR captures all kwargs passed to an `ImageProcessor` `preprocess` method and compares them to what's expected, raising an exception or logging an informative message when a difference is found.
This will:
1) Make `preprocess` methods more reliable
2) Inform users when a passed kwarg has no actual effect on the preprocessing (a rough sketch of the check follows this list)
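The helper below is illustrative, with `_valid_processor_keys` following the naming discussed in this PR:
```python
import logging

logger = logging.getLogger(__name__)


def validate_preprocess_kwargs(captured_kwargs, valid_processor_keys):
    # illustrative: flag anything the caller passed that preprocess()
    # does not actually consume
    unused = set(captured_kwargs) - set(valid_processor_keys)
    if unused:
        logger.info("Unused or unrecognized kwargs: %s.", ", ".join(sorted(unused)))
```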
- [x] Did you write any new necessary tests?
## Who can review?
- vision models: @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29063",
"html_url": "https://github.com/huggingface/transformers/pull/29063",
"diff_url": "https://github.com/huggingface/transformers/pull/29063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29063.patch",
"merged_at": 1708442420000
} |
https://api.github.com/repos/huggingface/transformers/issues/29062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29062/comments | https://api.github.com/repos/huggingface/transformers/issues/29062/events | https://github.com/huggingface/transformers/pull/29062 | 2,138,763,388 | PR_kwDOCUB6oc5nGHd6 | 29,062 | [WIP] Add FLMR model | {
"login": "LinWeizheDragon",
"id": 33350454,
"node_id": "MDQ6VXNlcjMzMzUwNDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/33350454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LinWeizheDragon",
"html_url": "https://github.com/LinWeizheDragon",
"followers_url": "https://api.github.com/users/LinWeizheDragon/followers",
"following_url": "https://api.github.com/users/LinWeizheDragon/following{/other_user}",
"gists_url": "https://api.github.com/users/LinWeizheDragon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LinWeizheDragon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinWeizheDragon/subscriptions",
"organizations_url": "https://api.github.com/users/LinWeizheDragon/orgs",
"repos_url": "https://api.github.com/users/LinWeizheDragon/repos",
"events_url": "https://api.github.com/users/LinWeizheDragon/events{/privacy}",
"received_events_url": "https://api.github.com/users/LinWeizheDragon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@ArthurZucker\r\n@younesbelkada\r\n@amyeroberts\r\nCan anyone help me to finish this PR or assign someone knowledgeable? The whole process is a little complicated and I have no idea what to do next. Thanks!",
"Hello @LinWeizheDragon, could you update the readme to have links to the pretrained checkpoints, the original codebase etc? \r\nI would recommend you to first start with a [code on the hub ](https://huggingface.co/docs/transformers/en/model_sharing)+ an issue on `transformers` to add support for this model. If it picks up the interest of the community 😉 \r\n\r\n",
"> Hello @LinWeizheDragon, could you update the readme to have links to the pretrained checkpoints, the original codebase etc? I would recommend you to first start with a [code on the hub ](https://huggingface.co/docs/transformers/en/model_sharing)+ an issue on `transformers` to add support for this model. If it picks up the interest of the community 😉\r\n\r\n@ArthurZucker Thanks for your reply. I have added the requested info to the description of this PR.\r\n We are the authors of this model. We plan to release the pre-trained models here directly to provide easy access to multi-modal late-interaction models. I followed the docs [here](https://huggingface.co/docs/transformers/add_new_model#how-to-add-a-model-to--transformers) to finish a runnable version. The checkpoints have been converted and uploaded to the hub. I think the remaining things are having knowledgeable people to review the changes and pass all the tests.\r\nCould you please let me know what to do next?\r\n",
"Hi @LinWeizheDragon, thanks for all the work opening this PR! \r\n\r\nAs Arthur mentioned, we suggest adding the model directly to the hub. This is the recommended way to add models and we try to have as much support as possible for enabling this. This way, the model will be available to use and findable on the hub immediately. Adding a model into the transformers repo introduces maintenance costs. As such, the bar for adding models is high and the review process can take a long time. ",
"> Hi @LinWeizheDragon, thanks for all the work opening this PR!\r\n> \r\n> As Arthur mentioned, we suggest adding the model directly to the hub. This is the recommended way to add models and we try to have as much support as possible for enabling this. This way, the model will be available to use and findable on the hub immediately. Adding a model into the transformers repo introduces maintenance costs. As such, the bar for adding models is high and the review process can take a long time.\r\n\r\n@amyeroberts Thanks for your reply. Given that the model weights are already on the Hub, what is the recommended way of sharing the model classes and definitions with users? Is there a way to allow users to do \"AutoModel.from_pretrained\" easily?",
"@LinWeizheDragon Yes! :D Here's a guide on custom models and making them available on the hub: https://huggingface.co/docs/transformers/custom_models\r\n\r\nIf you want to use `AutoModel.from_pretrained`, you'll need to make sure to [register your model](https://huggingface.co/docs/transformers/custom_models#registering-a-model-with-custom-code-to-the-auto-classes). Here's an example of modeling code on the hub: https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat/blob/main/modeling_baichuan.py"
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR aims to add a new model, Fine-grained Late-interaction Multi-modal Retriever (FLMR). The model was proposed in [here](https://openreview.net/forum?id=IWWWulAX7g) and [here](https://arxiv.org/abs/2402.08327). This PR contains the following content:
- Model class for both FLMR and PreFLMR.
- Example scripts to run indexing and inference with FLMR and PreFLMR.
The original code base of this work is at [here](https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering). The pre-trained checkpoints are [here](https://huggingface.co/models?search=PreFLMR).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@younesbelkada
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29062",
"html_url": "https://github.com/huggingface/transformers/pull/29062",
"diff_url": "https://github.com/huggingface/transformers/pull/29062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29062.patch",
"merged_at": null
} |