Datasets:

url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/29161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29161/comments | https://api.github.com/repos/huggingface/transformers/issues/29161/events | https://github.com/huggingface/transformers/issues/29161 | 2,145,902,969 | I_kwDOCUB6oc5_5-F5 | 29,161 | To enter token in jupyter notebook issue | {
"login": "arda1906",
"id": 157398066,
"node_id": "U_kgDOCWG0Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/157398066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arda1906",
"html_url": "https://github.com/arda1906",
"followers_url": "https://api.github.com/users/arda1906/followers",
"following_url": "https://api.github.com/users/arda1906/following{/other_user}",
"gists_url": "https://api.github.com/users/arda1906/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arda1906/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arda1906/subscriptions",
"organizations_url": "https://api.github.com/users/arda1906/orgs",
"repos_url": "https://api.github.com/users/arda1906/repos",
"events_url": "https://api.github.com/users/arda1906/events{/privacy}",
"received_events_url": "https://api.github.com/users/arda1906/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @arda1906, thanks for raising an issue!\r\n\r\nWithout more information about the error i.e. what does it mean to \"not work\" and what is the expected behaviour? we won't be able to help you. \r\n\r\nFrom the snippet, it's not entirely clear how the code is being run, but there are two separate commands which should be entered on separate lines or cells\r\n\r\n```py\r\nfrom huggingface_hub import notebook_login\r\n\r\nnotebook_login()\r\n```",
"hi,I am giving details\r\n> I am trying this code to train the model\r\n\r\n>```python\r\n>trainer = Trainer(model=model,args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=emotion_encoded[\"train\"],\r\n eval_dataset=emotion_encoded[\"validation\"],\r\n tokenizer=tokenizer)\r\ntrainer.train()\r\n\r\n>and I am facing this error:\r\n>LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.\r\n>I have thought to apply the my token in the jupyter notebook like this:\r\n>```\r\n> ```python\r\n> from huggingface_hub import notebook_login\r\n> \r\n> notebook_login()\r\n>\r\n> ```\r\n>help me please:(\r\n",
"Hi @arda1906, are you running the notebook login cells before calling Trainer? Are you passing in a token to the interactive text box that appears when running notebook_login? ",
"> Hi @arda1906, are you running the notebook login cells before calling Trainer? Are you passing in a token to the interactive text box that appears when running notebook_login?\r\n\r\n![20240221_210517](https://github.com/huggingface/transformers/assets/157398066/57ee7c71-1614-4c44-8dc2-144f47cafacd)\r\n"
] | 1,708 | 1,708 | null | NONE | null | I run this [from huggingface_hub import notebook_login
notebook_login() ] in a cell and enter my token, but it doesn't work :( | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29161/timeline | null | null | null |
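A minimal sketch of the login flow discussed in issue 29161 above, assuming a Jupyter notebook and a valid access token from https://huggingface.co/settings/tokens; the environment-variable name in the non-interactive variant is an assumption, not something from the issue.

```python
from huggingface_hub import notebook_login

# Run this in its own cell *before* constructing the Trainer, then paste the
# token into the widget that appears.
notebook_login()

# Non-interactive alternative (e.g. in scripts), assuming the token is stored
# in an environment variable named HF_TOKEN:
# import os
# from huggingface_hub import login
# login(token=os.environ["HF_TOKEN"])
```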
https://api.github.com/repos/huggingface/transformers/issues/29160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29160/comments | https://api.github.com/repos/huggingface/transformers/issues/29160/events | https://github.com/huggingface/transformers/pull/29160 | 2,145,779,053 | PR_kwDOCUB6oc5neHY8 | 29,160 | [WIP] add Fusion In Decoder model | {
"login": "oh-gnues-iohc",
"id": 79557937,
"node_id": "MDQ6VXNlcjc5NTU3OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oh-gnues-iohc",
"html_url": "https://github.com/oh-gnues-iohc",
"followers_url": "https://api.github.com/users/oh-gnues-iohc/followers",
"following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}",
"gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions",
"organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs",
"repos_url": "https://api.github.com/users/oh-gnues-iohc/repos",
"events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}",
"received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
add FiD(Fusion In Decoder) models
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29160/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29160",
"html_url": "https://github.com/huggingface/transformers/pull/29160",
"diff_url": "https://github.com/huggingface/transformers/pull/29160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29160.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29159/comments | https://api.github.com/repos/huggingface/transformers/issues/29159/events | https://github.com/huggingface/transformers/issues/29159 | 2,145,650,790 | I_kwDOCUB6oc5_5Ahm | 29,159 | [tokenizer] Inconsistent behavior in slow tokenizer and fast tokenizer | {
"login": "Ki-Seki",
"id": 60967965,
"node_id": "MDQ6VXNlcjYwOTY3OTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/60967965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ki-Seki",
"html_url": "https://github.com/Ki-Seki",
"followers_url": "https://api.github.com/users/Ki-Seki/followers",
"following_url": "https://api.github.com/users/Ki-Seki/following{/other_user}",
"gists_url": "https://api.github.com/users/Ki-Seki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ki-Seki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ki-Seki/subscriptions",
"organizations_url": "https://api.github.com/users/Ki-Seki/orgs",
"repos_url": "https://api.github.com/users/Ki-Seki/repos",
"events_url": "https://api.github.com/users/Ki-Seki/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ki-Seki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Hey! Thanks for opening an issue. \r\nFew things first. You are using a custom / local checkpoint with trust remote code. \r\n\r\nFast is not erroring out when you feed OOV, while slow is and it is indeed inconsistent. Would you like to open a PR for a fix? 🤗 ",
"Yes, I'll try that. Thank you for your reply!"
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.10
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no need
- Using distributed or parallel set-up in script?: no need
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
def answer_or_exception(tokenizer, id):
print(f'<<<<<<{tokenizer.__class__}>>>>>>')
try:
print(f'"{tokenizer.decode([id])}"')
except Exception as e:
print(e)
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/phi-2", trust_remote_code=True, use_fast=False)
# vocab size: 50294
answer_or_exception(tokenizer, 50294) # correct
answer_or_exception(tokenizer, 50295) # wrong
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/phi-2", trust_remote_code=True, use_fast=True)
# vocab size: 50294
answer_or_exception(tokenizer, 50294) # correct
answer_or_exception(tokenizer, 50295) # correct
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/Llama-2-7b-chat-hf", trust_remote_code=True, use_fast=False)
# vocab size: 31999
answer_or_exception(tokenizer, 31999) # correct
answer_or_exception(tokenizer, 32000) # wrong
tokenizer = AutoTokenizer.from_pretrained("/mnt/data01/shichao/models/Llama-2-7b-chat-hf", trust_remote_code=True, use_fast=True)
# vocab size: 31999
answer_or_exception(tokenizer, 31999) # correct
answer_or_exception(tokenizer, 32000) # correct
```
Output:
```text
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<<<<<<<class 'transformers.models.codegen.tokenization_codegen.CodeGenTokenizer'>>>>>>>
" "
<<<<<<<class 'transformers.models.codegen.tokenization_codegen.CodeGenTokenizer'>>>>>>>
sequence item 0: expected str instance, NoneType found
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<<<<<<<class 'transformers.models.codegen.tokenization_codegen_fast.CodeGenTokenizerFast'>>>>>>>
" "
<<<<<<<class 'transformers.models.codegen.tokenization_codegen_fast.CodeGenTokenizerFast'>>>>>>>
""
<<<<<<<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>>>>>>>
"给"
<<<<<<<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>>>>>>>
piece id is out of range.
<<<<<<<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>>>>>>>
"给"
<<<<<<<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>>>>>>>
""
```
### Expected behavior
Consistent `decode` behavior in slow tokenizer and fast tokenizer when id exceeds vocab size. For example, instead of raise exceptions, the slow tokenizer output empty strings like the fast tokenizer does. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29159/timeline | null | null | null |
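To make the behaviour requested in issue 29159 above concrete: a small, hypothetical helper (not a transformers API) that gives a slow tokenizer the fast tokenizer's semantics by dropping out-of-range ids instead of raising.

```python
# Hypothetical helper, not part of transformers: decode while skipping ids
# outside the vocabulary, which is effectively what the fast tokenizers do.
def safe_decode(tokenizer, token_ids):
    vocab_size = len(tokenizer)  # includes added tokens
    return tokenizer.decode([i for i in token_ids if 0 <= i < vocab_size])

# e.g. safe_decode(slow_tokenizer, [31999, 32000]) returns the text for 31999
# and silently drops the out-of-range 32000 instead of raising.
```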
https://api.github.com/repos/huggingface/transformers/issues/29158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29158/comments | https://api.github.com/repos/huggingface/transformers/issues/29158/events | https://github.com/huggingface/transformers/pull/29158 | 2,145,552,337 | PR_kwDOCUB6oc5ndVY6 | 29,158 | [PyTorch/XLA] Fix extra TPU compilations introduced by recent changes | {
"login": "alanwaketan",
"id": 8573935,
"node_id": "MDQ6VXNlcjg1NzM5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8573935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanwaketan",
"html_url": "https://github.com/alanwaketan",
"followers_url": "https://api.github.com/users/alanwaketan/followers",
"following_url": "https://api.github.com/users/alanwaketan/following{/other_user}",
"gists_url": "https://api.github.com/users/alanwaketan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanwaketan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanwaketan/subscriptions",
"organizations_url": "https://api.github.com/users/alanwaketan/orgs",
"repos_url": "https://api.github.com/users/alanwaketan/repos",
"events_url": "https://api.github.com/users/alanwaketan/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanwaketan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR tries to fix some extra TPU compilations caused by recent HF changes.
1. PyTorch/XLA doesn't support SDPA yet, so we need to set the default attention implementation to eager.
2. tensor.item() triggers a TPU graph synchronization, so we should avoid calling it in the training loop.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29158",
"html_url": "https://github.com/huggingface/transformers/pull/29158",
"diff_url": "https://github.com/huggingface/transformers/pull/29158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29158.patch",
"merged_at": null
} |
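A sketch of point 2 in the PR description above, with placeholder model/loader names: keep the running loss as a tensor inside the loop and call `.item()` only at logging boundaries, since `.item()` forces a graph synchronization on XLA.

```python
import torch

def train_loop(model, loader, optimizer, log_every=100):
    device = next(model.parameters()).device
    running_loss = torch.zeros((), device=device)
    for step, batch in enumerate(loader):
        optimizer.zero_grad()
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        running_loss += loss.detach()  # tensor accumulation: no sync on XLA
        if (step + 1) % log_every == 0:
            # .item() materializes the value (and syncs), so do it rarely
            print(f"step {step + 1}: mean loss {running_loss.item() / log_every:.4f}")
            running_loss.zero_()
```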
https://api.github.com/repos/huggingface/transformers/issues/29157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29157/comments | https://api.github.com/repos/huggingface/transformers/issues/29157/events | https://github.com/huggingface/transformers/issues/29157 | 2,145,549,903 | I_kwDOCUB6oc5_4n5P | 29,157 | Error while saving with EarlyStoppingCallback | {
"login": "dhruvmullick",
"id": 7004024,
"node_id": "MDQ6VXNlcjcwMDQwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7004024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvmullick",
"html_url": "https://github.com/dhruvmullick",
"followers_url": "https://api.github.com/users/dhruvmullick/followers",
"following_url": "https://api.github.com/users/dhruvmullick/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvmullick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvmullick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvmullick/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvmullick/orgs",
"repos_url": "https://api.github.com/users/dhruvmullick/repos",
"events_url": "https://api.github.com/users/dhruvmullick/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvmullick/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.28.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DeepSpeed
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
* SFTTrainer is used for training the model
* transformers.EarlyStoppingCallback is added to the trainer prior to .train()
This error has appeared in the last few days, likely due to some recent change.
The error is fixed either by rolling back to transformers version 4.37.2 or by removing the early stopping callback.
Here's the stack trace:
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 331, in train
> output = super().train(*args, **kwargs)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 1624, in train
> return inner_training_loop(
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2029, in _inner_training_loop
> self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2423, in _maybe_log_save_evaluate
> self._save_checkpoint(model, trial, metrics=metrics)
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer.py", line 2525, in _save_checkpoint
> self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))
> File "/workspace/envs/torch_env/lib/python3.10/site-packages/transformers/trainer_callback.py", line 113, in save_to_json
> json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n"
> File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps
> **kw).encode(obj)
> File "/usr/lib/python3.10/json/encoder.py", line 201, in encode
> chunks = list(chunks)
> File "/usr/lib/python3.10/json/encoder.py", line 431, in _iterencode
> yield from _iterencode_dict(o, _current_indent_level)
> File "/usr/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 325, in _iterencode_list
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
> yield from chunks
> File "/usr/lib/python3.10/json/encoder.py", line 438, in _iterencode
> o = _default(o)
> File "/usr/lib/python3.10/json/encoder.py", line 179, in default
> raise TypeError(f'Object of type {o.__class__.__name__} '
> TypeError: Object of type Tensor is not JSON serializable
>
### Expected behavior
No error with 4.38.0.dev0 transformers version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29157/timeline | null | null | null |
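The traceback in issue 29157 above boils down to a `torch.Tensor` (apparently the logged gradient norm, given the `_maybe_log_save_evaluate(tr_loss, grad_norm, ...)` frame) ending up in `TrainerState.log_history`, which `json.dumps` cannot serialize. The general fix pattern, assumed here since the actual patch is not shown, is to convert scalar tensors to Python floats before they are logged:

```python
import torch

def to_scalar(value):
    """Return a JSON-serializable number for logging, unwrapping 0-d tensors."""
    return value.item() if isinstance(value, torch.Tensor) else value

# e.g. when assembling a logs dict:
logs = {"loss": to_scalar(torch.tensor(0.5)), "grad_norm": to_scalar(torch.tensor(1.2))}
print(logs)  # plain floats now, safe for json.dumps
```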
https://api.github.com/repos/huggingface/transformers/issues/29156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29156/comments | https://api.github.com/repos/huggingface/transformers/issues/29156/events | https://github.com/huggingface/transformers/pull/29156 | 2,145,522,407 | PR_kwDOCUB6oc5ndO3J | 29,156 | Making extensible | {
"login": "ddevaul",
"id": 71190628,
"node_id": "MDQ6VXNlcjcxMTkwNjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/71190628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddevaul",
"html_url": "https://github.com/ddevaul",
"followers_url": "https://api.github.com/users/ddevaul/followers",
"following_url": "https://api.github.com/users/ddevaul/following{/other_user}",
"gists_url": "https://api.github.com/users/ddevaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddevaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddevaul/subscriptions",
"organizations_url": "https://api.github.com/users/ddevaul/orgs",
"repos_url": "https://api.github.com/users/ddevaul/repos",
"events_url": "https://api.github.com/users/ddevaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddevaul/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @ddevaul, what is the purpose of this PR? \r\n"
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29156/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29156",
"html_url": "https://github.com/huggingface/transformers/pull/29156",
"diff_url": "https://github.com/huggingface/transformers/pull/29156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29156.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29155/comments | https://api.github.com/repos/huggingface/transformers/issues/29155/events | https://github.com/huggingface/transformers/issues/29155 | 2,145,382,760 | I_kwDOCUB6oc5_3_Fo | 29,155 | PyTest import error | {
"login": "loadams",
"id": 114770087,
"node_id": "U_kgDOBtdApw",
"avatar_url": "https://avatars.githubusercontent.com/u/114770087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loadams",
"html_url": "https://github.com/loadams",
"followers_url": "https://api.github.com/users/loadams/followers",
"following_url": "https://api.github.com/users/loadams/following{/other_user}",
"gists_url": "https://api.github.com/users/loadams/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loadams/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loadams/subscriptions",
"organizations_url": "https://api.github.com/users/loadams/orgs",
"repos_url": "https://api.github.com/users/loadams/repos",
"events_url": "https://api.github.com/users/loadams/events{/privacy}",
"received_events_url": "https://api.github.com/users/loadams/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | ### System Info
The current head of transformers shows this issue: when importing functions from pytest, the `import_path` function is not found. Sample error from DeepSpeed's unit tests [here](https://github.com/microsoft/DeepSpeed/actions/runs/7977730884/job/21781270161?pr=5164#step:7:391).
```
______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________
ImportError while importing test module '/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/accelerate/tests/deepspeed/test_deepspeed.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../unit-test-venv/lib/python3.8/site-packages/_pytest/python.py:538: in importtestmodule
mod = import_path(path, mode=importmode, root=config.rootpath)
../unit-test-venv/lib/python3.8/site-packages/_pytest/pathlib.py:566: in import_path
importlib.import_module(module_name)
/opt/conda/envs/ptca/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
../unit-test-venv/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/deepspeed/test_deepspeed.py:26: in <module>
from transformers.testing_utils import mockenv_context
../unit-test-venv/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
from _pytest.doctest import (
E ImportError: cannot import name 'import_path' from '_pytest.doctest' (/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.8/site-packages/_pytest/doctest.py)
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 4.71s ===============================
```
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
With pytest 8.0.1:
1. `from _pytest.doctest import import_path`
2. Observe the error.
### Expected behavior
No errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29155/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29154/comments | https://api.github.com/repos/huggingface/transformers/issues/29154/events | https://github.com/huggingface/transformers/pull/29154 | 2,145,294,779 | PR_kwDOCUB6oc5nccpR | 29,154 | Update pytest `import_path` location | {
"login": "loadams",
"id": 114770087,
"node_id": "U_kgDOBtdApw",
"avatar_url": "https://avatars.githubusercontent.com/u/114770087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loadams",
"html_url": "https://github.com/loadams",
"followers_url": "https://api.github.com/users/loadams/followers",
"following_url": "https://api.github.com/users/loadams/following{/other_user}",
"gists_url": "https://api.github.com/users/loadams/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loadams/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loadams/subscriptions",
"organizations_url": "https://api.github.com/users/loadams/orgs",
"repos_url": "https://api.github.com/users/loadams/repos",
"events_url": "https://api.github.com/users/loadams/events{/privacy}",
"received_events_url": "https://api.github.com/users/loadams/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | NONE | null | # What does this PR do?
Updates the import location of `import_path` from `_pytest.doctest` to `_pytest.pathlib` for pytest 8.0.1+, since it has finally been removed from `_pytest.doctest`. It has been provided in `_pytest.pathlib` since at least 7.2.0, so we do not need to modify the supported pytest range in `setup.py`.
Tested [here in DeepSpeed](https://github.com/microsoft/DeepSpeed/pull/5164) and tests appear to be passing.
Fixes: #29155
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29154",
"html_url": "https://github.com/huggingface/transformers/pull/29154",
"diff_url": "https://github.com/huggingface/transformers/pull/29154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29154.patch",
"merged_at": null
} |
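The issue/PR pair above (29155/29154) implies a simple compatibility pattern for code that must span pytest versions; per the PR description, `import_path` has lived in `_pytest.pathlib` since at least pytest 7.2, and pytest 8.0.1 dropped it from `_pytest.doctest`:

```python
# Guarded-import sketch for code that must work across pytest versions.
try:
    from _pytest.doctest import import_path  # pytest < 8.0.1
except ImportError:
    from _pytest.pathlib import import_path  # pytest >= 7.2, including 8.x
```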
https://api.github.com/repos/huggingface/transformers/issues/29153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29153/comments | https://api.github.com/repos/huggingface/transformers/issues/29153/events | https://github.com/huggingface/transformers/issues/29153 | 2,145,101,851 | I_kwDOCUB6oc5_26gb | 29,153 | Plans to add DoRA? | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada @pacman100 ",
"Hi @RonanKMcGovern ! \r\nThanks for the feature request! There is already an ongoing work from @BenjaminBossan to add DoRA in PEFT: https://github.com/huggingface/peft/pull/1474",
"Closing as there is a PR underway.",
"OK thank you @RonanKMcGovern !"
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Improves on LoRA by allowing magnitude fine-tuning.
### Motivation
Improved perplexity.
### Your contribution
Sebastian Raschka has published demo code: https://github.com/rasbt/dora-from-scratch | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29153/timeline | null | null | null |
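For context on the request in issue 29153 above: DoRA decomposes a pretrained weight into a trainable magnitude and a LoRA-updated, column-normalized direction, roughly W' = m · (W + BA) / ||W + BA||_c. A minimal sketch of that idea, simplified from the linked demo-code repo — this is illustrative, not the PEFT implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Minimal DoRA-style layer: frozen base weight + low-rank update + magnitude."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.register_buffer("base_weight", base.weight.detach().clone())
        self.bias = base.bias
        out_f, in_f = base.weight.shape
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        # magnitude starts at the column norms of the pretrained weight
        self.m = nn.Parameter(base.weight.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        w = self.base_weight + self.lora_B @ self.lora_A     # (out_f, in_f)
        w = self.m * (w / w.norm(p=2, dim=0, keepdim=True))  # magnitude * direction
        return F.linear(x, w, self.bias)
```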
https://api.github.com/repos/huggingface/transformers/issues/29152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29152/comments | https://api.github.com/repos/huggingface/transformers/issues/29152/events | https://github.com/huggingface/transformers/pull/29152 | 2,145,071,699 | PR_kwDOCUB6oc5nbr5K | 29,152 | Alternative approach | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @Rocketknight1 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29152). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Alternative way to use stop words for generated sequences. Note - it doesn't
<details>
<summary>Script</summary>
```py
import time
import numpy as np
from transformers.generation.stopping_criteria import StopStringCriteria, StopStringCriteria2
from transformers import AutoTokenizer
model_id = "google-bert/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
stopping_criteria = StopStringCriteria(stop_strings=["giraffe", "polo"], tokenizer=tokenizer)
long_sentence = "This is a long sentence which should eventually stop because I have the word giraffe. This is a generated sentence"
input_ids = tokenizer(long_sentence, return_tensors="pt").input_ids
# Let's iterate over input_ids, increasing the length of the input sequence at each iteration and see when the
# criterion is met
print("Current implementation")
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria(input_ids=input_ids[:, :i], scores=None)
print(f"Current length: {i}, stops: {is_done}, input sequence: {tokenizer.batch_decode(input_seq)}")
N_RUNS = 100
times = []
for _ in range(N_RUNS):
start = time.time()
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria(input_ids=input_ids[:, :i], scores=None)
end = time.time()
times.append(end - start)
print(f"Average time taken - current: {np.mean(times)}, std: {np.std(times)}")
print("\nAlternative implementation")
stopping_criteria_2 = StopStringCriteria2(stop_strings=["giraffe", "polo"], tokenizer=tokenizer)
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria_2(input_ids=input_ids[:, :i], scores=None)
print(f"Current length: {i}, stops: {is_done}, input sequence: {tokenizer.batch_decode(input_seq)}")
times = []
for _ in range(N_RUNS):
start = time.time()
for i in range(1, len(input_ids[0]) + 1):
input_seq = input_ids[:, :i]
is_done = stopping_criteria_2(input_ids=input_ids[:, :i], scores=None)
end = time.time()
times.append(end - start)
print(f"Average time taken - new: {np.mean(times)}, std: {np.std(times)}")
```
</details>
I'm not sure if the testing assumption is correct, i.e. how the input ids are passed in. When testing both this alternative and the current implementation, the original `StopStringCriteria` does not stop when "giraffe" is in the sentence.
This alternative is also faster on this small test.
Note: the alternative will stop when any of the generated strings has a stop word (which AFAICT is the same for the current `StopStringCriteria` too)
<details>
<summary>Output</summary>
```
Current implementation
Current length: 1, stops: False, input sequence: ['[CLS]']
Current length: 2, stops: False, input sequence: ['[CLS] this']
Current length: 3, stops: False, input sequence: ['[CLS] this is']
Current length: 4, stops: False, input sequence: ['[CLS] this is a']
Current length: 5, stops: False, input sequence: ['[CLS] this is a long']
Current length: 6, stops: False, input sequence: ['[CLS] this is a long sentence']
Current length: 7, stops: False, input sequence: ['[CLS] this is a long sentence which']
Current length: 8, stops: False, input sequence: ['[CLS] this is a long sentence which should']
Current length: 9, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually']
Current length: 10, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop']
Current length: 11, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because']
Current length: 12, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i']
Current length: 13, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have']
Current length: 14, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the']
Current length: 15, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word']
Current length: 16, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word gi']
Current length: 17, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraf']
Current length: 18, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe']
Current length: 19, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe.']
Current length: 20, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this']
Current length: 21, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is']
Current length: 22, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a']
Current length: 23, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated']
Current length: 24, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence']
Current length: 25, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence [SEP]']
Average time taken - current: 0.007625923156738281, std: 0.00019846132464233505
Alternative implementation
Current length: 1, stops: False, input sequence: ['[CLS]']
Current length: 2, stops: False, input sequence: ['[CLS] this']
Current length: 3, stops: False, input sequence: ['[CLS] this is']
Current length: 4, stops: False, input sequence: ['[CLS] this is a']
Current length: 5, stops: False, input sequence: ['[CLS] this is a long']
Current length: 6, stops: False, input sequence: ['[CLS] this is a long sentence']
Current length: 7, stops: False, input sequence: ['[CLS] this is a long sentence which']
Current length: 8, stops: False, input sequence: ['[CLS] this is a long sentence which should']
Current length: 9, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually']
Current length: 10, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop']
Current length: 11, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because']
Current length: 12, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i']
Current length: 13, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have']
Current length: 14, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the']
Current length: 15, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word']
Current length: 16, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word gi']
Current length: 17, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraf']
Current length: 18, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe']
Current length: 19, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe.']
Current length: 20, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this']
Current length: 21, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is']
Current length: 22, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a']
Current length: 23, stops: True, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated']
Current length: 24, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence']
Current length: 25, stops: False, input sequence: ['[CLS] this is a long sentence which should eventually stop because i have the word giraffe. this is a generated sentence [SEP]']
Average time taken - new: 0.0011045789718627929, std: 2.974982062175288e-05
```
</details>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29152",
"html_url": "https://github.com/huggingface/transformers/pull/29152",
"diff_url": "https://github.com/huggingface/transformers/pull/29152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29152.patch",
"merged_at": null
} |
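For readers skimming PR 29152 above: the semantics being timed can be expressed with a naive decode-and-substring baseline. This is not the PR's code (the diff is not shown here), just a sketch of the behaviour — stopping when any sequence in the batch contains a stop string:

```python
import torch

def has_stop_string(tokenizer, input_ids: torch.Tensor, stop_strings) -> bool:
    # Naive baseline: decode each sequence fully and check for substrings.
    texts = tokenizer.batch_decode(input_ids, skip_special_tokens=True)
    return any(stop in text for text in texts for stop in stop_strings)

# e.g. has_stop_string(tokenizer, input_ids, ["giraffe", "polo"])
```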
https://api.github.com/repos/huggingface/transformers/issues/29151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29151/comments | https://api.github.com/repos/huggingface/transformers/issues/29151/events | https://github.com/huggingface/transformers/issues/29151 | 2,145,069,207 | I_kwDOCUB6oc5_2yiX | 29,151 | Static cache + torch.compile: support prefill static sequence length | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @gante ",
"@fxmarty this is the same problem as we have in TF and Flax. There, we nudged users to use the `pad_to_multiple_of` argument in the tokenizer, which I believe solves the problem 🤗 \r\n\r\nHow do you suggest us to let users know about this feature, other than docs?"
] | 1,708 | 1,708 | null | COLLABORATOR | null | ### Feature request
When using torch.compile, the prefill is recompiled for every new sequence length, which is slow. It would be nice to compile only for a fixed set of sequence lengths (`1, 2, 4, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc.`), padding inputs on the fly to the nearest compiled length.
### Motivation
torch.compile compilation is prohibitively slow even with https://github.com/huggingface/transformers/pull/29114
If people want to use transformers + static cache + torch.compile, it should be FAST to run `generate` on new sequence lengths.
### Your contribution
None for now | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29151/timeline | null | null | null |
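The `pad_to_multiple_of` suggestion from the discussion in issue 29151 above, sketched with a placeholder checkpoint: bucketing prompt lengths means `torch.compile` only ever sees a handful of prefill shapes.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# Pad every prompt up to a multiple of 128 so a compiled prefill graph is
# reused across prompts instead of recompiling for each exact length.
inputs = tokenizer(
    ["Hello there", "A much longer prompt that would otherwise get its own graph"],
    padding=True,
    pad_to_multiple_of=128,
    return_tensors="pt",
)
print(inputs.input_ids.shape)  # sequence dimension is a multiple of 128
```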
https://api.github.com/repos/huggingface/transformers/issues/29150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29150/comments | https://api.github.com/repos/huggingface/transformers/issues/29150/events | https://github.com/huggingface/transformers/issues/29150 | 2,144,941,834 | I_kwDOCUB6oc5_2TcK | 29,150 | Difficulty in adding custom model | {
"login": "El-chapo-007",
"id": 125077963,
"node_id": "U_kgDOB3SJyw",
"avatar_url": "https://avatars.githubusercontent.com/u/125077963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/El-chapo-007",
"html_url": "https://github.com/El-chapo-007",
"followers_url": "https://api.github.com/users/El-chapo-007/followers",
"following_url": "https://api.github.com/users/El-chapo-007/following{/other_user}",
"gists_url": "https://api.github.com/users/El-chapo-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/El-chapo-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/El-chapo-007/subscriptions",
"organizations_url": "https://api.github.com/users/El-chapo-007/orgs",
"repos_url": "https://api.github.com/users/El-chapo-007/repos",
"events_url": "https://api.github.com/users/El-chapo-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/El-chapo-007/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @El-chapo-007, thanks for opening this issue! \r\n\r\nGlad to hear that your journey has been mostly successful 🤗 \r\n\r\nHave you seen our documentation page about adding custom models? This should contain all the info and example code needed to get started: https://huggingface.co/docs/transformers/custom_models\r\n\r\nLet us know if anything does work! ",
"could you please add a Jupiter notebook template, as it would be alot more helpful as most of the other parts the hugging face team has put as a tutorial to better familirize with enviornment .....\r\nAnd also my custom architecture projects feed forward layer to the multiple of d_model,\r\nas other models project it back to same dimension it just up_project and i have tested several other models but this model yeilds order of magnitude more performance with respect to number of parameters,hence i can deploy it on edge devices ..\r\n\r\nBut the issue as far as i know i don't have to use lm head liner layer but hugging face library automatically put we call a certain class for example AutoModelForCausalLM....\r\n\r\nSince my model dimensions are same but in final liner layer how can I project it to multiple of d_model to final linear layer ...\r\n",
"also there is a bit confusion on https://huggingface.co/docs/transformers/custom_models \r\nvs https://huggingface.co/docs/transformers/add_new_model",
"Hi @El-chapo-007, \r\n\r\nWe can definitely think about adding a jupyter notebook. In the meantime, you should be able to run the code snippets in the documentation in cells in your own notebook.\r\n\r\nI'm not sure I understand your question about modifying the models. However, this is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nRegarding the documentation pages: \r\n* https://huggingface.co/docs/transformers/add_new_model outlines how to add a model into the transformers repo\r\n* https://huggingface.co/docs/transformers/custom_models outlines how to add a model on the hub\r\n",
"thanks alot"
] | 1,708 | 1,708 | null | NONE | null | ### Feature request
Hi
Hope all the team members of Hugging Face are well.
I am a student currently working on NLP projects. Most of my journey has been successful thanks to the well-documented information for starters, especially the example notebooks, but the part that is confusing and difficult is uploading and creating a custom model from scratch. I and several other users have had difficulty with it. Could you please make a notebook with step-by-step guidance, to help me and other researchers focus on their projects rather than on these lengthy procedures?
### Motivation
Difficulty in porting a custom model
### Your contribution
Student | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29150/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29149/comments | https://api.github.com/repos/huggingface/transformers/issues/29149/events | https://github.com/huggingface/transformers/issues/29149 | 2,144,914,235 | I_kwDOCUB6oc5_2Ms7 | 29,149 | Generate: support passing position_ids | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] | [
"@zucchini-nlp FYI. We shouldn't fix this now, as it requires significant manual labor to update all models. After the static cache sprint we should have a look at this :)"
] | 1,708 | 1,708 | null | MEMBER | null | Thank you @tengomucho, for uncovering this bug.
### The problem
In a nutshell, passing the correct `position_ids` to `generate` should yield exactly the same results as not passing them. In other words, the following test should pass on all models if added to `GenerationTesterMixin`. We can see that it is failing in general.
```py
def test_passing_position_ids(self):
# Check that passing position ids to generate yields the same results as not passing them, if the position ids
# are correctly built. If the test fails, it means one of two things:
# 1 - the manual position ids are not being piped correctly; OR
# 2 - the automated position ids are not being correctly built.
for model_class in self.all_generative_model_classes:
config, input_ids, attention_mask, _ = self._get_input_ids_and_config(batch_size=1)
if config.is_encoder_decoder:
self.skipTest("This model does not support position_ids")
# To truly test this property, let's create a batch where the second row corresponds to the test input with
# left padding of 1.
pad_token = torch.tensor([[config.pad_token_id or 0]], device=input_ids.device, dtype=input_ids.dtype)
input_ids = torch.cat((input_ids, torch.cat((pad_token, input_ids[:, 1:]), dim=1)), dim=0)
pad_mask = torch.zeros((1, 1), dtype=attention_mask.dtype, device=attention_mask.device)
attention_mask = torch.cat((attention_mask, torch.cat((pad_mask, attention_mask[:, 1:]), dim=1)), dim=0)
position_ids = torch.clamp(torch.cumsum(attention_mask, dim=-1) - 1, min=0)
config.use_cache = True
config.is_decoder = True
model = model_class(config).to(torch_device).eval()
try:
output_position_ids = model.generate(
input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
max_new_tokens=10
)
except ValueError as exc:
if "The following `model_kwargs` are not used by the model: ['position_ids']" in str(exc):
self.skipTest("This model does not support position_ids")
else:
raise
output_no_position_ids = model.generate(
input_ids,
attention_mask=attention_mask,
max_new_tokens=10
)
self.assertListEqual(output_no_position_ids.tolist(), output_position_ids.tolist())
```
### The fix
There are two root causes for this:
1. `position_ids` is rejected in some models when it is passed (e.g. see [here](https://github.com/huggingface/transformers/blob/3c00b885b92fbcd0e7451e56ccf424a2d5a19bbb/src/transformers/models/gpt2/modeling_gpt2.py#L1022)). These models often assume no padding when `position_ids` is rejected.
2. `position_ids` is never updated, so it is only correct when created from scratch (=not passed).
As such, a fix to this problem should consist of updating `position_ids` in `generate`, with `prepare_inputs_for_generation` only creating new `position_ids` when they don't exist.
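For illustration, the update could look roughly like this inside `generate`'s model-kwargs bookkeeping (the helper name and exact placement are assumptions):
```python
import torch

def update_position_ids(model_kwargs: dict) -> dict:
    # If the user supplied position_ids, extend them by one position per
    # generated token instead of leaving them frozen at the prompt length.
    if model_kwargs.get("position_ids") is not None:
        position_ids = model_kwargs["position_ids"]
        next_positions = position_ids[:, -1:] + 1  # one new token per batch row
        model_kwargs["position_ids"] = torch.cat([position_ids, next_positions], dim=-1)
    return model_kwargs
```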
The test pasted above should be part of our tests after fixing the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29149/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29148/comments | https://api.github.com/repos/huggingface/transformers/issues/29148/events | https://github.com/huggingface/transformers/pull/29148 | 2,144,911,415 | PR_kwDOCUB6oc5nbILV | 29,148 | Token level timestamps for long-form generation in Whisper | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29148). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
Continuation of PR #28984. Adds token-level timestamps for long-form generation. The previous PR took quite a different approach to adding timestamps, specifically calling `extract_timestamps` for each segment and each batch separately. I believe it can be done in one batch and then divided into segments the same way the sequences are divided.
The final timestamps are already aligned with the total length, so there is no need to add a start_time for each segment. However, I am not sure if that is what we want to have, so I can remove this "total duration alignment" if needed.
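For illustration, the batched segmentation described above could look roughly like this (all names here are assumptions, not the actual Whisper internals):
```python
# Token-level timestamps are extracted once for the whole batch, then sliced
# per segment with the same boundaries used to slice the token sequences.
def split_into_segments(sequences, token_timestamps, segment_boundaries):
    segments = []
    for tokens, timestamps, cuts in zip(sequences, token_timestamps, segment_boundaries):
        segments.append(
            [
                {"tokens": tokens[start:end], "token_timestamps": timestamps[start:end]}
                for start, end in cuts
            ]
        )
    return segments
```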
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
@patrickvonplaten
@gante ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29148",
"html_url": "https://github.com/huggingface/transformers/pull/29148",
"diff_url": "https://github.com/huggingface/transformers/pull/29148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29148.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29147/comments | https://api.github.com/repos/huggingface/transformers/issues/29147/events | https://github.com/huggingface/transformers/pull/29147 | 2,144,785,389 | PR_kwDOCUB6oc5nasd- | 29,147 | Fix drop path being ignored in DINOv2 | {
"login": "fepegar",
"id": 12688084,
"node_id": "MDQ6VXNlcjEyNjg4MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12688084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fepegar",
"html_url": "https://github.com/fepegar",
"followers_url": "https://api.github.com/users/fepegar/followers",
"following_url": "https://api.github.com/users/fepegar/following{/other_user}",
"gists_url": "https://api.github.com/users/fepegar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fepegar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fepegar/subscriptions",
"organizations_url": "https://api.github.com/users/fepegar/orgs",
"repos_url": "https://api.github.com/users/fepegar/repos",
"events_url": "https://api.github.com/users/fepegar/events{/privacy}",
"received_events_url": "https://api.github.com/users/fepegar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reviewing, @amyeroberts!"
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
A `drop_path_rate` parameter exists in the DINOv2 model and is propagated all the way down to the DINOv2 layers, but it is never used. This PR addresses this by applying the drop path layers in the `forward` pass of the DINOv2 layers and removing an (I think) unnecessary extra instantiation.
Fixes #29140.
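For context, the standard stochastic-depth helper (as used in timm-style implementations) randomly zeroes the whole residual branch per sample during training; the fix simply wires the already-instantiated layer into the residual additions. A sketch of the helper:
```python
import torch

def drop_path(x: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
    """Stochastic depth: randomly drop the residual branch, per sample."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast over all remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # 0.0 or 1.0 per sample
    return x.div(keep_prob) * random_tensor

# Conceptual usage on each residual branch (attribute names are assumptions):
# hidden_states = residual + drop_path(branch_output, drop_prob, self.training)
```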
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @NielsRogge @molbap
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29147",
"html_url": "https://github.com/huggingface/transformers/pull/29147",
"diff_url": "https://github.com/huggingface/transformers/pull/29147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29147.patch",
"merged_at": 1708450319000
} |
https://api.github.com/repos/huggingface/transformers/issues/29146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29146/comments | https://api.github.com/repos/huggingface/transformers/issues/29146/events | https://github.com/huggingface/transformers/pull/29146 | 2,144,586,510 | PR_kwDOCUB6oc5naAbp | 29,146 | Generate: missing generation config eos token setting in encoder-decoder tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
These tests were failing with low likelihood, all for the same reason as fixed in [this recent PR](https://github.com/huggingface/transformers/pull/28923): there should be no EOS token to enable endless generation, but the generation config still had the default value.
I couldn't find more occurrences of this pattern.
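For reference, the pattern being fixed looks roughly like this in the tests (the model class here is an arbitrary choice for illustration):
```python
from transformers import GPT2Config, GPT2LMHeadModel

# Both the model config and the generation config must clear the EOS token;
# otherwise generation can legitimately stop early and length checks flake.
config = GPT2Config(eos_token_id=None)        # model-side default removed
model = GPT2LMHeadModel(config)
model.generation_config.eos_token_id = None   # generation-side default: the missing piece
```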
Example of a failed run fixed by this PR: https://app.circleci.com/jobs/github/huggingface/transformers/1099434 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29146",
"html_url": "https://github.com/huggingface/transformers/pull/29146",
"diff_url": "https://github.com/huggingface/transformers/pull/29146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29146.patch",
"merged_at": 1708445871000
} |
https://api.github.com/repos/huggingface/transformers/issues/29145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29145/comments | https://api.github.com/repos/huggingface/transformers/issues/29145/events | https://github.com/huggingface/transformers/issues/29145 | 2,144,556,865 | I_kwDOCUB6oc5_01dB | 29,145 | AI2 Olmo 7B does not support Flash-Attention 2.0. ValueError: OLMoForCausalLM does not support Flash Attention 2.0 yet. | {
"login": "KaifAhmad1",
"id": 98801504,
"node_id": "U_kgDOBeOXYA",
"avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaifAhmad1",
"html_url": "https://github.com/KaifAhmad1",
"followers_url": "https://api.github.com/users/KaifAhmad1/followers",
"following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}",
"gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions",
"organizations_url": "https://api.github.com/users/KaifAhmad1/orgs",
"repos_url": "https://api.github.com/users/KaifAhmad1/repos",
"events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaifAhmad1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | ### Model description
Model Name: allenai/OLMo-7B
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29145/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29144/comments | https://api.github.com/repos/huggingface/transformers/issues/29144/events | https://github.com/huggingface/transformers/pull/29144 | 2,144,483,260 | PR_kwDOCUB6oc5nZpun | 29,144 | bug-fix: avoid 'Expected all tensors to be on the same device' error when doing multi-GPU training | {
"login": "kallewoof",
"id": 250224,
"node_id": "MDQ6VXNlcjI1MDIyNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/250224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kallewoof",
"html_url": "https://github.com/kallewoof",
"followers_url": "https://api.github.com/users/kallewoof/followers",
"following_url": "https://api.github.com/users/kallewoof/following{/other_user}",
"gists_url": "https://api.github.com/users/kallewoof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kallewoof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kallewoof/subscriptions",
"organizations_url": "https://api.github.com/users/kallewoof/orgs",
"repos_url": "https://api.github.com/users/kallewoof/repos",
"events_url": "https://api.github.com/users/kallewoof/events{/privacy}",
"received_events_url": "https://api.github.com/users/kallewoof/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | NONE | null | When doing DPO training, if the model has been split over multiple GPUs, the `tr_loss` and the `tr_loss_step` end up on different devices at some point, resulting in a
```
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1
```
error. This patch makes an explicit copy of the `tr_loss_step` value on the same device as the `tr_loss` value, when necessary.
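Conceptually, the guard amounts to the following (variable names taken from the Trainer loop; the exact placement is an assumption):
```python
import torch

def accumulate_loss(tr_loss: torch.Tensor, tr_loss_step: torch.Tensor) -> torch.Tensor:
    # With a model split across GPUs, the step loss can live on a different
    # device than the running accumulator; move it over before adding.
    if tr_loss.device != tr_loss_step.device:
        tr_loss_step = tr_loss_step.to(tr_loss.device)
    return tr_loss + tr_loss_step
```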
Ping @patrickvonplaten (git blame last touched), @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29144/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29144",
"html_url": "https://github.com/huggingface/transformers/pull/29144",
"diff_url": "https://github.com/huggingface/transformers/pull/29144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29144.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29143/comments | https://api.github.com/repos/huggingface/transformers/issues/29143/events | https://github.com/huggingface/transformers/pull/29143 | 2,144,476,455 | PR_kwDOCUB6oc5nZoPN | 29,143 | Llama: update rope scaling to match static cache changes | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29143). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
(see title :))
Review suggestion:
1. Review changes in Llama
2. Review the rest | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29143",
"html_url": "https://github.com/huggingface/transformers/pull/29143",
"diff_url": "https://github.com/huggingface/transformers/pull/29143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29143.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29142/comments | https://api.github.com/repos/huggingface/transformers/issues/29142/events | https://github.com/huggingface/transformers/pull/29142 | 2,144,430,707 | PR_kwDOCUB6oc5nZeOR | 29,142 | Add training version check for AQLM quantizer. | {
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29142). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Following this [PR](https://github.com/Vahe1994/AQLM/pull/26) from `aqlm` and this [PR](https://github.com/huggingface/peft/pull/1476) from `PEFT`, it is necessary to check whether AQLM supports training. It appears that this check is bypassed when not using the Trainer.
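A hedged sketch of such a version gate (the minimum version and error message are placeholders, not the actual values used by the quantizer):
```python
import importlib.metadata

from packaging import version

AQLM_MIN_TRAINABLE_VERSION = version.parse("1.0.2")  # hypothetical threshold

def check_aqlm_supports_training() -> None:
    aqlm_version = version.parse(importlib.metadata.version("aqlm"))
    if aqlm_version < AQLM_MIN_TRAINABLE_VERSION:
        raise ValueError(
            f"aqlm>={AQLM_MIN_TRAINABLE_VERSION} is required for training, found {aqlm_version}."
        )
```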
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29142",
"html_url": "https://github.com/huggingface/transformers/pull/29142",
"diff_url": "https://github.com/huggingface/transformers/pull/29142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29142.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29141/comments | https://api.github.com/repos/huggingface/transformers/issues/29141/events | https://github.com/huggingface/transformers/pull/29141 | 2,144,232,619 | PR_kwDOCUB6oc5nYyzq | 29,141 | Save (circleci) cache at the end of a job | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29141). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
This way, `pytest` runs before the cache is saved, so we have access to the test results earlier in the case where only a partial cache (or no cache) was loaded.
"url": "https://api.github.com/repos/huggingface/transformers/issues/29141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29141",
"html_url": "https://github.com/huggingface/transformers/pull/29141",
"diff_url": "https://github.com/huggingface/transformers/pull/29141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29141.patch",
"merged_at": 1708435896000
} |
https://api.github.com/repos/huggingface/transformers/issues/29140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29140/comments | https://api.github.com/repos/huggingface/transformers/issues/29140/events | https://github.com/huggingface/transformers/issues/29140 | 2,144,160,231 | I_kwDOCUB6oc5_zUnn | 29,140 | Drop path is ignored in DINOv2 | {
"login": "fepegar",
"id": 12688084,
"node_id": "MDQ6VXNlcjEyNjg4MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/12688084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fepegar",
"html_url": "https://github.com/fepegar",
"followers_url": "https://api.github.com/users/fepegar/followers",
"following_url": "https://api.github.com/users/fepegar/following{/other_user}",
"gists_url": "https://api.github.com/users/fepegar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fepegar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fepegar/subscriptions",
"organizations_url": "https://api.github.com/users/fepegar/orgs",
"repos_url": "https://api.github.com/users/fepegar/repos",
"events_url": "https://api.github.com/users/fepegar/events{/privacy}",
"received_events_url": "https://api.github.com/users/fepegar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, thanks for the issue! I've checked out your branch, from what I'm seeing tests are passing on your fix, would you mind opening a PR? \r\nAlso, since this will affect training, do you have a script that compares both in a training scenario? AFAIK current integration tests for Dinov2 are not in a training setting.",
"Hi @molbap. Thanks for your response! I've created\r\n- #29147.\r\n\r\nThe high-level effects of these changes would take quite a lot of work to measure, but here's a little snippet:\r\n\r\n```python\r\n>>> import torch\r\n>>> from transformers import Dinov2Model\r\n>>> \r\n>>> torch.set_grad_enabled(False)\r\n>>> torch.manual_seed(0)\r\n>>> \r\n>>> model = Dinov2Model.from_pretrained(\"facebook/dinov2-base\", drop_path_rate=0.3)\r\n>>> model.train()\r\n>>> \r\n>>> x = torch.rand(1, 3, 224, 224)\r\n>>> out_1 = model(x)\r\n>>> out_2 = model(x)\r\n>>> torch.all(out_1.last_hidden_state == out_2.last_hidden_state)\r\n```\r\n\r\nThe output is `tensor(True)` in `main`, indicating that the depth is deterministic because the drop path rate is not being used, where it's `tensor(False)` in my branch due to stochastic depth being properly enabled."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.38.0.dev0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: not necessarily
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts @NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm not getting any errors, and I think sharing a script doesn't make sense in this case.
The issue is simply that two `Dinov2DropPath` layers are being instantiated in the `Dinov2Layer`:
https://github.com/huggingface/transformers/blob/7d312ad2e9473cd3a0ea3e9b206b8ed3c147e9be/src/transformers/models/dinov2/modeling_dinov2.py#L374-L392
But they're not being used anywhere else:
https://github.com/huggingface/transformers/blob/7d312ad2e9473cd3a0ea3e9b206b8ed3c147e9be/src/transformers/models/dinov2/modeling_dinov2.py#L394-L423
### Expected behavior
These layers should probably not be ignored. Moreover, I think there's no reason to instantiate two different ones.
I've implemented a fix in https://github.com/fepegar/transformers/pull/1/files. Please let me know if you'd like me to open a PR here.
I've tried running `pytest`, but I'm getting some `pytest`-related errors (not failing tests). Happy to report that somewhere else as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29139/comments | https://api.github.com/repos/huggingface/transformers/issues/29139/events | https://github.com/huggingface/transformers/issues/29139 | 2,144,132,992 | I_kwDOCUB6oc5_zN-A | 29,139 | past_key_values for SeamlessM4Tv2ForSpeechToText is not working as expected | {
"login": "vapemaster-kz",
"id": 65128133,
"node_id": "MDQ6VXNlcjY1MTI4MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/65128133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vapemaster-kz",
"html_url": "https://github.com/vapemaster-kz",
"followers_url": "https://api.github.com/users/vapemaster-kz/followers",
"following_url": "https://api.github.com/users/vapemaster-kz/following{/other_user}",
"gists_url": "https://api.github.com/users/vapemaster-kz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vapemaster-kz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vapemaster-kz/subscriptions",
"organizations_url": "https://api.github.com/users/vapemaster-kz/orgs",
"repos_url": "https://api.github.com/users/vapemaster-kz/repos",
"events_url": "https://api.github.com/users/vapemaster-kz/events{/privacy}",
"received_events_url": "https://api.github.com/users/vapemaster-kz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ylacombe "
] | 1,708 | 1,708 | null | NONE | null | ### System Info
transformers version: 4.37.2
python verison: 3.8.6.
OS: Windows 11
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have segments of audio, and I would like to pass past_key_values between them. I expected the transcription quality to increase, but instead it became unreadable.
```python
from transformers import AutoProcessor, SeamlessM4Tv2ForSpeechToText

processor = AutoProcessor.from_pretrained(path_to_model)
model = SeamlessM4Tv2ForSpeechToText.from_pretrained(path_to_model)

audio_chunks = [audio_segments]  # VAD-segmented audio, sampled at 16 kHz
past_key_values = None
for i in range(5):
    audio_inputs = processor(audios=audio_chunks[i], return_tensors="pt", sampling_rate=16_000)
    output = model.generate(**audio_inputs, tgt_lang="rus", repetition_penalty=1.1, return_dict_in_generate=True, past_key_values=past_key_values)
    tmp_result = processor.decode(output[0][0], skip_special_tokens=True)
    # Carry the cache over to the next chunk
    past_key_values = output['past_key_values']
```
### Expected behavior
The transcription quality is supposed to increase when I pass past_key_values (or at least stay similar to the quality obtained with past_key_values=None).
The audio is the same. In other words, I had some audio, applied VAD to segment it into batches, and then fed these segments to the model one by one.
"url": "https://api.github.com/repos/huggingface/transformers/issues/29139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29139/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29138/comments | https://api.github.com/repos/huggingface/transformers/issues/29138/events | https://github.com/huggingface/transformers/pull/29138 | 2,144,115,768 | PR_kwDOCUB6oc5nYZN3 | 29,138 | Fix ROPE embeddings for LLama | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
This [test](https://app.circleci.com/pipelines/github/huggingface/transformers/84847/workflows/2a5e5769-9431-4e2b-babb-81a112558a97/jobs/1098065) failed on my PR, so I checked the reason. I found that the changes introduced to make Llama compile-compatible are causing the issue.
The fixes here were tested with fullgraph compile; compilation still works without graph breaks. Additionally, the failing [test](https://app.circleci.com/pipelines/github/huggingface/transformers/84847/workflows/2a5e5769-9431-4e2b-babb-81a112558a97/jobs/1098065) was run 500 times. I found that, aside from the rope embeddings, the cause of the test failure was in SDPA attention. I cannot say exactly what the reason is, but running the test 500 times gives 95% success in SDPA and 100% success in eager, using the fixes introduced in this PR. Prior to these fixes, the tests were running with 90% success for both attentions.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29138",
"html_url": "https://github.com/huggingface/transformers/pull/29138",
"diff_url": "https://github.com/huggingface/transformers/pull/29138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29138.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29137/comments | https://api.github.com/repos/huggingface/transformers/issues/29137/events | https://github.com/huggingface/transformers/issues/29137 | 2,144,069,859 | I_kwDOCUB6oc5_y-jj | 29,137 | transformers.AutoTokenizer.from_pretrained( ... use_Fast=False) fails with 'TypeError: not a string' for some tokenizers | {
"login": "Jeronymous",
"id": 22522728,
"node_id": "MDQ6VXNlcjIyNTIyNzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22522728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeronymous",
"html_url": "https://github.com/Jeronymous",
"followers_url": "https://api.github.com/users/Jeronymous/followers",
"following_url": "https://api.github.com/users/Jeronymous/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeronymous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jeronymous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeronymous/subscriptions",
"organizations_url": "https://api.github.com/users/Jeronymous/orgs",
"repos_url": "https://api.github.com/users/Jeronymous/repos",
"events_url": "https://api.github.com/users/Jeronymous/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jeronymous/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker ",
"Hey! Thanks for reporting. \r\n`tokenizer.Load(self.vocab_file)` seems to be the issue here. If you check the repo it does not have the `tokenizer.model` .\r\nYou should raise the issue there! \r\n",
"Thanks @ArthurZucker 👍 "
] | 1,708 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Also I tried the following versions:
- `tokenizers` version: 0.15.0 and 0.15.2 (latest)
- `sentencepiece` version: 0.1.99 and 0.2.0 (latest)
### Who can help?
[ArthurZucker](https://github.com/ArthurZucker)
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(that may be a duplicate of https://github.com/huggingface/transformers/issues/27845)
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("croissantllm/CroissantLLMBase", use_fast=False)
```
This fails with
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-9fe531439285> in <module>
1 import transformers
----> 2 tokenizer = transformers.AutoTokenizer.from_pretrained("croissantllm/CroissantLLMBase", use_fast=False)
~/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
812 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
813 )
--> 814 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
815
816 # Otherwise we have to be creative.
~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
2027 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
2028
-> 2029 return cls._from_pretrained(
2030 resolved_vocab_files,
2031 pretrained_model_name_or_path,
~/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, token, cache_dir, local_files_only, _commit_hash, _is_local, *init_inputs, **kwargs)
2259 # Instantiate the tokenizer.
2260 try:
-> 2261 tokenizer = cls(*init_inputs, **init_kwargs)
2262 except OSError:
2263 raise OSError(
~/.local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py in __init__(self, vocab_file, unk_token, bos_token, eos_token, pad_token, sp_model_kwargs, add_bos_token, add_eos_token, clean_up_tokenization_spaces, use_default_system_prompt, spaces_between_special_tokens, legacy, **kwargs)
176 self.add_eos_token = add_eos_token
177 self.use_default_system_prompt = use_default_system_prompt
--> 178 self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
179
180 super().__init__(
~/.local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py in get_spm_processor(self, from_slow)
201 tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs)
202 if self.legacy or from_slow: # no dependency on protobuf
--> 203 tokenizer.Load(self.vocab_file)
204 return tokenizer
205
/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto)
903 if model_proto:
904 return self.LoadFromSerializedProto(model_proto)
--> 905 return self.LoadFromFile(model_file)
906
907
/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py in LoadFromFile(self, arg)
308
309 def LoadFromFile(self, arg):
--> 310 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
311
312 def _EncodeAsIds(self, text, enable_sampling, nbest_size, alpha, add_bos, add_eos, reverse, emit_unk_piece):
TypeError: not a string
```
### Expected behavior
I would expect that tokenizer to load.
(Note: I had this error while investigating why the fast tokenizer does not scale well with the text length, but this is another issue) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29136/comments | https://api.github.com/repos/huggingface/transformers/issues/29136/events | https://github.com/huggingface/transformers/pull/29136 | 2,144,048,828 | PR_kwDOCUB6oc5nYKjd | 29,136 | Generate: low memory tests are flaky | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29136). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts #29109 seems to have fixed most of the issue (this test does compare batched vs unbatched generation under the hood, which the PR linked above fixes)\r\n\r\nThe root issue about this and other tests being non-deterministic tests persists, though :) I'm going to close the PR and move the discussion to slack at a future time :)"
] | 1,708 | 1,708 | null | MEMBER | null | # What does this PR do?
As identified by @molbap -- generate tests with the `low_memory` flag are flaky. The full reason is the same as explained in [this comment](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).
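For reference, a minimal sketch of the kind of check these tests run; the model and prompt are placeholders, and `low_memory` is the flag under discussion (contrastive search supports it):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# The tests assert that enabling low_memory does not change the generated tokens.
kwargs = dict(penalty_alpha=0.6, top_k=4, max_new_tokens=8)
out_default = model.generate(**inputs, low_memory=False, **kwargs)
out_low_mem = model.generate(**inputs, low_memory=True, **kwargs)
assert torch.equal(out_default, out_low_mem)  # this is the assertion that flakes
```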
The error likelihood is low (~3%), but it is still quite disruptive for transformers devs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29136",
"html_url": "https://github.com/huggingface/transformers/pull/29136",
"diff_url": "https://github.com/huggingface/transformers/pull/29136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29136.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29135/comments | https://api.github.com/repos/huggingface/transformers/issues/29135/events | https://github.com/huggingface/transformers/pull/29135 | 2,144,037,386 | PR_kwDOCUB6oc5nYICS | 29,135 | Revert low cpu mem tie weights | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29135). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sounds good, thanks for taking care of this!"
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Reverts #28948 and #29043
See relevant comment: https://github.com/huggingface/transformers/pull/29110#issuecomment-1953847826
cc @hackyon @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29135/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29135",
"html_url": "https://github.com/huggingface/transformers/pull/29135",
"diff_url": "https://github.com/huggingface/transformers/pull/29135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29135.patch",
"merged_at": 1708430807000
} |
https://api.github.com/repos/huggingface/transformers/issues/29134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29134/comments | https://api.github.com/repos/huggingface/transformers/issues/29134/events | https://github.com/huggingface/transformers/pull/29134 | 2,143,960,967 | PR_kwDOCUB6oc5nX3V4 | 29,134 | Add generate kwargs to VQA pipeline | {
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29134). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title.
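A hedged usage sketch; the exact plumbing depends on the final PR, but the intent is to forward extra kwargs to `generate` for generative VQA models:
```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")

# Assuming the PR forwards generation kwargs, something like:
out = vqa(image="path/to/image.png", question="How many cats are there?", max_new_tokens=10)
```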
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29134",
"html_url": "https://github.com/huggingface/transformers/pull/29134",
"diff_url": "https://github.com/huggingface/transformers/pull/29134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29134.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29133/comments | https://api.github.com/repos/huggingface/transformers/issues/29133/events | https://github.com/huggingface/transformers/pull/29133 | 2,143,951,741 | PR_kwDOCUB6oc5nX1Va | 29,133 | [`cuda kernels`] only compile them when initializing | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'll make sure of that before merging! Testing now!",
"```bash\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head - AssertionError: False is not true\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_equivalence_cpu_gpu - AssertionError: assert False\r\nFAILED tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_with_box_refine_two_stage - AssertionError: False is not true\r\n```\r\nfailing. Logits do no match, failing on main as well\r\n\r\n```bash \r\nFAILED tests/models/deta/test_modeling_deta.py::DetaModelIntegrationTests::test_inference_object_detection_head - AssertionError: False is not true\r\nFAILED tests/models/deta/test_modeling_deta.py::DetaModelIntegrationTests::test_inference_object_detection_head_swin_backbone - AssertionError: False is not true\r\n```\r\nfailing as well on main. Merging\r\n\r\nYoso is alright"
] | 1,708 | 1,708 | 1,708 | COLLABORATOR | null | # What does this PR do?
Fixes #29130: from 1 min down to 6 seconds | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29133/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29133",
"html_url": "https://github.com/huggingface/transformers/pull/29133",
"diff_url": "https://github.com/huggingface/transformers/pull/29133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29133.patch",
"merged_at": 1708429139000
} |
https://api.github.com/repos/huggingface/transformers/issues/29132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29132/comments | https://api.github.com/repos/huggingface/transformers/issues/29132/events | https://github.com/huggingface/transformers/issues/29132 | 2,143,872,350 | I_kwDOCUB6oc5_yOVe | 29,132 | SPAM | {
"login": "cook9019",
"id": 141466977,
"node_id": "U_kgDOCG6dYQ",
"avatar_url": "https://avatars.githubusercontent.com/u/141466977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cook9019",
"html_url": "https://github.com/cook9019",
"followers_url": "https://api.github.com/users/cook9019/followers",
"following_url": "https://api.github.com/users/cook9019/following{/other_user}",
"gists_url": "https://api.github.com/users/cook9019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cook9019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cook9019/subscriptions",
"organizations_url": "https://api.github.com/users/cook9019/orgs",
"repos_url": "https://api.github.com/users/cook9019/repos",
"events_url": "https://api.github.com/users/cook9019/events{/privacy}",
"received_events_url": "https://api.github.com/users/cook9019/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29132/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29131/comments | https://api.github.com/repos/huggingface/transformers/issues/29131/events | https://github.com/huggingface/transformers/pull/29131 | 2,143,812,725 | PR_kwDOCUB6oc5nXWfA | 29,131 | added the max_matching_ngram_size to GenerationConfig | {
"login": "mosheber",
"id": 22236370,
"node_id": "MDQ6VXNlcjIyMjM2Mzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosheber",
"html_url": "https://github.com/mosheber",
"followers_url": "https://api.github.com/users/mosheber/followers",
"following_url": "https://api.github.com/users/mosheber/following{/other_user}",
"gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosheber/subscriptions",
"organizations_url": "https://api.github.com/users/mosheber/orgs",
"repos_url": "https://api.github.com/users/mosheber/repos",
"events_url": "https://api.github.com/users/mosheber/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosheber/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
* Added the max_matching_ngram_size parameter to GenerationConfig, for the PromptLookupCandidateGenerator.
* Passed max_matching_ngram_size to the __init__ of PromptLookupCandidateGenerator in _get_candidate_generator when it is specified (usage sketch below).
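A hedged usage sketch, assuming this PR is merged: `prompt_lookup_num_tokens` already enables `PromptLookupCandidateGenerator`, and `max_matching_ngram_size` is the field this PR exposes:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox jumps over the lazy dog. The quick brown", return_tensors="pt")

outputs = model.generate(
    **inputs,
    prompt_lookup_num_tokens=10,   # turns on prompt lookup decoding
    max_matching_ngram_size=2,     # new in this PR: bounds the n-gram match length
    max_new_tokens=20,
)
```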
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante , would appreciate it if you could give this PR a glance, and thank you in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29131",
"html_url": "https://github.com/huggingface/transformers/pull/29131",
"diff_url": "https://github.com/huggingface/transformers/pull/29131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29131.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29130/comments | https://api.github.com/repos/huggingface/transformers/issues/29130/events | https://github.com/huggingface/transformers/issues/29130 | 2,143,788,296 | I_kwDOCUB6oc5_x50I | 29,130 | Move kernel compilation to init rather than at import stage | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | ### Feature request
Some models like Deformable DETR rely on custom CUDA kernels to be compiled as seen [here](https://github.com/huggingface/transformers/blob/f7ef7cec6c6c162087421f36a17eabdbb223579d/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L54).
Currently these are compiled when importing the Transformers library, but this needs to happen later, when initializing the models.
All custom CUDA kernels are defined here: https://github.com/huggingface/transformers/tree/main/src/transformers/kernels
### Motivation
This is pretty important to fix: currently, running `make fixup` on machines that have `ninja` installed will compile all these kernels before running the quality checks, making it super slow. Thanks @younesbelkada for the info.
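A minimal sketch of the lazy-loading pattern being requested here; function, class, and file names are illustrative, not the actual fix:
```python
from torch.utils.cpp_extension import load

_kernels = None  # compiled on first use instead of at import time

def load_cuda_kernels(source_files):
    # Compile the custom op the first time a model that needs it is built.
    global _kernels
    if _kernels is None:
        _kernels = load("MultiScaleDeformableAttention", sources=source_files)
    return _kernels

class DeformableDetrModelSketch:
    def __init__(self, config, kernel_sources):
        # Moved here from module import time, so `import transformers` and
        # `make fixup` stay fast on machines with ninja installed.
        self.kernels = load_cuda_kernels(kernel_sources)
```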
### Your contribution
Not sure I can help with this; the current workaround is simply to remove `ninja` from the environment | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29130/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29130/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29129/comments | https://api.github.com/repos/huggingface/transformers/issues/29129/events | https://github.com/huggingface/transformers/issues/29129 | 2,143,773,084 | I_kwDOCUB6oc5_x2Gc | 29,129 | Flash attention implementation with BERT base model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Not that expert but I suggest you can try bettertransformer for extreme speed up. ( In my knowledge that flash-attn is mainly focused on kv cache which is not exist on Bert-like model in most cases. )",
"> Not that expert but I suggest you can try bettertransformer for extreme speed up. ( In my knowledge that flash-attn is mainly focused on kv cache which is not exist on Bert-like model in most cases. )\r\n\r\nmosaic bert is based on flash attention. ",
"Hi @rahul-k01, \r\n\r\nPlease make sure to only tag a limited set of relevant people when opening an issue - everyone is very busy and if this was done on all issues we wouldn't be able to meaningfully address our notifications. \r\n\r\nFlashAttention isn't implemented yet for BERT. There's an open issue where you can track which models have it added and on-going work: #26350. At the moment, it seems @filippo82 is working on the addition for BERT. ",
"> Hi @rahul-k01,\r\n> \r\n> Please make sure to only tag a limited set of relevant people when opening an issue - everyone is very busy and if this was done on all issues we wouldn't be able to meaningfully address our notifications.\r\n> \r\n> FlashAttention isn't implemented yet for BERT. There's an open issue where you can track which models have it added and on-going work: #26350. At the moment, it seems @filippo82 is working on the addition for BERT.\r\n\r\nThanks for your response\r\n"
] | 1,708 | 1,708 | null | NONE | null | ### Model description
Hello, and thanks to the community.
I am trying to replace standard attention with flash attention in the BERT base model. Please help; I am not able to find any tutorial or discussion about this.
Or just give some directions on how to do that. I have the idea of setting the attention dropout probability to 0; it makes sense, but I am not sure how it will work.
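One hedged direction, not an official recipe: PyTorch's `scaled_dot_product_attention` can dispatch to a FlashAttention kernel on supported GPUs, which avoids hand-patching BERT's attention probabilities:
```python
import torch
import torch.nn.functional as F

def flash_style_attention(q, k, v, mask=None, dropout_p=0.0):
    # q, k, v: (batch, num_heads, seq_len, head_dim).
    # On supported hardware/dtypes this uses a FlashAttention kernel, so the
    # full attention-probability matrix is never materialized.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=dropout_p)

q = k = v = torch.randn(2, 12, 128, 64)
out = flash_style_attention(q, k, v)
```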
@tridao
@arthur
@jamaliki
@sorenmc
@LysandreJik @ArthurZucker
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29129/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29128/comments | https://api.github.com/repos/huggingface/transformers/issues/29128/events | https://github.com/huggingface/transformers/issues/29128 | 2,143,692,799 | I_kwDOCUB6oc5_xif_ | 29,128 | bart-large-xsum model: There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight']. | {
"login": "Aisuko",
"id": 8053949,
"node_id": "MDQ6VXNlcjgwNTM5NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8053949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aisuko",
"html_url": "https://github.com/Aisuko",
"followers_url": "https://api.github.com/users/Aisuko/followers",
"following_url": "https://api.github.com/users/Aisuko/following{/other_user}",
"gists_url": "https://api.github.com/users/Aisuko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aisuko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aisuko/subscriptions",
"organizations_url": "https://api.github.com/users/Aisuko/orgs",
"repos_url": "https://api.github.com/users/Aisuko/repos",
"events_url": "https://api.github.com/users/Aisuko/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aisuko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ArthurZucker @younesbelkada ",
"Hey @Aisuko, could you provide a **minimal** reproducer ? That would help use! \r\nAlso note that the `generation parameters` issues can probably be safely ignored. The missing keys is however a bit more problematic! \r\nMight be tied weights that are not tied properly, is `tie_word_embeddings` used ? ",
"Hi, guys. Thanks for your quick response.\r\n\r\nThe minimal code see below, the code only including the steps of processing data and training. And we cat get same result from it. https://www.kaggle.com/code/aisuko/minimal-reproducer-for-issue-29128/notebook\r\n\r\nThe embedding process without using `tie_word_embeddings` parameter.\r\n\r\n\r\n## libraries\r\n\r\n```\r\n!pip install transformers==4.37.2\r\n!pip install datasets==2.17.0\r\n!pip install evaluate==0.4.1\r\n!pip install rouge-score==0.1.2\r\n```\r\n\r\n## Code\r\n```\r\n# Import libraries\r\nimport os\r\nimport re\r\nimport nltk\r\nimport pandas as pd\r\nimport numpy as np\r\nimport warnings\r\nfrom datasets import Dataset\r\nfrom datasets import load_metric\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\nfrom transformers import BartForConditionalGeneration\r\nfrom transformers import DataCollatorForSeq2Seq\r\nfrom transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer\r\n\r\nos.environ['MODEL']='facebook/bart-large-xsum'\r\nos.environ[\"WANDB_NAME\"] = \"ft-facebook-bart-large-xsum-on-samsum\"\r\n\r\nwarnings.filterwarnings('ignore')\r\n\r\n# Loading and preprocessing data from https://www.kaggle.com/datasets/nileshmalode1/samsum-dataset-text-summarization\r\ntrain=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-train.csv')\r\ntest=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-test.csv')\r\nval=pd.read_csv('/kaggle/input/samsum-dataset-text-summarization/samsum-validation.csv')\r\n\r\ndef clean_tags(text):\r\n clean=re.compile('<.*?>') # compiling tags\r\n clean=re.sub(clean, '', text) # replacing tags text by an empty string\r\n \r\n # removing empty dialogues\r\n clean='\\n'.join([line for line in clean.split('\\n') if not re.match('.*:\\s*$', line)])\r\n return clean\r\n\r\ndef clean_df(df, cols):\r\n for col in cols:\r\n df[col]=df[col].fillna('').apply(clean_tags)\r\n return df\r\n\r\ntrain=clean_df(train, ['dialogue','summary'])\r\ntest=clean_df(test, ['dialogue', 'summary'])\r\nval=clean_df(val, ['dialogue', 'summary'])\r\n\r\ntrain_ds=Dataset.from_pandas(train)\r\ntest_ds=Dataset.from_pandas(test)\r\nval_ds=Dataset.from_pandas(val)\r\n\r\n# Tokenizer\r\ntokenizer=BartTokenizer.from_pretrained(os.getenv('MODEL'))\r\n\r\ndef preprocess_func(example):\r\n # Iterating over every `dialogue` in the datset and saving them as input to the model\r\n inputs=[doc for doc in example['dialogue']]\r\n # we use tokenizer convert the input dialogues into tokens that can be easily understood by the BART model.\r\n # The truncation=True parameter ensures that all dialogues have a maximum number of 1024 tokens, as defined by the `max_length` parameter\r\n model_inputs=tokenizer(inputs, max_length=1024, truncation=True)\r\n \r\n # Setup the tokenizer for targets\r\n with tokenizer.as_target_tokenizer():\r\n # we tokenizes the target variable, which is our summaries. 
And we expect summaries to be a much shorter text than that of dialogues max_length=128\r\n labels=tokenizer(example['summary'], max_length=128, truncation=True)\r\n \r\n # we adding the tokenized labels to the preprocessed dataset, alongside the tokenized inputs.\r\n model_inputs['labels']=labels['input_ids']\r\n return model_inputs\r\n\r\n\r\ntokenized_train= train_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\ntokenized_test=test_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\ntokenized_val=val_ds.map(preprocess_func, batched=True, remove_columns=['id', 'dialogue', 'summary'])\r\n\r\n# Loading the model\r\nmodel=BartForConditionalGeneration.from_pretrained(os.getenv('MODEL'))\r\n\r\n# Loading DataCollator\r\ndata_collator= DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)\r\n\r\n# Customizing metrics\r\nmetric=load_metric('rouge')\r\n\r\nnltk.download('punkt') # this divides a text into a list of sentences\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels=eval_pred # obtaining predictions and true labels\r\n \r\n # decoding predictions\r\n decoded_preds=tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n \r\n # obtaining the true labels tokens, while eliminating any possible masked token (i.e: label=-100)\r\n labels=np.where(labels!=-100, labels, tokenizer.pad_token_id)\r\n decoded_labels=tokenizer.batch_decode(labels, skip_special_tokens=True)\r\n \r\n # rouge expects a newline after each sentence\r\n decoded_preds=['\\n'.join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]\r\n decoded_labels=['\\n'.join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]\r\n \r\n # computing rouge score\r\n result=metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n result={key: value.mid.fmeasure*100 for key, value in result.items()} # extracting some results\r\n \r\n # add mean-genrated length\r\n prediction_lens=[np.count_nonzero(pred!=tokenizer.pad_token_id) for pred in predictions]\r\n result['gen_len']=np.mean(prediction_lens)\r\n return {k: round(v,4) for k,v in result.items()}\r\n\r\n\r\n# Training\r\ntraining_args=Seq2SeqTrainingArguments(\r\n output_dir=os.getenv('WANDB_NAME'),\r\n evaluation_strategy='epoch',\r\n save_strategy='epoch',\r\n load_best_model_at_end=True,\r\n metric_for_best_model='eval_loss',\r\n seed=42,\r\n learning_rate=2e-5,\r\n max_steps=100,\r\n per_device_train_batch_size=4,\r\n per_device_eval_batch_size=4,\r\n gradient_accumulation_steps=4,\r\n weight_decay=0.01,\r\n save_total_limit=2,\r\n num_train_epochs=1, # only for testing\r\n predict_with_generate=True,\r\n fp16=True,\r\n report_to='none',\r\n)\r\n\r\ntrainer=Seq2SeqTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_train,\r\n eval_dataset=tokenized_test,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics\r\n)\r\n\r\ntrainer.train()\r\n```"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23.dev20240116
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Thanks for your great work.
Please take a look at the notebook below in Kaggle. https://www.kaggle.com/code/aisuko/text-summarization-with-bart-series-llm/notebook
After the training process finishes, it shows the warning message below:
```
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 62, 'min_length': 11, 'early_stopping': True, 'num_beams': 6, 'no_repeat_ngram_size': 3, 'forced_eos_token_id': 2}
There were missing keys in the checkpoint model loaded: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight'].
```
And the fine-tuned model cannot be used for inference. I saw a similar issue: https://github.com/huggingface/transformers/issues/27972
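A hedged workaround sketch, not a confirmed fix: the missing keys are the tied input/output embeddings, which can usually be re-tied after loading (the checkpoint path below is an assumption based on the training arguments above):
```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained(
    "ft-facebook-bart-large-xsum-on-samsum/checkpoint-100"
)
model.tie_weights()  # re-links encoder/decoder embed_tokens and lm_head to the shared embedding
```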
### Expected behavior
No warning issue and I can use the fine-tuned model to do inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29128/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29127/comments | https://api.github.com/repos/huggingface/transformers/issues/29127/events | https://github.com/huggingface/transformers/issues/29127 | 2,143,620,996 | I_kwDOCUB6oc5_xQ-E | 29,127 | err_handle(layoutlmv3): Error message doesn't give much clarity when boxes not containing enough information | {
"login": "Sushaanth-Suresh-Kumar",
"id": 123300765,
"node_id": "U_kgDOB1lrnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123300765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sushaanth-Suresh-Kumar",
"html_url": "https://github.com/Sushaanth-Suresh-Kumar",
"followers_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/followers",
"following_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/orgs",
"repos_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/repos",
"events_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sushaanth-Suresh-Kumar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Would you like to open a PR to improve the error? 🤗 ",
"Sure"
] | 1,708 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.37.2
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
****
Model I am using LayoutLMv3:
when `boxes = [[123, 53], [36, 87], ...]` (basically any list that does not follow the proper format)
by proper format I mean `[[123, 346, 234, 634], [356, 568, 234, 25], ...]`
```python
encoding = processor(
image_1,
text,
boxes=boxes,
max_length=512,
padding="max_length",
truncation=True,
return_tensors="pt"
)
```
It produces this error message:
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (labels in this case) have excessive nesting (inputs type list where type int is expected).
```
**To Reproduce**
Steps to reproduce the behavior:
1. add any list of boxes with not enough values like `boxes = [[123, 53], [36, 87], ...]`
2. when run, it throws the ValueError mentioned above (see the validation sketch below)
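For illustration, a minimal sketch of the validation that would produce the clearer message (placement and wording are mine):
```python
def validate_boxes(boxes):
    # Each box must be [x_min, y_min, x_max, y_max].
    for i, box in enumerate(boxes):
        if len(box) != 4:
            raise ValueError(
                f"boxes[{i}] has {len(box)} values; each box must contain exactly 4 values."
            )

validate_boxes([[123, 346, 234, 634], [356, 568, 234, 25]])  # passes
# validate_boxes([[123, 53], [36, 87]])                      # raises the clearer ValueError
```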
### Expected behavior
Can throw an error saying
```
ValueError: boxes doesn't have enough values inside each box. Each box should contain 4 values
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29127/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29126/comments | https://api.github.com/repos/huggingface/transformers/issues/29126/events | https://github.com/huggingface/transformers/issues/29126 | 2,143,539,045 | I_kwDOCUB6oc5_w89l | 29,126 | WARNING: tokenization mismatch: 43 vs. 44. (ignored) | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @lucasjinreal, \r\n\r\nWithout a code sample to replicate, information about the running environment or more information about the error - including full trackback - there isn't much we can do to help you here."
] | 1,708 | 1,708 | null | NONE | null | Recently, many errors have come up in both the FastChat and LLaVA code bases when using the latest transformers.
WARNING: tokenization mismatch: 43 vs. 44. (ignored)
Why does this happen, and how can I dismiss it? Will it affect the final training result? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29126/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/29125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29125/comments | https://api.github.com/repos/huggingface/transformers/issues/29125/events | https://github.com/huggingface/transformers/pull/29125 | 2,143,504,797 | PR_kwDOCUB6oc5nWUBE | 29,125 | feat: Upgrade Weights & Biases callback | {
"login": "parambharat",
"id": 12809212,
"node_id": "MDQ6VXNlcjEyODA5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/12809212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parambharat",
"html_url": "https://github.com/parambharat",
"followers_url": "https://api.github.com/users/parambharat/followers",
"following_url": "https://api.github.com/users/parambharat/following{/other_user}",
"gists_url": "https://api.github.com/users/parambharat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parambharat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parambharat/subscriptions",
"organizations_url": "https://api.github.com/users/parambharat/orgs",
"repos_url": "https://api.github.com/users/parambharat/repos",
"events_url": "https://api.github.com/users/parambharat/events{/privacy}",
"received_events_url": "https://api.github.com/users/parambharat/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds a few new functionalities to the Weights & Biases Callback
- Logs Peft and Lora Config to wandb if present
- Adds model parameter counts to wandb config and artifact metadata
- Adds on_predict methods to log prediction metrics
- Prints the model architecture to a file alongside the wandb artifact
- Logs initial and final models to the wandb artifact for full reproducibility (see the usage sketch below)
- Adds steps and epoch aliases to checkpoint artifacts
- Here's a [link](https://wandb.ai/parambharat/test-transformers/artifacts/model/model-rg4pcjcv/v3) to the what the logged artifacts look like
- Here's a run [overview page](https://wandb.ai/parambharat/test-transformers/runs/rg4pcjcv/overview?workspace=user-parambharat) with added config and metadata for the run with peft configs logged
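For context, enabling the callback itself is unchanged; a minimal sketch of the existing switches the new behavior hooks into:
```python
import os
from transformers import TrainingArguments

os.environ["WANDB_LOG_MODEL"] = "checkpoint"  # existing switch for logging model artifacts

args = TrainingArguments(
    output_dir="out",
    report_to="wandb",  # routes logs through WandbCallback, which this PR extends
    run_name="my-run",
)
```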
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
## Who can review?
- trainer: @muellerzr and @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29125/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29125",
"html_url": "https://github.com/huggingface/transformers/pull/29125",
"diff_url": "https://github.com/huggingface/transformers/pull/29125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29125.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29124/comments | https://api.github.com/repos/huggingface/transformers/issues/29124/events | https://github.com/huggingface/transformers/pull/29124 | 2,143,420,111 | PR_kwDOCUB6oc5nWBoW | 29,124 | added unrolled whisper_generation.py | {
"login": "robertgshaw2-neuralmagic",
"id": 114415538,
"node_id": "U_kgDOBtHXsg",
"avatar_url": "https://avatars.githubusercontent.com/u/114415538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robertgshaw2-neuralmagic",
"html_url": "https://github.com/robertgshaw2-neuralmagic",
"followers_url": "https://api.github.com/users/robertgshaw2-neuralmagic/followers",
"following_url": "https://api.github.com/users/robertgshaw2-neuralmagic/following{/other_user}",
"gists_url": "https://api.github.com/users/robertgshaw2-neuralmagic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robertgshaw2-neuralmagic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robertgshaw2-neuralmagic/subscriptions",
"organizations_url": "https://api.github.com/users/robertgshaw2-neuralmagic/orgs",
"repos_url": "https://api.github.com/users/robertgshaw2-neuralmagic/repos",
"events_url": "https://api.github.com/users/robertgshaw2-neuralmagic/events{/privacy}",
"received_events_url": "https://api.github.com/users/robertgshaw2-neuralmagic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,708 | 1,708 | 1,708 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29124",
"html_url": "https://github.com/huggingface/transformers/pull/29124",
"diff_url": "https://github.com/huggingface/transformers/pull/29124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29124.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29123/comments | https://api.github.com/repos/huggingface/transformers/issues/29123/events | https://github.com/huggingface/transformers/pull/29123 | 2,143,416,822 | PR_kwDOCUB6oc5nWA8d | 29,123 | [`Core generation`] Let's be less restrictive on the arguments passed to the generation calls. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29123). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Updates generation calls to be less restrictive about the arguments passed to them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29123/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29123",
"html_url": "https://github.com/huggingface/transformers/pull/29123",
"diff_url": "https://github.com/huggingface/transformers/pull/29123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29123.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/29122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29122/comments | https://api.github.com/repos/huggingface/transformers/issues/29122/events | https://github.com/huggingface/transformers/pull/29122 | 2,143,413,555 | PR_kwDOCUB6oc5nWARN | 29,122 | FIX [`bnb` / `tests`] Propagate the changes from #29092 to 4-bit tests | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29122). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
As per title, I overlooked the fix and forgot to push the changes from https://github.com/huggingface/transformers/pull/29092 to the 4-bit tests 😢
cc @amyeroberts @Titus-von-Koeller | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29122",
"html_url": "https://github.com/huggingface/transformers/pull/29122",
"diff_url": "https://github.com/huggingface/transformers/pull/29122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29122.patch",
"merged_at": 1708423875000
} |
https://api.github.com/repos/huggingface/transformers/issues/29121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29121/comments | https://api.github.com/repos/huggingface/transformers/issues/29121/events | https://github.com/huggingface/transformers/issues/29121 | 2,143,187,142 | I_kwDOCUB6oc5_vnDG | 29,121 | AttributeError: 'DistilBertModel' object has no attribute '_use_flash_attention_2' | {
"login": "javilonso",
"id": 31996659,
"node_id": "MDQ6VXNlcjMxOTk2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31996659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javilonso",
"html_url": "https://github.com/javilonso",
"followers_url": "https://api.github.com/users/javilonso/followers",
"following_url": "https://api.github.com/users/javilonso/following{/other_user}",
"gists_url": "https://api.github.com/users/javilonso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/javilonso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javilonso/subscriptions",
"organizations_url": "https://api.github.com/users/javilonso/orgs",
"repos_url": "https://api.github.com/users/javilonso/repos",
"events_url": "https://api.github.com/users/javilonso/events{/privacy}",
"received_events_url": "https://api.github.com/users/javilonso/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @javilonso ! \r\nI quickly tried on transformers main: \r\n```python\r\nfrom transformers import pipeline\r\n\r\nunmasker = pipeline('fill-mask', model='distilbert-base-uncased')\r\nunmasker(\"Hello I'm a [MASK] model.\")\r\n```\r\nBut I did not managed to repro, can you share a snippet to reproduce the issue?\r\nI also tried:\r\n```python\r\nfrom transformers import DistilBertTokenizer, DistilBertModel\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\nmodel = DistilBertModel.from_pretrained(\"distilbert-base-uncased\")\r\ntext = \"Replace me by any text you'd like.\"\r\n\r\nencoded_input = tokenizer(text, return_tensors='pt')\r\noutput = model(**encoded_input)\r\n```\r\nCan you also try on transformers main?",
"Hi, I am also facing the same issue, the code @younesbelkada gave works well on my system. However, I get\r\n `AttributeError: DistilBertModel' object has no attribute '_use_flash_attention_2` \r\nwhen running my prediction with my finetuned distilbert model. I am also running transformers 4.37.2. It works fine on 4.35.2.\r\n\r\nThe error happens when trying to perform the prediction with a local copy of the model:\r\n```python\r\ninputs = tokenizer.encode(text, return_tensors=\"pt\").to(device)\r\nlogits = model(inputs).logits\r\n```\r\n\r\nThis is how I loaded the tokenizer:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n'models/tokenizer', add_prefix_space=True)\r\n```\r\n\r\nI am also loading a saved local copy of the model here:\r\n```python\r\nmodel = torch.load(model_name, map_location=torch.device(device))\r\n```\r\n\r\nHope the information provided is enough! Thanks in advance.\r\n ",
"This issue does not seem to occur when I finetune my model again on transformers 4.38.0.\nI guess the solution would be to update the transformers package."
] | 1,708 | 1,708 | null | NONE | null | ### System Info
Obtaining this error in last transformers 4.37.2, but works correctly in transformers 4.35.2
Simple inference with a finetuned distilbert model.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install transformers 4.37.2
2. Perform inference with model https://huggingface.co/distilbert/distilbert-base-uncased
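A hedged workaround sketch based on the comments above: the attribute is missing because the model object was pickled under an older transformers version, and round-tripping through `save_pretrained`/`from_pretrained` rebuilds it with current attributes (paths are placeholders; swap the Auto class for your task):
```python
import torch
from transformers import AutoModelForSequenceClassification

old_model = torch.load("model.pt", map_location="cpu")  # pickled under 4.35.x
old_model.save_pretrained("distilbert-refreshed")

model = AutoModelForSequenceClassification.from_pretrained("distilbert-refreshed")
```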
### Expected behavior
Inference should go through without errors | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29121/timeline | null | null | null |
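A hedged workaround sketch for the thread above: the `AttributeError` appears because the full model object was pickled with `torch.load` under an older transformers release, so attributes added in later releases (such as `_use_flash_attention_2`) are missing from the unpickled instance. Re-serializing through the library instead of pickling should sidestep this; the checkpoint path and the sequence-classification head below are assumptions for illustration, not details from the thread.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical local directory previously written by save_pretrained;
# swap the Auto class for whichever task head the fine-tuned model uses.
ckpt = "./distilbert-finetuned"

# from_pretrained rebuilds the module class under the installed transformers
# version, so attributes introduced in newer releases exist on the instance.
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits
```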
https://api.github.com/repos/huggingface/transformers/issues/29120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29120/comments | https://api.github.com/repos/huggingface/transformers/issues/29120/events | https://github.com/huggingface/transformers/pull/29120 | 2,143,042,742 | PR_kwDOCUB6oc5nUwcG | 29,120 | Starcoder2 model | {
"login": "jlamypoirier",
"id": 18523627,
"node_id": "MDQ6VXNlcjE4NTIzNjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlamypoirier",
"html_url": "https://github.com/jlamypoirier",
"followers_url": "https://api.github.com/users/jlamypoirier/followers",
"following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}",
"gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions",
"organizations_url": "https://api.github.com/users/jlamypoirier/orgs",
"repos_url": "https://api.github.com/users/jlamypoirier/repos",
"events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlamypoirier/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,708 | 1,708 | null | CONTRIBUTOR | null | The Starcoder2 model, adapted from Mistral. All changes are gated behind configuration options, so Mistral itself is still supported. Main changes:
* Use layer norm (RMSNorm still available as an option)
* Use a standard MLP (gated MLP still available as an option)
* Add back biases (optional)
* Change the (default?) tokenizer class
* Embedding and residual dropout
It does not support absolute position embeddings, so it can't support SantaCoder or StarCoder
Todo:
* Forward changes from #27931, #29027 (and future changes from Feb. 19)
* Documentation
* Copyright
* Point to starcoder2 checkpoint
* Other minor things (see todos)
@younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29120",
"html_url": "https://github.com/huggingface/transformers/pull/29120",
"diff_url": "https://github.com/huggingface/transformers/pull/29120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29120.patch",
"merged_at": null
} |
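Once the checkpoint todo above is resolved, loading should work through the usual Auto classes; no Starcoder2-specific API is needed. A minimal sketch, with the checkpoint name assumed for illustration (the PR only says it will point to the Starcoder2 checkpoint later):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; the final location is still a todo in the PR.
checkpoint = "bigcode/starcoder2-15b"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```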
https://api.github.com/repos/huggingface/transformers/issues/29119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/29119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/29119/comments | https://api.github.com/repos/huggingface/transformers/issues/29119/events | https://github.com/huggingface/transformers/pull/29119 | 2,143,005,049 | PR_kwDOCUB6oc5nUoNF | 29,119 | Generate: unset GenerationConfig parameters do not raise warning | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,708 | 1,708 | 1,708 | MEMBER | null | # What does this PR do?
Thank you @fxmarty for raising [this issue](https://github.com/huggingface/transformers/pull/25381#issuecomment-1952527813).
This PR allows users to unset (= set to `None`) unused parameters to ensure `generation_config.validate()` doesn't raise a warning. Previously, this was not possible when a parameter had a non-`None` default.
For instance, the following snippet would throw a warning before this PR:
```py
from transformers import GenerationConfig
generation_config = GenerationConfig()
generation_config.update(temperature=None)
generation_config.validate()
# "... UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`."
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/29119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/29119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/29119",
"html_url": "https://github.com/huggingface/transformers/pull/29119",
"diff_url": "https://github.com/huggingface/transformers/pull/29119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/29119.patch",
"merged_at": 1708428871000
} |
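The fixed behavior described above can be checked directly: after unsetting a sampling-only parameter, validation is expected to pass silently. A sketch of such a check, escalating any warning to an error so a regression would fail loudly:

```python
import warnings
from transformers import GenerationConfig

generation_config = GenerationConfig()
generation_config.update(temperature=None)  # unset the sampling-only flag

# With this PR, validate() should no longer warn that `temperature` is set
# while `do_sample` is False.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    generation_config.validate()
```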