Dataset columns:

| Column | Type | Values |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k (nullable) |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k (nullable) |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/28860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28860/comments
https://api.github.com/repos/huggingface/transformers/issues/28860/events
https://github.com/huggingface/transformers/issues/28860
2,117,831,973
I_kwDOCUB6oc5-O40l
28,860
Question: How do LLMs learn to be "Generative", as we often describe them?
{ "login": "metalwhale", "id": 45712559, "node_id": "MDQ6VXNlcjQ1NzEyNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/45712559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/metalwhale", "html_url": "https://github.com/metalwhale", "followers_url": "https://api.github.com/users/metalwhale/followers", "following_url": "https://api.github.com/users/metalwhale/following{/other_user}", "gists_url": "https://api.github.com/users/metalwhale/gists{/gist_id}", "starred_url": "https://api.github.com/users/metalwhale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/metalwhale/subscriptions", "organizations_url": "https://api.github.com/users/metalwhale/orgs", "repos_url": "https://api.github.com/users/metalwhale/repos", "events_url": "https://api.github.com/users/metalwhale/events{/privacy}", "received_events_url": "https://api.github.com/users/metalwhale/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "When you run a forward pass, you are always predicting the distribution of the next token. So quite simply, model(input_1) with let's say [3442] will compute the distribution of `x1`. ", "@ArthurZucker \nThank you for your answer!\n\nBut in that case `x1` is no longer the \"first token\", isn't it? The probability of `x1` now is `p(x1|input_1)`, not `p(x1)` IMHO. Am I correct?\n\nWhat I want to know is how we compute the probability of the \"very first token\", a.k.a the token that doesn't have any token before it.", "you can't compute the proba of the first token, it's given by the user. Since it s a constant if you apply your rule, `p(input_1) = 1 `", "In a way, you can say the `p(x1)` is always `1` since `x1 = the input given by the user` it is certain", "@ArthurZucker \r\nYour answer makes everything very clear to me.\r\n\r\nMy understanding of a generative model is that it learns the probability of every \"piece\" of the dataset. In the case of autoregressive progress for LLMs training, they learn about the relation of all the tokens, including the first tokens of each text sequence.\r\n\r\nHowever I guess that's not entirely true: the \"generative aspect\" of LLMs is true only in particular cases if we provide them with the \"initial context\" (the first tokens, or the \"prompt\"). Otherwise they are not generative at all. They are \"conditional generative\", not generative over the entire dataset. Am I right?\r\n\r\nIf I understand what you are saying correctly, we can have a \"truly generative\" LLMs by setting a special token with a probability of 1, that always appears as the first token of every text sequence, something like BOS token. In other words, LLMs trained with the BOS token are \"more generative\" than ones that don't have it. What do you think?", "Iโ€™m not sure. But yes, all decoders are trained with the BOS which serves exactly that purpose: the one input that is always of constant proba. Iโ€™ll close this issue but feel free to discuss ๐Ÿ™‚", "@ArthurZucker Thank you so much for your help!" ]
1,707
1,707
1,707
NONE
null
(Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.) AFAIK to be called "generative", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to achieve this by leveraging the autoregressive method for every token of each input text sequence. For example, with a text sequence of 4 tokens, it can be written as: ``` p(x4,x3,x2,x1) = p(x4|x3,x2,x1) * p(x3|x2,x1) * p(x2|x1) * p(x1) ``` where `x1` denotes the 1st token, `x2` denotes the 2nd token and so on, respectively. I understand the conditional terms `p(x_n|...)` where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token `p(x1)`. How is it calculated? Is it in some configurations of the training process, or in the model architecture, or in the loss function? IMHO, if the model doesn't learn `p(x1)` properly, the entire formula for Bayes' rule cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here? I asked the [same question on `nanoGPT` repo](https://github.com/karpathy/nanoGPT/issues/432) and [on HN](https://news.ycombinator.com/item?id=39249301). I'm also reading Transformer codes from this repo, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? Thank in advance!
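A minimal sketch (again assuming GPT-2 as an illustrative model; not part of the issue) of the chain-rule factorisation written above: the log-probability each position assigns to the following token is summed, and `p(x1)` is covered by the given-prefix/BOS convention discussed in the replies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The cat sat on the mat", return_tensors="pt").input_ids  # x1 ... xT
with torch.no_grad():
    log_probs = torch.log_softmax(model(ids).logits, dim=-1)

# Position t predicts token t+1, so gather the log-prob assigned to each "next" token.
token_log_probs = log_probs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(-1)
# x1 has no prefix; under a BOS convention its probability is taken as 1 (log-prob 0).
print("log p(x2..xT | x1) =", token_log_probs.sum().item())
```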
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28859/comments
https://api.github.com/repos/huggingface/transformers/issues/28859/events
https://github.com/huggingface/transformers/issues/28859
2,117,749,812
I_kwDOCUB6oc5-Okw0
28,859
TypeError: unhashable type: 'dict' with BGE-M3 using AutoTokenizer
{ "login": "NirantK", "id": 3250749, "node_id": "MDQ6VXNlcjMyNTA3NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/3250749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NirantK", "html_url": "https://github.com/NirantK", "followers_url": "https://api.github.com/users/NirantK/followers", "following_url": "https://api.github.com/users/NirantK/following{/other_user}", "gists_url": "https://api.github.com/users/NirantK/gists{/gist_id}", "starred_url": "https://api.github.com/users/NirantK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NirantK/subscriptions", "organizations_url": "https://api.github.com/users/NirantK/orgs", "repos_url": "https://api.github.com/users/NirantK/repos", "events_url": "https://api.github.com/users/NirantK/events{/privacy}", "received_events_url": "https://api.github.com/users/NirantK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I cannot reproduce this on main:\r\n```python \r\n\r\nIn [3]: from transformers import AutoTokenizer\r\n\r\nIn [4]: model_id = \"BAAI/bge-m3\"\r\n ...: hf_tokenizer = AutoTokenizer.from_pretrained(model_id)\r\ntokenizer_config.json: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1.31k/1.31k [00:00<00:00, 2.65MB/s]\r\nsentencepiece.bpe.model: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 5.07M/5.07M [00:00<00:00, 7.40MB/s]\r\ntokenizer.json: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 17.1M/17.1M [00:00<00:00, 22.4MB/s]\r\nspecial_tokens_map.json: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 964/964 [00:00<00:00, 2.80MB/s]\r\n```\r\nworks as expected for me. Make sure to use the latest version of transformers", "ไฝ ็š„transformersๅบ“ๅคช่€ไบ†๏ผŒๆ›ดๆ–ฐไธ€ไธ‹ๅฐฑๆฒก่ฟ™ไธช้—ฎ้ข˜ไบ†๏ผŒๆˆ‘ไนŸๅ‡บ็Žฐไบ†ๅ’Œไฝ ไธ€ๆ ท็š„ๆ›ดๆ–ฐๅˆฐ4.37.2้—ฎ้ข˜ๆถˆๅคฑ", "Updating to the latest transformers worked. Thanks @ArthurZucker @song4875343 ", "Updating to `transformers==4.37.2` solved the issue for me.\r\n```bash\r\npip install -U transformers\r\n```" ]
1,707
1,708
1,707
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.8.0 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The notebook to reproduce it is here: https://colab.research.google.com/drive/1OAsds_zEjGEzkfuYlJfkSPTDw5MoVYsx#scrollTo=b1ecf0b6-db81-4da3-b47f-e31460ccfbf1 1. Install dependencies: optimum, onnx, onnx-runtime 2. Try to load the BGE-M3 using the AutoTokenizer ### Expected behavior 1. AutoTokenizer works with BGE-M3 too
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28859/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28859/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28858/comments
https://api.github.com/repos/huggingface/transformers/issues/28858/events
https://github.com/huggingface/transformers/pull/28858
2,117,525,860
PR_kwDOCUB6oc5l9_Gh
28,858
[`Doc`] update contribution guidelines
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28858). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
COLLABORATOR
null
# What does this PR do? Fixes a misunderstanding from #28767
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28858", "html_url": "https://github.com/huggingface/transformers/pull/28858", "diff_url": "https://github.com/huggingface/transformers/pull/28858.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28858.patch", "merged_at": 1707135561000 }
https://api.github.com/repos/huggingface/transformers/issues/28857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28857/comments
https://api.github.com/repos/huggingface/transformers/issues/28857/events
https://github.com/huggingface/transformers/issues/28857
2,117,499,996
I_kwDOCUB6oc5-Nnxc
28,857
add `push_to_hub( )` method when working with pipelines
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "FYI @Rocketknight1 " ]
1,707
1,707
null
CONTRIBUTOR
null
### Feature request it will be good to addition to the library to add a `push_to_hub( )` method when working with the pipeline method. ### Motivation I have just finished up a blogpost about creating custom architectures, but i couldn't help but notice that when working with ` AutoConfig` or `AutoModelForxxx` or `PreTrainedModel` have a `push_to_hub( )` methods alllowing them to easily be updated on the hub. unfortunately, this isn't the case for the pipeline method, I found a possible work around by cloning the repo then using `save_pretrained( )` then pushing the changes, but this not optimised and takes lots of time especially when working with really large language models check the documentation : https://huggingface.co/blog/not-lain/custom-architectures-with-huggingface#push-to-hub-%F0%9F%A4%97 <hr> ![image](https://github.com/huggingface/transformers/assets/70411813/11a3f62f-0a39-48b1-9109-0e7c3896f526) **VS** ![image](https://github.com/huggingface/transformers/assets/70411813/09562086-a0f5-4bb9-86c9-57b92ec637b1) ### Your contribution yes, I can help out with a PR
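A hedged sketch (model and repo ids are illustrative, not from the issue) of a lighter workaround than cloning the repository: the components wrapped by a pipeline already expose `push_to_hub()`, so they can be pushed individually without a full local copy.

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

# Both components inherit push_to_hub(); a pipeline-level method, as requested,
# would essentially roll these calls into one.
pipe.model.push_to_hub("my-username/my-pipeline-repo")
pipe.tokenizer.push_to_hub("my-username/my-pipeline-repo")
```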
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28857/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28856/comments
https://api.github.com/repos/huggingface/transformers/issues/28856/events
https://github.com/huggingface/transformers/issues/28856
2,117,071,793
I_kwDOCUB6oc5-L_Ox
28,856
When using AutoModelForCausalLM, THUDM/cogagent-vqa-hf and load_in_8bit I get this error : self and mat2 must have the same dtype, but got Half and Char
{ "login": "FurkanGozukara", "id": 19240467, "node_id": "MDQ6VXNlcjE5MjQwNDY3", "avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FurkanGozukara", "html_url": "https://github.com/FurkanGozukara", "followers_url": "https://api.github.com/users/FurkanGozukara/followers", "following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}", "gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}", "starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions", "organizations_url": "https://api.github.com/users/FurkanGozukara/orgs", "repos_url": "https://api.github.com/users/FurkanGozukara/repos", "events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}", "received_events_url": "https://api.github.com/users/FurkanGozukara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @FurkanGozukara \r\nThis issue is a duplicate of https://github.com/TimDettmers/bitsandbytes/issues/1029 - can you share the full traceback of the error so that I can fix the issue on the Hub? \r\nMy gut feeling is that the model is not compatible with bnb-8bit, the model code authors will need to make a slight change to make it work. \r\nYou can also post the same issue on the model repo: https://huggingface.co/THUDM/cogagent-vqa-hf/discussions with the full traceback of the issue", "Also does the issue happens with 4-bit as well?", "> Also does the issue happens with 4-bit as well?\r\n\r\nthe thing is 4-bit working perfectly fine\r\n\r\nYou may be right that it is not supporting 8-bit i mean the model\r\n\r\ntherefore I am testing `cogvlm-chat-hf` right now\r\n\r\non CMD there aren't any errors so i also have 0 other info", "cogvlm-chat-hf worked with 8 bit\r\n\r\nso it is probably related to model itself i messaged the developers thank you" ]
1,707
1,707
1,707
NONE
null
### System Info ``` Microsoft Windows [Version 10.0.19045.3996] (c) Microsoft Corporation. All rights reserved. G:\temp Local install\CogVLM\venv\Scripts>activate (venv) G:\temp Local install\CogVLM\venv\Scripts>pip freeze accelerate==0.26.1 aiofiles==23.2.1 aiohttp==3.9.3 aiosignal==1.3.1 altair==5.2.0 annotated-types==0.6.0 anyio==4.2.0 anykeystore==0.2 apex==0.9.10.dev0 async-timeout==4.0.3 attrs==23.2.0 bitsandbytes @ https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl blinker==1.7.0 blis==0.7.11 boto3==1.34.34 botocore==1.34.34 braceexpand==0.1.7 cachetools==5.3.2 catalogue==2.0.10 certifi==2022.12.7 charset-normalizer==2.1.1 click==8.1.7 cloudpathlib==0.16.0 colorama==0.4.6 confection==0.1.4 contourpy==1.2.0 cpm-kernels==1.0.11 cryptacular==1.6.2 cycler==0.12.1 cymem==2.0.8 datasets==2.16.1 deepspeed @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/deepspeed-0.11.2_cuda121-cp310-cp310-win_amd64.whl defusedxml==0.7.1 dill==0.3.7 einops==0.7.0 en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl exceptiongroup==1.2.0 fastapi==0.109.1 ffmpy==0.3.1 filelock==3.9.0 fonttools==4.47.2 frozenlist==1.4.1 fsspec==2023.10.0 gitdb==4.0.11 GitPython==3.1.41 gradio==4.16.0 gradio_client==0.8.1 greenlet==3.0.3 h11==0.14.0 hjson==3.1.0 httpcore==1.0.2 httpx==0.26.0 huggingface-hub==0.20.3 hupper==1.12.1 idna==3.4 importlib-metadata==7.0.1 importlib-resources==6.1.1 Jinja2==3.1.2 jmespath==1.0.1 jsonlines==4.0.0 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 kiwisolver==1.4.5 langcodes==3.3.0 loguru==0.7.2 markdown-it-py==3.0.0 MarkupSafe==2.1.3 matplotlib==3.8.2 mdurl==0.1.2 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 murmurhash==1.0.10 networkx==3.2.1 ninja==1.11.1.1 numpy==1.26.3 oauthlib==3.2.2 orjson==3.9.13 packaging==23.2 pandas==2.2.0 PasteDeploy==3.1.0 pbkdf2==1.3 pillow==10.2.0 plaster==1.1.2 plaster-pastedeploy==1.0.1 preshed==3.0.9 protobuf==4.25.2 psutil==5.9.8 py-cpuinfo==9.0.0 pyarrow==15.0.0 pyarrow-hotfix==0.6 pydantic==2.6.0 pydantic_core==2.16.1 pydeck==0.8.1b0 pydub==0.25.1 Pygments==2.17.2 pynvml==11.5.0 pyparsing==3.1.1 pyramid==2.0.2 pyramid-mailer==0.15.1 python-dateutil==2.8.2 python-multipart==0.0.7 python3-openid==3.2.0 pytz==2024.1 PyYAML==6.0.1 referencing==0.33.0 regex==2023.12.25 repoze.sendmail==4.4.1 requests==2.28.1 requests-oauthlib==1.3.1 rich==13.7.0 rpds-py==0.17.1 ruff==0.2.0 s3transfer==0.10.0 safetensors==0.4.2 scipy==1.12.0 seaborn==0.13.2 semantic-version==2.10.0 sentencepiece==0.1.99 shellingham==1.5.4 six==1.16.0 smart-open==6.4.0 smmap==5.0.1 sniffio==1.3.0 spacy==3.7.2 spacy-legacy==3.0.12 spacy-loggers==1.0.5 SQLAlchemy==2.0.25 srsly==2.4.8 starlette==0.35.1 streamlit==1.31.0 SwissArmyTransformer==0.4.11 sympy==1.12 tenacity==8.2.3 tensorboardX==2.6.2.2 thinc==8.2.2 timm==0.9.12 tokenizers==0.15.1 toml==0.10.2 tomlkit==0.12.0 toolz==0.12.1 torch==2.2.0+cu121 torchaudio==2.2.0+cu121 torchvision==0.17.0+cu121 tornado==6.4 tqdm==4.66.1 transaction==4.0 transformers==4.37.2 translationstring==1.4 triton @ https://huggingface.co/MonsterMMORPG/SECourses/resolve/main/triton-2.1.0-cp310-cp310-win_amd64.whl typer==0.9.0 typing_extensions==4.8.0 tzdata==2023.4 tzlocal==5.2 urllib3==1.26.13 uvicorn==0.27.0.post1 validators==0.22.0 velruse==1.1.1 venusian==3.1.0 wasabi==1.1.2 watchdog==3.0.0 weasel==0.3.4 webdataset==0.2.86 WebOb==1.8.7 websockets==11.0.3 win32-setctime==1.1.0 
WTForms==3.1.2 wtforms-recaptcha==0.3.2 xformers==0.0.24 xxhash==3.4.1 yarl==1.9.4 zipp==3.17.0 zope.deprecation==5.0 zope.interface==6.1 zope.sqlalchemy==3.1 (venv) G:\temp Local install\CogVLM\venv\Scripts> ``` ### Who can help? @ArthurZucker @amyeroberts @pacman100 @SunMarc @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction here the full code and pip freeze the error: `self and mat2 must have the same dtype, but got Half and Char` there are no visible errors on CMD window this error returns as response **Same code load in 4 bit working** ``` DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' MODEL_PATH = "THUDM/cogagent-vqa-hf" tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5') torch_type = torch.float16 model = AutoModelForCausalLM.from_pretrained( MODEL_PATH, low_cpu_mem_usage=True, load_in_8bit=True, trust_remote_code=True ).eval() ``` ``` def process_image(image, input_text, temperature, top_p, top_k, do_sample): with torch.no_grad(): input_by_model = model.build_conversation_input_ids(tokenizer, query=input_text, history=[], images=[image], template_version='base') inputs = { 'input_ids': input_by_model['input_ids'].unsqueeze(0).to(DEVICE), 'token_type_ids': input_by_model['token_type_ids'].unsqueeze(0).to(DEVICE), 'attention_mask': input_by_model['attention_mask'].unsqueeze(0).to(DEVICE), 'images': [[input_by_model['images'][0].to(DEVICE).to(torch_type)]], } if 'cross_images' in input_by_model and input_by_model['cross_images']: inputs['cross_images'] = [[input_by_model['cross_images'][0].to(DEVICE).to(torch_type)]] gen_kwargs = { "max_length": 2048, "temperature": temperature, "do_sample": do_sample, "top_p": top_p, "top_k": top_k } outputs = model.generate(**inputs, **gen_kwargs) outputs = outputs[:, inputs['input_ids'].shape[1]:] response = tokenizer.decode(outputs[0]) return response.split("</s>")[0] ``` ### Expected behavior no error
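A hedged sketch of the 4-bit load that the reporter says works, written with `BitsAndBytesConfig` instead of the bare `load_in_8bit` flag; per the replies above, the 8-bit path may simply not be supported by this model's custom code.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogagent-vqa-hf",
    low_cpu_mem_usage=True,
    quantization_config=quant_config,  # 4-bit reportedly works where 8-bit fails
    device_map="auto",
    trust_remote_code=True,
).eval()
```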
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28856/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28855/comments
https://api.github.com/repos/huggingface/transformers/issues/28855/events
https://github.com/huggingface/transformers/pull/28855
2,116,787,677
PR_kwDOCUB6oc5l7lL3
28,855
[Docs] Fix bad doc: replace save with logging
{ "login": "chenzizhao", "id": 31786519, "node_id": "MDQ6VXNlcjMxNzg2NTE5", "avatar_url": "https://avatars.githubusercontent.com/u/31786519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chenzizhao", "html_url": "https://github.com/chenzizhao", "followers_url": "https://api.github.com/users/chenzizhao/followers", "following_url": "https://api.github.com/users/chenzizhao/following{/other_user}", "gists_url": "https://api.github.com/users/chenzizhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/chenzizhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chenzizhao/subscriptions", "organizations_url": "https://api.github.com/users/chenzizhao/orgs", "repos_url": "https://api.github.com/users/chenzizhao/repos", "events_url": "https://api.github.com/users/chenzizhao/events{/privacy}", "received_events_url": "https://api.github.com/users/chenzizhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,707
1,707
1,707
CONTRIBUTOR
null
# Fix doc typos in TrainingArguments.set_logging ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28855", "html_url": "https://github.com/huggingface/transformers/pull/28855", "diff_url": "https://github.com/huggingface/transformers/pull/28855.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28855.patch", "merged_at": 1707100688000 }
https://api.github.com/repos/huggingface/transformers/issues/28854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28854/comments
https://api.github.com/repos/huggingface/transformers/issues/28854/events
https://github.com/huggingface/transformers/pull/28854
2,116,662,720
PR_kwDOCUB6oc5l7LNL
28,854
Honor trust_remote_code for custom tokenizers
{ "login": "rl337", "id": 387895, "node_id": "MDQ6VXNlcjM4Nzg5NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/387895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rl337", "html_url": "https://github.com/rl337", "followers_url": "https://api.github.com/users/rl337/followers", "following_url": "https://api.github.com/users/rl337/following{/other_user}", "gists_url": "https://api.github.com/users/rl337/gists{/gist_id}", "starred_url": "https://api.github.com/users/rl337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rl337/subscriptions", "organizations_url": "https://api.github.com/users/rl337/orgs", "repos_url": "https://api.github.com/users/rl337/repos", "events_url": "https://api.github.com/users/rl337/events{/privacy}", "received_events_url": "https://api.github.com/users/rl337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yep - I think I might have accidentally caused this with that PR! The original problem I was fixing was that the prompt was not displaying when `trust_remote_code=None`. Let me take a look at this fix.", "Hi @rl337, can you give me some code to reproduce the issue? I picked `Qwen/Qwen-VL` because it's a recent release with a custom tokenizer, but I didn't get any prompt when I ran `AutoTokenizer.from_pretrained(\"Qwen/Qwen-VL\", trust_remote_code=True)`.", "> Hi @rl337, can you give me some code to reproduce the issue? I picked `Qwen/Qwen-VL` because it's a recent release with a custom tokenizer, but I didn't get any prompt when I ran `AutoTokenizer.from_pretrained(\"Qwen/Qwen-VL\", trust_remote_code=True)`.\r\n\r\nSure. I started working on a test but i couldn't get the test suite to run properly so i pulled the test out into a standalone script. Rename verify_tokenizer_standalone.txt to a .py and it's only dependency is transformers so it should be quick to create a venv and try it out. \r\n\r\nI had to put a os.chdir() to the temp directory because i couldn't seem to get the subdir param to work either. Likely it is broken too. \r\n\r\nIf i drop the meat of this test into tests/models/auto/test_tokenization_auto.py, will that work as a test?\r\n\r\n[verify_tokenizer_standalone.txt](https://github.com/huggingface/transformers/files/14183375/verify_tokenizer_standalone.txt)\r\n", "@Rocketknight1 @ArthurZucker okay i took the body of that script i added and created a test in test_tokenization_auto.py. see latest commits. ", "okay so here's the deal. os.chdir causes failures in other tests so i'm going to hack this to look up the current directory before the chdir and restore the current directory after the test is run. \r\n\r\nI traced the subfolder= down to what i think is the root cause I'll open a new PR against it once i confirm that. Basically get_class_from_dynamic_module() takes the subfolder= via kwargs but doesn't pass it to the subsequent call to get_cached_module_file() partly because get_cached_module_file doesn't take either a subfolder or kwargs. eventually we call cached_file() which does take a subfolder= but it ends up being None. \r\n", "ok we've got a clean build ", "@ArthurZucker i don't have write access. who should do the actual merge?", "Can someone please merge this? i don't have write access. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28854). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "verified change in my workflow. ", "> verified change in my workflow.\r\n\r\n@rl337 Could you expand a bit on what you mean? A few minutes ago, there was a comment saying that you were still getting prompted. Do the tests here reflect the tests being run to verify on your workflow? ", "@amyeroberts i deleted that comment. I realized that i was working from an out of date checkout of my fork. sorry for the confusion. ", "Hi @rl337, I'm still a bit confused about this! Like I mentioned above, I tried loading models like `Qwen/Qwen-VL` and didn't see the prompt issue described here, even though they have custom code in their tokenizer. Before we merge this PR, I'd like to know what conditions actually trigger this bug - for example, does it only occur with local repos, rather than repos on the Hub, or is there some specific config value that triggers the issue in your code that isn't an issue for `Qwen`? 
", "@Rocketknight1 The code that's in [verify_tokenizer_standalone.txt](https://github.com/huggingface/transformers/files/14183375/verify_tokenizer_standalone.txt) as well as what i attached in the unit test both exercise the bug. I'm not sure what the key difference between what's in the test and what's exercised by Qwen/Qwen-VL. I can look into it when i get a chance. ", "@Rocketknight1 Okay. I have an answer for you. The difference between the configs from the test case and Qwen/Qwen-VL comes down to the contents of the `tokenizer_config.json`. In the `Owen/Owen-VL`, there's an explicitly defined `tokenizer_class` entry which circumvents the need to check to see if a tokenizer class is defined in the config.json via AutoConfig which is where my patch adds the trust_remote_code.\r\n\r\nI don't 100% understand this behavior of looking up a tokenizer_class given that to get to this code, we're already calling the tokenizer class's from_pretrained() which means that cls is the tokenizer class. Is this to allow the tokenizer_config.json to override the AutoMap's tokenizer definitions? but there you have it. that's why Owen/Owen-VL works but my test case does not. ", "@ArthurZucker @Rocketknight1 @amyeroberts is there anything else i need to verify / or do to get this merged?", "Hi @rl337, the delay is internal - we're trying to figure out if this is the right approach to the problem, since this is a fairly complex issue that touches some of the core `transformers` code. We want to fix it, but also avoid patches that will create further problems in future. You don't need to do anything else for now, but give us a little bit of time to investigate!", "Posting my understanding of the issue:\r\n\r\n- When you load a tokenizer, the tokenizer tries to figure out the tokenizer class of the repo\r\n- To do this, the tokenizer reads `tokenizer_config.json` and looks for a `tokenizer_class` key to determine the model class\r\n- If the key isn't present, the tokenizer tries to initialize a config from the repo with `AutoConfig`\r\n- Before this PR, `trust_remote_code` was not propagated correctly to the `AutoConfig` call. Therefore, an unwanted confirmation prompt is created if:\r\n - You load a custom code tokenizer with `AutoTokenizer.from_pretrained()`\r\n - You set `trust_remote_code=True`\r\n - `tokenizer_class` is not defined in `tokenizer_config.json`\r\n - The model config also requires custom code\r\n- The solution in this PR is to add a `trust_remote_code` argument to `tokenizer.from_pretrained()`. This argument does nothing in the function itself, but is passed to the `AutoConfig` call. \r\n- This PR also updates `AutoTokenizer.from_pretrained()` to pass its `trust_remote_code` value to `tokenizer.from_pretrained()`.\r\n\r\nAfter investigating, I think this is a good change, and doesn't introduce other security issues, or conflict with other areas of the library, so I'm willing to approve it.\r\n\r\n**However**, one thing worth noting is that the tokenizer doesn't actually use the tokenizer class string in any of the loading code! The only purpose of the code block that causes this issue is just to check the tokenizer class name against the repo tokenizer name and raise a warning for users loading a tokenizer with a different tokenizer class. Since most people load tokenizers with `AutoTokenizer` now, we could consider just removing that block instead, as I don't think we need that big code block to raise a mostly unused warning anymore.\r\n\r\ncc @ArthurZucker ", "@Rocketknight1 I see. 
Yeah your summarization is my understanding of the situation. \r\n\r\nI think that removing the path to lead to a warning and instead failing fast with an appropriate error message would be awesome.\r\n\r\nAnother thing that I was thinking was to add some kind of LoadingPolicy object which can be used to aggregate options for loading model classes, et al instead of relying on kwargs to propagate these options. It'll future proof the API because adding additional members to the policy object won't change all of the signatures all the way down the stack but still expose allow deep code to access stuff from the original calling function. One can also have a visible json which describes explicitly what the policy of the loader is which can then be used across different code. \r\n\r\nSo concretely something like this:\r\n\r\n```\r\nclass LoaderPolicy:\r\n trust_remote_code: bool\r\n local_files_only: bool\r\n cache_directory: string\r\n ...\r\n\r\n @staticmethod\r\n def from_json(cls, filename: str, policy_dir: str = '.')\r\n using open(os.path.join(policy_dir, filename),) as fp:\r\n policy_json = json.load(fp)\r\n # fill in members here\r\n\r\n policy = LoaderPolicy.from_json('loader_policy.json', policy_dir='some_dir/policies')\r\n\r\n AutoModel.from_pretrained(model_id, load_policy=policy)\r\n```", "Hi @rl337 - it's a cool idea, but I'd worry about users having to create a `Policy` object. Although that might simplify the internal infrastructure, it would complicate the UX for people who often just want to make one single call to `AutoModel` or `AutoTokenizer`.\r\n\r\nAnyway, for now I think we should just merge this PR, and consider removal of that entire code block in a future PR (@rl337 if you want to open an issue or PR for that after this, I'd support it, but no pressure - it's mostly for code cleanup rather than an essential feature!) You could also open an issue suggesting your `Policy` class if you want, but there might be some pushback!\r\n\r\nAnyways, since we have core maintainer approval already, @rl337 are you okay with me merging now?", "@Rocketknight1 Sure. go ahead and merge it. I'll see if i have time to write a PR with the policy idea. I don't think it'd have to make things more complicated for end users and it'd allow flexibility for people who want a more formal way of specifying how to load models depending on environment. \r\n\r\n I'll tag you when i get to it. ", "Got it, and thanks for the fix! Even if we just leave the code block as-is, it's still a really nice usability improvement for `transformers`.", "Down to remove the block that adds a warning as well", "@rl337 want to make a follow-up PR to remove the entire warning block, in that case?" ]
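A hedged sketch (paths and class name are illustrative) of the behaviour described in this thread: declaring `tokenizer_class` explicitly in a local repo's `tokenizer_config.json` lets the class be resolved without the `AutoConfig` fallback that triggered the extra `trust_remote_code` prompt.

```python
import json
from pathlib import Path

repo = Path("my_local_tokenizer_repo")               # illustrative local repo
cfg_path = repo / "tokenizer_config.json"

cfg = json.loads(cfg_path.read_text())
cfg["tokenizer_class"] = "PreTrainedTokenizerFast"   # illustrative class name
cfg_path.write_text(json.dumps(cfg, indent=2))
```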
1,706
1,708
1,708
CONTRIBUTOR
null
When trying to use AutoTokenizer.from_pretrained for a tokenizer that is not recognized in existing tokenizer maps, you're required respond to this trust_remote_code prompt even if you specify trust_remote_code=True. The cause of this is, in tokenization_auto.py, we kwargs.pop("trust_remote_code"...) but then don't explicitly pass it when we call the _from_pretrained(). There are two obvious fixes. Either kwargs.get() instead of kwargs.pop() or explicitly pass it along to _from_pretrained(). This PR does the latter because we don't necessarily want to keep the trust_remote_code in the kwargs when we pass it down into other functions. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
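A toy illustration (not the actual transformers source) of the bug pattern this PR describes: once the flag is popped from `kwargs`, nothing forwards it to the inner call, so the downstream `AutoConfig` lookup never sees `trust_remote_code`.

```python
def inner(**kwargs):
    # Stands in for the AutoConfig call made further down the stack.
    print("inner sees trust_remote_code =", kwargs.get("trust_remote_code"))

def outer_with_pop(**kwargs):
    trust_remote_code = kwargs.pop("trust_remote_code", None)  # consumed here ...
    inner(**kwargs)                                            # ... so it is lost downstream

def outer_with_forward(**kwargs):
    trust_remote_code = kwargs.pop("trust_remote_code", None)
    inner(trust_remote_code=trust_remote_code, **kwargs)       # the fix this PR chooses

outer_with_pop(trust_remote_code=True)      # prints: None
outer_with_forward(trust_remote_code=True)  # prints: True
```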
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28854", "html_url": "https://github.com/huggingface/transformers/pull/28854", "diff_url": "https://github.com/huggingface/transformers/pull/28854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28854.patch", "merged_at": 1708090823000 }
https://api.github.com/repos/huggingface/transformers/issues/28853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28853/comments
https://api.github.com/repos/huggingface/transformers/issues/28853/events
https://github.com/huggingface/transformers/issues/28853
2,116,646,239
I_kwDOCUB6oc5-KXVf
28,853
after batch_encode_plus generate returns sequence of EOS tokens
{ "login": "tempdeltavalue", "id": 36921178, "node_id": "MDQ6VXNlcjM2OTIxMTc4", "avatar_url": "https://avatars.githubusercontent.com/u/36921178?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tempdeltavalue", "html_url": "https://github.com/tempdeltavalue", "followers_url": "https://api.github.com/users/tempdeltavalue/followers", "following_url": "https://api.github.com/users/tempdeltavalue/following{/other_user}", "gists_url": "https://api.github.com/users/tempdeltavalue/gists{/gist_id}", "starred_url": "https://api.github.com/users/tempdeltavalue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tempdeltavalue/subscriptions", "organizations_url": "https://api.github.com/users/tempdeltavalue/orgs", "repos_url": "https://api.github.com/users/tempdeltavalue/repos", "events_url": "https://api.github.com/users/tempdeltavalue/events{/privacy}", "received_events_url": "https://api.github.com/users/tempdeltavalue/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Most probably a padding side issue. If you skip the special tokens you should no longer see them when decoding. \r\nIt is planned to get rid of this function anyway to keep only encode and decode for simplicity! \r\nI recommend to use encode" ]
1,706
1,707
null
NONE
null
### System Info Here's the screenshot which is show this issue ![Screenshot 2024-02-03 210015](https://github.com/huggingface/transformers/assets/36921178/2492512e-c555-4870-be15-f38fea07e516) HF - discussion: https://discuss.huggingface.co/t/gpt2-returns-sequence-of-endoftext-after-finetuning/70418 HF topic - discord: https://discord.com/channels/879548962464493619/1201159949288488970 thank you.. ### Who can help? Would be great to get some advice about it @gante @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction To reproduce please check this notebook and repo with the code https://github.com/tempdeltavalue/temp_l/blob/main/finetune_seq2seq.ipynb ### Expected behavior I want to encode list of strings to batch and pass it to generate and get output without EOS sequence
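A hedged sketch (GPT-2 stands in for the fine-tuned model in the notebook) of the fix suggested in the reply above: left-pad the batch for decoder-only generation and skip special tokens when decoding, so the padding EOS tokens no longer appear in the output.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

batch = tok(["first prompt", "a second, longer prompt"], return_tensors="pt", padding=True)
out = model.generate(**batch, max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.batch_decode(out, skip_special_tokens=True))
```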
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28853/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28852/comments
https://api.github.com/repos/huggingface/transformers/issues/28852/events
https://github.com/huggingface/transformers/pull/28852
2,116,511,489
PR_kwDOCUB6oc5l6rzE
28,852
[Docs] Add missing language options and fix broken links
{ "login": "khipp", "id": 9824526, "node_id": "MDQ6VXNlcjk4MjQ1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9824526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khipp", "html_url": "https://github.com/khipp", "followers_url": "https://api.github.com/users/khipp/followers", "following_url": "https://api.github.com/users/khipp/following{/other_user}", "gists_url": "https://api.github.com/users/khipp/gists{/gist_id}", "starred_url": "https://api.github.com/users/khipp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khipp/subscriptions", "organizations_url": "https://api.github.com/users/khipp/orgs", "repos_url": "https://api.github.com/users/khipp/repos", "events_url": "https://api.github.com/users/khipp/events{/privacy}", "received_events_url": "https://api.github.com/users/khipp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28852). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Rebased to resolve merge conflicts with the main branch." ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? This PR adds missing entries to the language selection menu, as well as links to the Colab and AWS Studio notebooks for the ONNX examples. It also fixes various hyperlinks that were broken due to spaces within the URL or spaces between the link text and the URL, and updates the links to OpenAI research articles.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28852", "html_url": "https://github.com/huggingface/transformers/pull/28852", "diff_url": "https://github.com/huggingface/transformers/pull/28852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28852.patch", "merged_at": 1707249662000 }
https://api.github.com/repos/huggingface/transformers/issues/28851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28851/comments
https://api.github.com/repos/huggingface/transformers/issues/28851/events
https://github.com/huggingface/transformers/issues/28851
2,116,466,689
I_kwDOCUB6oc5-JrgB
28,851
Getting the error: "ValueError: The following model_kwargs are not used by the model:....."
{ "login": "Pranav110500", "id": 104620839, "node_id": "U_kgDOBjxjJw", "avatar_url": "https://avatars.githubusercontent.com/u/104620839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pranav110500", "html_url": "https://github.com/Pranav110500", "followers_url": "https://api.github.com/users/Pranav110500/followers", "following_url": "https://api.github.com/users/Pranav110500/following{/other_user}", "gists_url": "https://api.github.com/users/Pranav110500/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pranav110500/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pranav110500/subscriptions", "organizations_url": "https://api.github.com/users/Pranav110500/orgs", "repos_url": "https://api.github.com/users/Pranav110500/repos", "events_url": "https://api.github.com/users/Pranav110500/events{/privacy}", "received_events_url": "https://api.github.com/users/Pranav110500/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@ArthurZucker @younesbelkada Can either of you provide any insights on this? If additional info is needed to explain the issue further do let me know. Will be really glad for any assistance from you.", "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nIn the mean time, you absolutely cannot use any new model with the old transformers, it's simply because you are missing 1m lines of code. \r\nThe issue that you have should be easily debugged: generate does not accept kwargs that are not used by the model. The model's signature should either add this kwargs, or remove them as a whole. \r\nYou should place breakpoints and debug step by step. ", "@ArthurZucker thanks for responding. Sure I will open my question on the forum to get additional help thanks for suggesting that. \r\nI just have another small query regarding what you've said and the error. If generate doesnt accept the kwargs in the given code in run_generation.py, why is it that it is working on the older transformers version without this error? Like in a sense would it mean that the model does accept the args and only the older version accepts this? OR is it that in the older version case it just overrides the error and are these args are still not used in the case?", ">is it that in the older version case it just overrides the error and are these args are still not used in the case?\r\n\r\nthat is the correct answer. We decided to add an error to make sure nothing get's silently ignored! " ]
1,706
1,707
null
NONE
null
Hello @ArthurZucker, I found you to be active in the issues section hence I have tagged you. I needed some help with this error I am facing. So I am basically re-implementing a particular research paper's code, available on their github page with the link:- https://github.com/XiangLi1999/ContrastiveDecoding When I run the commands with the proper environment with all necessary libraries installed, I get the following error:- **valueError: The following `model_kwargs` are not used by the model: ['min_prob', 'student_lm', 'teacher_student', 'model_kwargs_student', 'st_coef', 'tokenizer', 'student_min_prob', 'student_temperature', 'use_cap_student', 'use_switch'] (note: typos in the generate arguments will also show up in this list)** I found somewhere online that if I downgraded my transformers version from the current latest one to 4.21.0 it would work and it did, but in that case I am unable to then implement the code using 2 new transformer models like Mistral or LLaMa2. I wanted to know how to work around this issue and resolve it. My main query is how can the code be altered in the run_generation.py (from the repo) file so that it works for the latest version of transformers. A secondary query is regarding how to use newer models like Mistral with the older transformers version (for the older version when I run the code's command with mistral replacing gpt2xl which is in their code I get the error: Keyerror: "Mistral"). I need some help urgently for this, and would appreciate any input regarding this matter from your side. At the same time if there's anyone else whom I can coordinate with for this, please do let me know of thme. I'll be grateful from any assistance from your end. Thanks.
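A minimal sketch (GPT-2 and the kwarg are only illustrative) of the validation behaviour the maintainer describes in the replies above: recent versions of `generate()` raise a `ValueError` for `model_kwargs` the model does not accept, where older releases silently ignored them, so the extra arguments from the ContrastiveDecoding script must either be consumed by the custom model/generation code or dropped.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("Hello", return_tensors="pt")

try:
    # 'st_coef' is not accepted by GPT-2's forward(), so recent versions raise
    # instead of silently ignoring it.
    model.generate(**inputs, max_new_tokens=5, st_coef=1.0)
except ValueError as err:
    print(err)

# Without the unused kwarg the call goes through.
print(tok.decode(model.generate(**inputs, max_new_tokens=5)[0]))
```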
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28851/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28850/comments
https://api.github.com/repos/huggingface/transformers/issues/28850/events
https://github.com/huggingface/transformers/issues/28850
2,116,277,709
I_kwDOCUB6oc5-I9XN
28,850
'LayoutLMv2Processor' object has no attribute 'image_processor'
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I believe the [preprocessor_config.json](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/preprocessor_config.json) needs to be edited. Unfortunately I can't get detectron2 installed on my system right now, but I think this should do it:\r\n\r\n```\r\n{\r\n \"apply_ocr\": true,\r\n \"do_resize\": true,\r\n \"image_processor_type\": \"LayoutLMv2ImageProcessor\",\r\n \"resample\": 2,\r\n \"size\": 224\r\n}\r\n```\r\n\r\n@andysingal Maybe you can give it a try on your system. The file can be found in the model subdirectory of `~/.cache/huggingface/hub/`. See whether it works if you adjust it on your system. If it works you can click \"contribute\" on the [model page preprocessor_config.json](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/preprocessor_config.json) and submit a fix.", "Hi @andysingal, thanks for raising an issue! \r\n\r\nThe problem is coming from the commit being used to install transformers. It's v4.16 which is before image processors were added. I suggest upgrading to the most recent version of transformers. ", "Had the same issue today. \r\n\r\nUsed paperspace notebook with preinstalled Transformer + NLP package. I guess there was an issue with the version so just using notebook with preinstall Pytorch package worked for me. Hope this can help." ]
1,706
1,707
null
NONE
null
### System Info Colab Notebook ### Who can help? @NielsRogge @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code: https://huggingface.co/docs/transformers/v4.34.1/en/tasks/document_question_answering#preprocessing-document-images ``` !sudo apt update !sudo apt install tesseract-ocr !sudo apt install libtesseract-dev ! pip install Pillow ! pip install pytesseract # !pip install git+https://github.com/huggingface/transformers.git@2ef774211733f0acf8d3415f9284c49ef219e991 # !pip install tensorflow # !pip install h5py # !pip install transformers !pip install git+https://github.com/huggingface/transformers.git@2ef774211733f0acf8d3415f9284c49ef219e991 datasets !pip install 'git+https://github.com/facebookresearch/detectron2.git' from transformers import AutoProcessor model_checkpoint="microsoft/layoutlmv2-base-uncased" batch_size=4 processor=AutoProcessor.from_pretrained(model_checkpoint) image_processor=processor.image_processor def get_ocr_words_and_boxes(examples): images=[image.convert("RGB") for image in examples["image"]] encoded_inputs=image_processor(images) examples["image"]=encoded_inputs.pixel_values examples["words"]=encoded_inputs.words examples["boxes"]=encoded_inputs.boxes return examples ERROR: AttributeError Traceback (most recent call last) [<ipython-input-14-914d2324f9c7>](https://localhost:8080/#) in <cell line: 1>() ----> 1 image_processor=processor.image_processor 2 3 4 def get_ocr_words_and_boxes(examples): 5 images=[image.convert("RGB") for image in examples["image"]] AttributeError: 'LayoutLMv2Processor' object has no attribute 'image_processor' ``` ### Expected behavior running the model
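A small sketch of what the fix looks like in practice, assuming an upgraded transformers install as recommended in the comments above (`pytesseract` is still required when `apply_ocr` is enabled; the version-tolerant fallback is an assumption):

```python
import transformers
from transformers import AutoProcessor

# Assumes a recent transformers release (image processors were added well
# after the v4.16 commit pinned in the notebook).
print(transformers.__version__)

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased")

# On recent versions `image_processor` exists; on old versions only the
# deprecated `feature_extractor` attribute is available.
image_processor = getattr(processor, "image_processor", None) or processor.feature_extractor
print(type(image_processor).__name__)
```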
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28850/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28849/comments
https://api.github.com/repos/huggingface/transformers/issues/28849/events
https://github.com/huggingface/transformers/issues/28849
2,115,972,878
I_kwDOCUB6oc5-Hy8O
28,849
TextStreamer: An option to print / put every single token instead of whole words
{ "login": "vicboyv", "id": 12023068, "node_id": "MDQ6VXNlcjEyMDIzMDY4", "avatar_url": "https://avatars.githubusercontent.com/u/12023068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vicboyv", "html_url": "https://github.com/vicboyv", "followers_url": "https://api.github.com/users/vicboyv/followers", "following_url": "https://api.github.com/users/vicboyv/following{/other_user}", "gists_url": "https://api.github.com/users/vicboyv/gists{/gist_id}", "starred_url": "https://api.github.com/users/vicboyv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vicboyv/subscriptions", "organizations_url": "https://api.github.com/users/vicboyv/orgs", "repos_url": "https://api.github.com/users/vicboyv/repos", "events_url": "https://api.github.com/users/vicboyv/events{/privacy}", "received_events_url": "https://api.github.com/users/vicboyv/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @gante ", "@vicboyv it's in our plans, but I can't give you a timeframe :) " ]
1,706
1,707
null
NONE
null
### Feature request https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/generation/streamers.py#L108-L114 The current implementation forces us to visualize decoding on a per-word basis. Add an option to print the string as soon as it's generated. ### Motivation I use TextStreamer to visualize several things in my terminal: 1. How words are split 2. Actual token-per-second performance 3. How it generates very long words like URLs ### Your contribution I'm a Python noob
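A rough sketch of how this could be done today by subclassing `TextStreamer` (an assumption, not an official API): decode and emit each token id individually instead of buffering until a word boundary. Multi-byte characters may render incorrectly because tokens are decoded one at a time.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer


class PerTokenStreamer(TextStreamer):
    """Emit every token as soon as it arrives instead of waiting for a word
    boundary. Multi-byte characters may print incorrectly because each token
    is decoded on its own."""

    def put(self, value):
        if len(value.shape) > 1:
            value = value[0]
        if self.skip_prompt and self.next_tokens_are_prompt:
            self.next_tokens_are_prompt = False
            return
        for token_id in value.tolist():
            text = self.tokenizer.decode([token_id], **self.decode_kwargs)
            self.on_finalized_text(text)

    def end(self):
        self.next_tokens_are_prompt = True
        self.on_finalized_text("", stream_end=True)


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown", return_tensors="pt")
streamer = PerTokenStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=10, streamer=streamer)
```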
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28849/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28848/comments
https://api.github.com/repos/huggingface/transformers/issues/28848/events
https://github.com/huggingface/transformers/pull/28848
2,115,904,800
PR_kwDOCUB6oc5l4mu-
28,848
Clean up staging tmp checkpoint directory
{ "login": "woshiyyya", "id": 26745457, "node_id": "MDQ6VXNlcjI2NzQ1NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/26745457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woshiyyya", "html_url": "https://github.com/woshiyyya", "followers_url": "https://api.github.com/users/woshiyyya/followers", "following_url": "https://api.github.com/users/woshiyyya/following{/other_user}", "gists_url": "https://api.github.com/users/woshiyyya/gists{/gist_id}", "starred_url": "https://api.github.com/users/woshiyyya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woshiyyya/subscriptions", "organizations_url": "https://api.github.com/users/woshiyyya/orgs", "repos_url": "https://api.github.com/users/woshiyyya/repos", "events_url": "https://api.github.com/users/woshiyyya/events{/privacy}", "received_events_url": "https://api.github.com/users/woshiyyya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @pacman100 @muellerzr ", "cc @amyeroberts ", "@pacman100 Is pinging me for review here a signal that you approve of these changes and it's ready for final review? ", "Any updates on this?", "@pacman100 Requested review from you to confirm this fix is in line with expected trainer behaviour. Once you've approved I can do the final maintainer review ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28848). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? PR #28364 changed the checkpoint folder renaming logic so that only the local_rank_0 workers or the global_rank_0 worker rename the checkpoint dir from `tmp-checkpoint-*` to `checkpoint-*`; it is a no-op on other ranks. When we have multiple nodes, and if you save on global rank 0 only, the tmp checkpoint dir on the other nodes might not get cleaned up. This PR checks for and deletes those tmp directories (a minimal sketch of the idea follows this description). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
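A minimal sketch of the cleanup idea described above (illustrative only, not the actual PR diff; the helper name is an assumption):

```python
import os
import shutil


def remove_stale_tmp_checkpoints(output_dir, prefix="tmp-checkpoint-"):
    """Delete leftover staging directories on nodes whose workers did not
    perform the rename (illustrative helper, not the actual PR diff)."""
    if not os.path.isdir(output_dir):
        return
    for name in os.listdir(output_dir):
        path = os.path.join(output_dir, name)
        if name.startswith(prefix) and os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
```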
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28848", "html_url": "https://github.com/huggingface/transformers/pull/28848", "diff_url": "https://github.com/huggingface/transformers/pull/28848.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28848.patch", "merged_at": 1707752841000 }
https://api.github.com/repos/huggingface/transformers/issues/28847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28847/comments
https://api.github.com/repos/huggingface/transformers/issues/28847/events
https://github.com/huggingface/transformers/pull/28847
2,115,744,741
PR_kwDOCUB6oc5l4DPG
28,847
Fast image processor
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28847). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
null
COLLABORATOR
null
# What does this PR do? Initial benchmark comparing the two image processors: ![benchmark_fast_image_processor](https://github.com/huggingface/transformers/assets/22614925/0049db43-8094-4907-ba8f-ccdb4178c1f9) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28847/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28847/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28847", "html_url": "https://github.com/huggingface/transformers/pull/28847", "diff_url": "https://github.com/huggingface/transformers/pull/28847.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28847.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28846/comments
https://api.github.com/repos/huggingface/transformers/issues/28846/events
https://github.com/huggingface/transformers/issues/28846
2,115,724,286
I_kwDOCUB6oc5-G2P-
28,846
Donut model only works on 4.36.2 for inference, not 4.37.2
{ "login": "VikParuchuri", "id": 913340, "node_id": "MDQ6VXNlcjkxMzM0MA==", "avatar_url": "https://avatars.githubusercontent.com/u/913340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VikParuchuri", "html_url": "https://github.com/VikParuchuri", "followers_url": "https://api.github.com/users/VikParuchuri/followers", "following_url": "https://api.github.com/users/VikParuchuri/following{/other_user}", "gists_url": "https://api.github.com/users/VikParuchuri/gists{/gist_id}", "starred_url": "https://api.github.com/users/VikParuchuri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VikParuchuri/subscriptions", "organizations_url": "https://api.github.com/users/VikParuchuri/orgs", "repos_url": "https://api.github.com/users/VikParuchuri/repos", "events_url": "https://api.github.com/users/VikParuchuri/events{/privacy}", "received_events_url": "https://api.github.com/users/VikParuchuri/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "fyi @NielsRogge " ]
1,706
1,707
null
NONE
null
### System Info I have a donut model with some slight customizations to the mbart decoder (added gqa and moe). It works fine on 4.36.2 and 4.37.2 for training. But inference only works on 4.36.2. When I run inference on 4.37.2, then the output degenerates into repetition. @amyeroberts Here is an example (the text has been ocred with donut, then rendered back onto a page image): This is with 4.36.2: ![image](https://github.com/huggingface/transformers/assets/913340/0382c56c-66eb-4989-a35d-92c1c6dbaba9) And this is with 4.37.2: ![image](https://github.com/huggingface/transformers/assets/913340/01d710c3-d19e-4f00-a6bf-1cb1e530bd15) Everything else is identical (same system, same packages). I don't see anything obvious in the release notes that would cause this. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction - Clone https://github.com/VikParuchuri/surya/tree/rec (has to be the rec branch) - Install (see README) - Run `python benchmark/recognition.py --max 1 --debug` You should see different output with different transformers versions. ### Expected behavior I expect the output to be the same with both versions, and to not degenerate into repetition.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28846/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28845/comments
https://api.github.com/repos/huggingface/transformers/issues/28845/events
https://github.com/huggingface/transformers/pull/28845
2,115,516,456
PR_kwDOCUB6oc5l3P4r
28,845
Bump dash from 2.3.0 to 2.15.0 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" }, { "id": 6410654816, "node_id": "LA_kwDOCUB6oc8AAAABfhrUYA", "url": "https://api.github.com/repos/huggingface/transformers/labels/python", "name": "python", "color": "2b67c6", "default": false, "description": "Pull requests that update Python code" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28845). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
1,707
CONTRIBUTOR
null
Bumps [dash](https://github.com/plotly/dash) from 2.3.0 to 2.15.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/plotly/dash/releases">dash's releases</a>.</em></p> <blockquote> <h2>Dash v2.15.0</h2> <h2>Added</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2695">#2695</a> Adds <code>triggered_id</code> to <code>dash_clientside.callback_context</code>. Fixes <a href="https://redirect.github.com/plotly/dash/issues/2692">#2692</a></li> <li><a href="https://redirect.github.com/plotly/dash/pull/2723">#2723</a> Improve dcc Slider/RangeSlider tooltips. Fixes <a href="https://redirect.github.com/plotly/dash/issues/1846">#1846</a> <ul> <li>Add <code>tooltip.template</code> a string for the format template, {value} will be formatted with the actual value.</li> <li>Add <code>tooltip.style</code> a style object to give to the div of the tooltip.</li> <li>Add <code>tooltip.transform</code> a reference to a function in the <code>window.dccFunctions</code> namespace.</li> </ul> </li> <li><a href="https://redirect.github.com/plotly/dash/pull/2732">#2732</a> Add special key <code>_dash_error</code> to <code>setProps</code>, allowing component developers to send error without throwing in render. Usage <code>props.setProps({_dash_error: new Error(&quot;custom error&quot;)})</code></li> </ul> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2732">#2732</a> Sanitize html props that are vulnerable to xss vulnerability if user data is inserted. Fix Validate url to prevent XSS attacks <a href="https://redirect.github.com/plotly/dash/issues/2729">#2729</a></li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2652">#2652</a> dcc.Clipboard supports htm_content and triggers a copy to clipboard when n_clicks are changed</li> <li><a href="https://redirect.github.com/plotly/dash/pull/2721">#2721</a> Remove ansi2html, fixes <a href="https://redirect.github.com/plotly/dash/issues/2713">#2613</a></li> </ul> <h2>Dash v2.14.2</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2700">#2700</a> Fix <code>_allow_dynamic_callbacks</code> for newly-added components.</li> </ul> <h2>Dash v2.14.1</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2672">#2672</a> Fix <code>get_caller_name</code> in case the source is not available.</li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2674">#2674</a> Raise flask &amp; werkzeug limits to &lt;3.1</li> </ul> <h2>Dash v2.14.0</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2634">#2634</a> Fix deprecation warning on pkg_resources, fix <a href="https://redirect.github.com/plotly/dash/issues/2631">#2631</a></li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2635">#2635</a> Get proper app module name, remove need to give <code>__name__</code> to Dash constructor.</li> </ul> <h2>Added</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2647">#2647</a> <code>routing_callback_inputs</code> allowing to pass more Input and/or State arguments to the pages routing callback</li> <li><a href="https://redirect.github.com/plotly/dash/pull/2649">#2649</a> Add <code>_allow_dynamic_callbacks</code>, register new callbacks inside other callbacks. <strong>WARNING: dynamic callback creation can be dangerous, use at you own risk. 
It is not intended for use in a production app, multi-user or multiprocess use as it only works for a single user.</strong></li> </ul> <h2>Dash v2.13.0</h2> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2610">#2610</a> Load plotly.js bundle/version from plotly.py</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/plotly/dash/blob/dev/CHANGELOG.md">dash's changelog</a>.</em></p> <blockquote> <h2>[2.15.0] - 2024-01-31</h2> <h2>Added</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2695">#2695</a> Adds <code>triggered_id</code> to <code>dash_clientside.callback_context</code>. Fixes <a href="https://redirect.github.com/plotly/dash/issues/2692">#2692</a></li> <li><a href="https://redirect.github.com/plotly/dash/pull/2723">#2723</a> Improve dcc Slider/RangeSlider tooltips. Fixes <a href="https://redirect.github.com/plotly/dash/issues/1846">#1846</a> <ul> <li>Add <code>tooltip.template</code> a string for the format template, {value} will be formatted with the actual value.</li> <li>Add <code>tooltip.style</code> a style object to give to the div of the tooltip.</li> <li>Add <code>tooltip.transform</code> a reference to a function in the <code>window.dccFunctions</code> namespace.</li> </ul> </li> <li><a href="https://redirect.github.com/plotly/dash/pull/2732">#2732</a> Add special key <code>_dash_error</code> to <code>setProps</code>, allowing component developers to send error without throwing in render. Usage <code>props.setProps({_dash_error: new Error(&quot;custom error&quot;)})</code></li> </ul> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2732">#2732</a> Sanitize html props that are vulnerable to xss vulnerability if user data is inserted. 
Fix Validate url to prevent XSS attacks <a href="https://redirect.github.com/plotly/dash/issues/2729">#2729</a></li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2652">#2652</a> dcc.Clipboard supports htm_content and triggers a copy to clipboard when n_clicks are changed</li> <li><a href="https://redirect.github.com/plotly/dash/pull/2721">#2721</a> Remove ansi2html, fixes <a href="https://redirect.github.com/plotly/dash/issues/2713">#2613</a></li> </ul> <h2>[2.14.2] - 2023-11-27</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2700">#2700</a> Fix <code>_allow_dynamic_callbacks</code> for newly-added components.</li> </ul> <h2>[2.14.1] - 2023-10-26</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2672">#2672</a> Fix <code>get_caller_name</code> in case the source is not available.</li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2674">#2674</a> Raise flask &amp; werkzeug limits to &lt;3.1</li> </ul> <h2>[2.14.0] - 2023-10-11</h2> <h2>Fixed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2634">#2634</a> Fix deprecation warning on pkg_resources, fix <a href="https://redirect.github.com/plotly/dash/issues/2631">#2631</a></li> </ul> <h2>Changed</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2635">#2635</a> Get proper app module name, remove need to give <code>__name__</code> to Dash constructor.</li> </ul> <h2>Added</h2> <ul> <li><a href="https://redirect.github.com/plotly/dash/pull/2647">#2647</a> <code>routing_callback_inputs</code> allowing to pass more Input and/or State arguments to the pages routing callback</li> <li><a href="https://redirect.github.com/plotly/dash/pull/2649">#2649</a> Add <code>_allow_dynamic_callbacks</code>, register new callbacks inside other callbacks. <strong>WARNING: dynamic callback creation can be dangerous, use at you own risk. It is not intended for use in a production app, multi-user or multiprocess use as it only works for a single user.</strong></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/plotly/dash/commit/115aa4ea1718ca107bf2bd944ff1b4600c80335d"><code>115aa4e</code></a> Merge pull request <a href="https://redirect.github.com/plotly/dash/issues/2739">#2739</a> from plotly/master-2.15.0</li> <li><a href="https://github.com/plotly/dash/commit/9243f93fef1758ec47c6a3d1699fb6caae4da0f9"><code>9243f93</code></a> build</li> <li><a href="https://github.com/plotly/dash/commit/83c54226425e89f2ad3e75afaf6ed0fee9080e90"><code>83c5422</code></a> Version 2.15.0 build artifacts</li> <li><a href="https://github.com/plotly/dash/commit/78d07c42a2a03415ac53cb709c9da4944d5cedad"><code>78d07c4</code></a> Merge branch 'dev' into master-2.15.0</li> <li><a href="https://github.com/plotly/dash/commit/6a8da527fd679d52cf8d5286116134504696131a"><code>6a8da52</code></a> Merge pull request <a href="https://redirect.github.com/plotly/dash/issues/2737">#2737</a> from plotly/version-2.15.0</li> <li><a href="https://github.com/plotly/dash/commit/7cb6f07cbce130226e116dda9c6d83966dac972d"><code>7cb6f07</code></a> build</li> <li><a href="https://github.com/plotly/dash/commit/da4261e3328f4b20310cfd10316475b547d60536"><code>da4261e</code></a> Fix changelog.</li> <li><a href="https://github.com/plotly/dash/commit/27751a8428e89eb8b0933f13441fb80cf768d1b2"><code>27751a8</code></a> Version 2.15.0</li> <li><a href="https://github.com/plotly/dash/commit/49ac14fe23df82d8dc141281999cf361007d3373"><code>49ac14f</code></a> Merge pull request <a href="https://redirect.github.com/plotly/dash/issues/2723">#2723</a> from plotly/slider-tips</li> <li><a href="https://github.com/plotly/dash/commit/06fb03a6fc4a1f8a41517cf25d1421474a4e53d5"><code>06fb03a</code></a> docstring typos</li> <li>Additional commits viewable in <a href="https://github.com/plotly/dash/compare/v2.3.0...v2.15.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=dash&package-manager=pip&previous-version=2.3.0&new-version=2.15.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28845", "html_url": "https://github.com/huggingface/transformers/pull/28845", "diff_url": "https://github.com/huggingface/transformers/pull/28845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28845.patch", "merged_at": 1707099150000 }
https://api.github.com/repos/huggingface/transformers/issues/28844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28844/comments
https://api.github.com/repos/huggingface/transformers/issues/28844/events
https://github.com/huggingface/transformers/pull/28844
2,115,505,747
PR_kwDOCUB6oc5l3NgX
28,844
[Docs] Spanish translation of task_summary.md
{ "login": "aaronjimv", "id": 67152883, "node_id": "MDQ6VXNlcjY3MTUyODgz", "avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaronjimv", "html_url": "https://github.com/aaronjimv", "followers_url": "https://api.github.com/users/aaronjimv/followers", "following_url": "https://api.github.com/users/aaronjimv/following{/other_user}", "gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions", "organizations_url": "https://api.github.com/users/aaronjimv/orgs", "repos_url": "https://api.github.com/users/aaronjimv/repos", "events_url": "https://api.github.com/users/aaronjimv/events{/privacy}", "received_events_url": "https://api.github.com/users/aaronjimv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello. This is a long doc page, so I am open to any feedback. Thanks.", "Thanks for the PR. Let's try not to ping as many people for this please! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> The structure/format of the docs LGTM! I can't review the content itself since I'm not a native Spanish speaker. Do you know anyone in the ML community who would be interested in contributing and reviewing your work?\r\n\r\nHi @stevhliu, thanks!\r\n@osanseviero has reviewed my work before.", "For sure, but itโ€™ll probably be faster if there are other community members who can help review your translation since \r\n@osanseviero is pretty busy himself!\r\n\r\nLet me see if I can organize something with @mariagrandury and @mrm8488, founders of [SomosNLP](https://somosnlp.org/) (an awesome NLP community for Spanish speakers) to help review these translations ๐Ÿ™‚ ", "> For sure, but itโ€™ll probably be faster if there are other community members who can help review your translation since @osanseviero is pretty busy himself!\r\n> \r\n> Let me see if I can organize something with @mariagrandury and @mrm8488, founders of [SomosNLP](https://somosnlp.org/) (an awesome NLP community for Spanish speakers) to help review these translations ๐Ÿ™‚\r\n\r\nHi @stevhliu thanks for your support, I really appreciate it. \r\nI am attentive to any feedback by SomosNLP ๐Ÿค—. ", "Hi @stevhliu.\r\nI would like to ask, is there any update on this PR? \r\nI appreciate your help, thanks.", "cc @gisturiz, would you be interested in helping review this translation? ๐Ÿ™‚ ", "Hi! I saw a message from @osanseviero in the SomosNLP community asking for help with the review of a translation. \r\n\r\nI would be happy to review it!", "Hi @tadeodonegana, thanks you! I am open to any feedback.", "Thanks for the help @tadeodonegana ๐Ÿ˜Š\r\n\r\n@stevhliu let me know is it anything's else, thanks๐Ÿค—. \r\n" ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish version of task_summary.md toย transformers/docs/source/es Fixes #15947 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28844/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28844", "html_url": "https://github.com/huggingface/transformers/pull/28844", "diff_url": "https://github.com/huggingface/transformers/pull/28844.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28844.patch", "merged_at": 1708127406000 }
https://api.github.com/repos/huggingface/transformers/issues/28843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28843/comments
https://api.github.com/repos/huggingface/transformers/issues/28843/events
https://github.com/huggingface/transformers/pull/28843
2,115,465,267
PR_kwDOCUB6oc5l3E3J
28,843
Abstract image processor arg checks.
{ "login": "molbap", "id": 39954772, "node_id": "MDQ6VXNlcjM5OTU0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/molbap", "html_url": "https://github.com/molbap", "followers_url": "https://api.github.com/users/molbap/followers", "following_url": "https://api.github.com/users/molbap/following{/other_user}", "gists_url": "https://api.github.com/users/molbap/gists{/gist_id}", "starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/molbap/subscriptions", "organizations_url": "https://api.github.com/users/molbap/orgs", "repos_url": "https://api.github.com/users/molbap/repos", "events_url": "https://api.github.com/users/molbap/events{/privacy}", "received_events_url": "https://api.github.com/users/molbap/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also linked to #28711 as I discovered logic flow issues here, seems fitting to abstract them separately and deal with the actual processing in the main PR. Here I'll try to stick to verifications and fix what's necessary to satisfy existing tests.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28843). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts I think it's ok to take another look at this one now! Improved a few things, didn't add much, will rebase the other refactor off of that one " ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? This refactors the existing image processor argument checks that sprawl across all existing models that have an `ImageProcessor`. Lines such as ```python if do_resize is not None and size is None: raise ValueError("Size and max_size must be specified if do_resize is True.") if do_rescale is not None and rescale_factor is None: raise ValueError("Rescale factor must be specified if do_rescale is True.") if do_normalize is not None and (image_mean is None or image_std is None): raise ValueError("Image mean and std must be specified if do_normalize is True.") ``` can be abstracted away into a `validate...` function residing in `image_utils`. This PR also fixes (when it doesn't break BC) some cases where the existence of arguments is checked, but the actual `preprocess` method doesn't seem to use them: `bridgetower` doesn't pass `center_crop` from init, `chinese_clip` does not `convert_to_rgb`, and so on. ## Who can review? @amyeroberts
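A rough sketch of what such a shared helper could look like (the exact name and signature in `image_utils` are assumptions):

```python
def validate_preprocess_arguments(
    do_resize=None,
    size=None,
    do_rescale=None,
    rescale_factor=None,
    do_normalize=None,
    image_mean=None,
    image_std=None,
):
    """Shared argument validation that each image processor can call at the
    top of `preprocess` instead of repeating the same if/raise blocks."""
    if do_resize and size is None:
        raise ValueError("`size` must be specified if `do_resize` is True.")
    if do_rescale and rescale_factor is None:
        raise ValueError("`rescale_factor` must be specified if `do_rescale` is True.")
    if do_normalize and (image_mean is None or image_std is None):
        raise ValueError("`image_mean` and `image_std` must be specified if `do_normalize` is True.")
```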
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28843/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28843/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28843", "html_url": "https://github.com/huggingface/transformers/pull/28843", "diff_url": "https://github.com/huggingface/transformers/pull/28843.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28843.patch", "merged_at": 1708423546000 }
https://api.github.com/repos/huggingface/transformers/issues/28842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28842/comments
https://api.github.com/repos/huggingface/transformers/issues/28842/events
https://github.com/huggingface/transformers/pull/28842
2,115,229,416
PR_kwDOCUB6oc5l2Q0V
28,842
Mark `test_encoder_decoder_model_generate` for `vision_encoder_decoder` as flaky
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28842). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "thanks, everyone has a nice Friday evening now :-) I hope" ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Marks the tests as flaky to avoid blocking PRs. Reference failing CI run: https://app.circleci.com/pipelines/github/huggingface/transformers/83611/workflows/666b01c9-1be8-4daa-b85d-189e670fc168/jobs/1078635/tests#failed-test-0 Reference issue for tracking: #28841 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
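For illustration, marking a test as flaky with the `is_flaky` helper from `transformers.testing_utils` looks roughly like this (placeholder test class; the real change decorates `ViT2TrOCR::test_encoder_decoder_model_generate` in the vision encoder-decoder test file, and the decorator arguments are assumptions):

```python
import unittest

from transformers.testing_utils import is_flaky


class EncoderDecoderGenerateTest(unittest.TestCase):
    # Placeholder test; the real change would decorate
    # tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py
    @is_flaky(description="output length mismatch, tracked in #28841")
    def test_encoder_decoder_model_generate(self):
        self.assertEqual(2 + 2, 4)
```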
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28842/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28842", "html_url": "https://github.com/huggingface/transformers/pull/28842", "diff_url": "https://github.com/huggingface/transformers/pull/28842.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28842.patch", "merged_at": 1706893028000 }
https://api.github.com/repos/huggingface/transformers/issues/28841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28841/comments
https://api.github.com/repos/huggingface/transformers/issues/28841/events
https://github.com/huggingface/transformers/issues/28841
2,115,227,084
I_kwDOCUB6oc5-E83M
28,841
test_encoder_decoder_model_generate for vision_encoder_decoder is flaky
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @zucchini-nlp" ]
1,706
1,707
1,707
COLLABORATOR
null
### System Info transformers 4.38.0dev ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hard to reproduce, unfortunately. Running the single test enough times will trigger a failure: ``` python -m pytest -v tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2TrOCR::test_encoder_decoder_model_generate ``` Fails with: ``` FAILED tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py::ViT2TrOCR::test_encoder_decoder_model_generate - AssertionError: torch.Size([13, 8]) != (13, 20) ``` Reference CI run: https://app.circleci.com/pipelines/github/huggingface/transformers/83611/workflows/666b01c9-1be8-4daa-b85d-189e670fc168/jobs/1078635/tests#failed-test-0 ### Expected behavior Non-flaky behaviour for the tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28841/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28840/comments
https://api.github.com/repos/huggingface/transformers/issues/28840/events
https://github.com/huggingface/transformers/pull/28840
2,115,160,657
PR_kwDOCUB6oc5l2Bzd
28,840
Use `-v` for `pytest` on CircleCI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Example run\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/83697/workflows/6b7bf7c2-cf98-474c-a3dd-3eea4cfa0cfa", "For context for anyone coming cold to this PR - the motivation for introducing this is to enable retrieving which tests were run on which process to help when debugging.\r\n\r\nRecently we had an issue with flaky tests, because some tests were affecting other tests. For example, the global logger state - which was resolved in #28638. Failures like these only happen sporadically because the interacting tests aren't always in the same process. \r\n\r\nHaving the outputs means we can easily narrow down the culprit tests to a subset of size `1 / pytest_num_workers`. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28840). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I guess it's still not (super) easy to achieve re-run the subset of tests that were run in a process (grabbed from log). For example, we might get 20000 test names. It won't work if we concatenate those many test names and paste that long line to the terminal to run the test. \r\n\r\nWe can have some utility to grab the folders (their name) seen by a process. This is more realistic, but not urgent task. I could work on this later." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? So we get info about which process ran which tests > [gw4] [ 99%] PASSED tests/models/kosmos2/test_processor_kosmos2.py::Kosmos2ProcessorTest::test_model_input_names tests/models/kosmos2/test_processor_kosmos2.py::Kosmos2ProcessorTest::test_processor [gw3] [ 99%] PASSED tests/models/pix2struct/test_processor_pix2struct.py::Pix2StructProcessorTest::test_model_input_names tests/models/pix2struct/test_processor_pix2struct.py::Pix2StructProcessorTest::test_processor [gw0] [ 99%] PASSED tests/models/bloom/test_tokenization_bloom.py::BloomTokenizationTest::test_training_new_tokenizer Full CI will produce ~100K lines. The output is also saved and uploaded as an artifact.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28840", "html_url": "https://github.com/huggingface/transformers/pull/28840", "diff_url": "https://github.com/huggingface/transformers/pull/28840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28840.patch", "merged_at": 1706888653000 }
https://api.github.com/repos/huggingface/transformers/issues/28839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28839/comments
https://api.github.com/repos/huggingface/transformers/issues/28839/events
https://github.com/huggingface/transformers/issues/28839
2,115,080,605
I_kwDOCUB6oc5-EZGd
28,839
Llava-demo-4bit.ipynb Error (KeyError: 'llava')
{ "login": "RonanKMcGovern", "id": 78278410, "node_id": "MDQ6VXNlcjc4Mjc4NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RonanKMcGovern", "html_url": "https://github.com/RonanKMcGovern", "followers_url": "https://api.github.com/users/RonanKMcGovern/followers", "following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}", "gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}", "starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions", "organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs", "repos_url": "https://api.github.com/users/RonanKMcGovern/repos", "events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}", "received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You are probably running this on a local host, which does not have the latest version of `transformers` make sure to update with `pip install -U transformers`. You should have `import transformers;print(transformers.__version__)` >= transformers==4.37.2", "Thanks @ArthurZucker , I was running the colab notebook in colab, which specifies that version.\r\n```\r\n!pip install -q -U transformers==4.37.2\r\n```\r\n\r\nI ran again today and it worked, so I'm not quite sure what the issue was and it seems there was no bug." ]
1,706
1,707
1,707
NONE
null
### System Info See [this notebook from HuggingFace](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing#scrollTo=DFVZgElEQk3x) ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the notebook through to get this error: ``` KeyError Traceback (most recent call last) [<ipython-input-15-2c1efb550f3d>](https://localhost:8080/#) in <cell line: 6>() 4 5 model_id = "llava-hf/llava-1.5-7b-hf" ----> 6 pipe = pipeline("image-to-text", model=model_id) 7 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" 8 2 frames [/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py](https://localhost:8080/#) in __getitem__(self, key) 759 [ 760 ("openai-gpt", "openai"), --> 761 ("data2vec-audio", "data2vec"), 762 ("data2vec-text", "data2vec"), 763 ("data2vec-vision", "data2vec"), KeyError: 'llava' ``` ### Expected behavior I would expect the notebook to run through the image + text example.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28838/comments
https://api.github.com/repos/huggingface/transformers/issues/28838/events
https://github.com/huggingface/transformers/pull/28838
2,114,848,356
PR_kwDOCUB6oc5l08WZ
28,838
fix / skip (for now) some tests before switch to torch 2.2
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note, this PR doesn't unpin torch ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28838). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Some changes are necessary to switch to 2.2. Others are flaky even in torch 2.1.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28838", "html_url": "https://github.com/huggingface/transformers/pull/28838", "diff_url": "https://github.com/huggingface/transformers/pull/28838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28838.patch", "merged_at": 1706879510000 }
https://api.github.com/repos/huggingface/transformers/issues/28837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28837/comments
https://api.github.com/repos/huggingface/transformers/issues/28837/events
https://github.com/huggingface/transformers/pull/28837
2,114,828,881
PR_kwDOCUB6oc5l04GH
28,837
Llama: device/type-invariant RoPE sin/cos computation, eager attention matches original implementation
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28837). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
null
MEMBER
null
# What does this PR do? This PR fixes the following problems, all related to RoPE: 1. Casting a model with `.from_pretrained(..., torch_dtype=...)` or `.to(dtype=...)` would produce different sin/cos tensors at recomputation time. The underlying cause was `inv_freq` being a buffer, which means it was subject to buffer manipulation (like a `.to()` operation in the wrapping module). Note that [the original repo assumed it was always a `torch.float32` tensor](https://github.com/facebookresearch/llama/blob/ef351e9cd9496c579bf9f2bb036ef11bdc5ca3d2/llama/model.py#L100). In some models, there was a visible performance degradation when doing inference with `seq_len > max_position_embeddings` (see [here](https://github.com/huggingface/transformers/issues/28685#issuecomment-1921473526)); 2. The `inv_freq` tensor was being loaded from the state dict, due to a previous version of the code where it was a persistent buffer; 3. โš ๏ธ Perhaps more importantly, the sin/cos tensors are now always computed on CPU. As pointed out in [this comment](https://github.com/microsoft/DeepSpeed/issues/4932#issuecomment-1911509172), there are subtle numerical differences that depend on the initialization device, which quickly escalate into further downstream issues. This particular change results in the following: a. Smaller modeling performance differences across devices, as CPUs are ubiquitous (as opposed to accelerators, which may change); b. Prevention of loss spikes at train time, possibly due to the more accurate sin/cos computation (see [this comment](https://github.com/microsoft/DeepSpeed/issues/4932#issuecomment-1902039818) and the whole issue); c. Slightly slower throughput when recomputing the sin/cos tensors, i.e. when going beyond `self.max_seq_len_cached`. See additional data and experiments below for the impact of this PR. Most of the diff in this PR is tests, to ensure we don't regress ๐Ÿค— Suggested review order: 1. Llama modelling changes 2. Llama test changes 3. GPTNeoX changes (fixes dtype cast as intended, see experiments below :) ) 4. Other models (direct #Copied from changes) 5. Other tests (copy/paste) (Other RoPE models will follow in a future PR) ## Related GH issues Fixes https://github.com/huggingface/transformers/issues/28685 Fixes https://github.com/huggingface/transformers/issues/25681 Fixes https://github.com/huggingface/transformers/issues/28596 Fixes https://github.com/huggingface/transformers/issues/27179 Should fix/help https://github.com/microsoft/DeepSpeed/issues/4932 ## Additional data and experiments <details> <summary>Perlplexity, memory, and latency results before/after this PR</summary> NOTE: using the `.to()` casting method. The `torch_dtype` sees no differences, as `inv_freq` is not casted. 
<details> <summary>Llama 2 -- very little ppl differences</summary> Dtype: `bfloat16` (ignore the vram -- the latest commit has the same GPU memory footprint as `main`) ![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/b3b882b1-a324-4c58-92e4-310d4bbc48cb) ![plot_latency](https://github.com/huggingface/transformers/assets/12240844/043ae176-0fe8-4bf3-a3d2-f2b4e542496b) Dtype: `float16` (ignore the vram -- the latest commit has the same GPU memory footprint as `main`) ![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/5348e55a-958d-4d2f-a3d4-bebadd6637e8) ![plot_latency](https://github.com/huggingface/transformers/assets/12240844/2d5cc562-1ac2-4277-ac7e-89adfb8438ff) </details> <details> <summary>TinyLlama -- visible ppl upgrade</summary> Dtype: `bfloat16` (ignore the vram -- the latest commit has the same GPU memory footprint as `main`) ![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/aecbd70b-ffb5-4b4a-bc1e-e1ac2c4725c8) ![plot_latency](https://github.com/huggingface/transformers/assets/12240844/7c19fccc-885d-4bfa-ad35-bc2944f06b06) Dtype: `float16` (ignore the vram -- the latest commit has the same GPU memory footprint as `main`) ![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/f7b39bf4-0b03-4049-a702-b28c5b1cd2bf) ![plot_latency](https://github.com/huggingface/transformers/assets/12240844/e045e5d0-dc44-4867-a595-b74728e5ab70) </details> </details> <details> <summary>How sensible is the sin/cos creation to the device placement?</summary> Consider the following script: ```py import torch from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding TEST_DTYPE = torch.bfloat16 for dim in (64, 256, 1024): for max_position_embeddings in (1024, 2048, 4096): for base in (10000, 100000, 1000000): rope_gpu = LlamaRotaryEmbedding(dim=dim, max_position_embeddings=max_position_embeddings, base=base, device='cuda') rope_cpu = LlamaRotaryEmbedding(dim=dim, max_position_embeddings=max_position_embeddings, base=base, device='cpu') rope_cpu = rope_cpu.to(device='cuda', dtype=TEST_DTYPE) rope_gpu = rope_gpu.to(device='cuda', dtype=TEST_DTYPE) max_sin_diff = (rope_gpu.sin_cached - rope_cpu.sin_cached).abs().max() max_cos_diff = (rope_gpu.cos_cached - rope_cpu.cos_cached).abs().max() max_diff = max(max_sin_diff, max_cos_diff) if max_diff > 0.0: print(f"dim={dim}, max_position_embeddings={max_position_embeddings}, base={base}, max_diff={max_diff:.2e}") ``` On `main`, before this PR, we can see differences as large as ~`1e-3` regardless of `TEST_DTYPE` (even in `torch.float64`!). After this PR, the difference is `0.0`. </details> <details> <summary>Original Llama codebase vs our codebase after this PR?</summary> Key takeaways: ๐Ÿ‘‰ sin/cos are created on the available device (and not on CPU) ๐Ÿ‘‰ sin/cos are not only kept in FP32, [but also applied in FP32](https://github.com/facebookresearch/llama/blob/ef351e9cd9496c579bf9f2bb036ef11bdc5ca3d2/llama/model.py#L156)! 
Consider the following script, which compares this hugging face's implementation against [meta's repo](https://github.com/facebookresearch/llama) ```py # run as `torchrun this_script.py` from llama import Llama from transformers import AutoModelForCausalLM import torch # Loaded in FP16 on GPU original_llama = Llama.build( ckpt_dir="/home/joao/meta_llama/Llama-2-7b/", tokenizer_path="/home/joao/meta_llama/Llama-2-7b/tokenizer.model", max_seq_len=2048, # internaly, 2048*2 is considered to compute sin/cos max_batch_size=1, ) og_logits = original_llama.model(tokens=torch.tensor([list(range(1000))]), start_pos=0) og_sin = original_llama.model.freqs_cis.imag og_cos = original_llama.model.freqs_cis.real del original_llama torch.cuda.empty_cache() # Loaded in FP16 on GPU transformers_llama = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-hf", device_map="auto", torch_dtype=torch.float16 ) logits = transformers_llama(torch.tensor([list(range(1000))])).logits.float() sin = transformers_llama.model.layers[0].self_attn.rotary_emb.sin_cached cos = transformers_llama.model.layers[0].self_attn.rotary_emb.cos_cached logits_diff = (og_logits.cpu() - logits.cpu()).abs().max() print(f"Max logits diff: {logits_diff.item()}") # .cat -> our sin/cos have a period of 4pi (2 cycles), the orginal have a period of 2pi (1 cycle) # .float() -> on main, we cast sin/cos to the model dtype sin_diff = (torch.cat([og_sin, og_sin], dim=1).cpu() - sin.float().cpu()).abs().max() cos_diff = (torch.cat([og_cos, og_cos], dim=1).cpu() - cos.float().cpu()).abs().max() print(f"Max sin diff: {sin_diff.item()}") print(f"Max cos diff: {cos_diff.item()}") ``` On `main` + GPU + FP16, before this PR, we can see sin/cos and logits differences as large as `2e-4` and `6e-2` (respectively). After this PR, the difference is `0.0`. </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28837/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28837", "html_url": "https://github.com/huggingface/transformers/pull/28837", "diff_url": "https://github.com/huggingface/transformers/pull/28837.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28837.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28836/comments
https://api.github.com/repos/huggingface/transformers/issues/28836/events
https://github.com/huggingface/transformers/pull/28836
2,114,788,501
PR_kwDOCUB6oc5l0vTq
28,836
Avoid edge case in audio utils
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28836). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
null
COLLABORATOR
null
# What does this PR do? Throw an error if the spectrogram is complex-valued but `mel_filters` are passed. Fixes #27772 cc @sanchit-gandhi @ArthurZucker
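For illustration, here is a minimal, self-contained sketch of the edge-case check described above; the helper name and error message are assumptions for this example, not the PR's actual diff in `transformers.audio_utils`.

```python
from typing import Optional

import numpy as np


def check_mel_filters_usage(spec: np.ndarray, mel_filters: Optional[np.ndarray]) -> None:
    """Hypothetical helper: refuse to apply mel filter banks to a complex spectrogram."""
    # Mel filters only make sense on a real magnitude/power spectrogram; applying them
    # to complex STFT values silently produces meaningless results, hence the error.
    if mel_filters is not None and np.iscomplexobj(spec):
        raise ValueError(
            "Cannot apply mel filters to a complex-valued spectrogram. "
            "Compute a magnitude or power spectrogram first."
        )


# Example: a real spectrogram with filters passes; a complex one would raise.
check_mel_filters_usage(np.ones((201, 10)), np.ones((80, 201)))
```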
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28836/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28836/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28836", "html_url": "https://github.com/huggingface/transformers/pull/28836", "diff_url": "https://github.com/huggingface/transformers/pull/28836.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28836.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28835/comments
https://api.github.com/repos/huggingface/transformers/issues/28835/events
https://github.com/huggingface/transformers/issues/28835
2,114,779,598
I_kwDOCUB6oc5-DPnO
28,835
ZeroDivisionError in inverse_sqrt scheduler
{ "login": "Sangh0", "id": 47784418, "node_id": "MDQ6VXNlcjQ3Nzg0NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/47784418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sangh0", "html_url": "https://github.com/Sangh0", "followers_url": "https://api.github.com/users/Sangh0/followers", "following_url": "https://api.github.com/users/Sangh0/following{/other_user}", "gists_url": "https://api.github.com/users/Sangh0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sangh0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sangh0/subscriptions", "organizations_url": "https://api.github.com/users/Sangh0/orgs", "repos_url": "https://api.github.com/users/Sangh0/repos", "events_url": "https://api.github.com/users/Sangh0/events{/privacy}", "received_events_url": "https://api.github.com/users/Sangh0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @muellerzr ", "Thank you for your interest in this issue. I will close this issue now." ]
1,706
1,708
1,708
NONE
null
Hello! I've been finding your repository very useful. However, I've encountered a bug. It occurs when trying to train using the inverse_sqrt scheduler, leading to a ZeroDivisionError. Please refer to the following code link for details. Therefore, I suggest modifying the code from `decay = 1.0 / math.sqrt((current_step + shift) / timescale)` to `decay = 1.0 / math.sqrt((current_step + shift) / (timescale + 1e-9))`. [Bug code](https://github.com/huggingface/transformers/blob/ec29d25d9f7109f3fdaadfa51515eb6745a136ba/src/transformers/optimization.py#L292) ![zerodivisionerror](https://github.com/huggingface/transformers/assets/47784418/5ce8c101-5da9-49c6-b8c4-2a79c9ab5f80) @muellerzr @pacman100 Could you please check this issue? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python args = TrainingArguments( output_dir="./checkpoints", lr_scheduler_type="inverse_sqrt", # ZeroDivisionError ) ``` ### Expected behavior ```python decay = 1.0 / math.sqrt((current_step + shift) / (timescale + 1e-9)) ```
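For context, here is a self-contained sketch of the schedule shape involved and the kind of guard being proposed; the function name and the warmup handling are illustrative assumptions rather than the library's exact implementation.

```python
import math


def inverse_sqrt_lr_lambda(current_step: int, num_warmup_steps: int, timescale: int) -> float:
    # Linear warmup followed by a 1/sqrt(step) decay, as in the schedule discussed above.
    if current_step < num_warmup_steps:
        return current_step / max(1, num_warmup_steps)
    shift = timescale - num_warmup_steps
    # Clamping the denominator (or adding a small epsilon, as suggested in the issue)
    # avoids the ZeroDivisionError that occurs when timescale ends up as 0.
    return 1.0 / math.sqrt((current_step + shift) / max(1, timescale))


# With num_warmup_steps=0 and timescale=0 the unguarded formula divides by zero;
# the guarded version instead degrades to a plain 1/sqrt(step) decay (0.1 here).
print(inverse_sqrt_lr_lambda(100, num_warmup_steps=0, timescale=0))
```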
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28835/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28834/comments
https://api.github.com/repos/huggingface/transformers/issues/28834/events
https://github.com/huggingface/transformers/pull/28834
2,114,594,175
PR_kwDOCUB6oc5l0Eac
28,834
Fix issues caused by natten
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> might have nightly issues no?\r\n\r\nsorry, what do you mean? You mean the daily CI? I haven't have the time to check, recently quite a lot of changes arriving", "I mean we might want to change this for the daily ci as well ๐Ÿ˜‰ ", "Sure, but let's unblock red CircleCI first (I'm fine but you probably need it's green more than me need it ๐Ÿ˜„ )", "Yep that's why I approved! ๐Ÿ‘๐Ÿป ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28834). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Don't really know what happened, but from the suggestion > E Failed to import NATTEN's CPP backend. This could be due to an invalid/incomplete install. Please uninstall NATTEN (pip uninstall natten) and re-install with the correct torch build: shi-labs.com/natten I changed the installation command. It works, see https://app.circleci.com/pipelines/github/huggingface/transformers/83676/workflows/36ffade9-c7e6-49c2-b8f4-bb5fcb016a83/jobs/1079519 Fix current error we have > __________ ERROR collecting tests/models/dinat/test_modeling_dinat.py __________ ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/natten/functional.py:28: in <module> from natten import _C E ImportError: /home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/natten/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops10zeros_like4callERKNS_6TensorESt8optionalIN3c1010ScalarTypeEES5_INS6_6LayoutEES5_INS6_6DeviceEES5_IbES5_INS6_12MemoryFormatEE
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28834", "html_url": "https://github.com/huggingface/transformers/pull/28834", "diff_url": "https://github.com/huggingface/transformers/pull/28834.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28834.patch", "merged_at": 1706875908000 }
https://api.github.com/repos/huggingface/transformers/issues/28833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28833/comments
https://api.github.com/repos/huggingface/transformers/issues/28833/events
https://github.com/huggingface/transformers/pull/28833
2,114,581,716
PR_kwDOCUB6oc5l0Bob
28,833
Dynamic parallel processing Size Adjustment for Low Mem Beam Search
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @Saibo-creator ๐Ÿ‘‹ \r\n\r\nWe're doing a sprint to add `torch.compile` support on `generate` ([tracker](https://github.com/huggingface/transformers/issues/28981)), so I'm halting the addition of changes that substantially modify a decoding method until that is complete. In particular, beam search will have to be rewritten, so this PR will likely need to come in a different shape. \r\n\r\nI'll keep you updated ๐Ÿค— " ]
1,706
1,707
null
CONTRIBUTOR
null
# What does this PR do? TL;DR This PR addresses feedback from the community, specifically a [suggestion](https://github.com/huggingface/transformers/pull/26304#issuecomment-1900489106) from @gante, to enhance memory management in beam search operations without adding complexity through additional flags. This development strikes a balance between performance and usability, ensuring the model dynamically adjusts to various hardware constraints. ## Details This Pull Request (PR) introduces the ability to dynamically adjust the batch size during low memory beam search operations. Our traditional beam search, with a beam width of `k` and a batch size of `n`, operates as though the batch size were `n*k`. The recently introduced [low memory beam search](https://github.com/huggingface/transformers/pull/26304) improves memory efficiency by dividing the `n*k` batch into `k` sub-batches of size `n`. However, this approach has shown limitations, particularly in two scenarios: 1. **Optimizing for Hardware's Maximum Parallel Processing Capacity (`s`)**: In instances where the hardware's maximum parallel processing capacity `s` falls between `n*k` and `n`, our current method might not utilize the available resources efficiently. For example, with `n=10`, `k=10`, and `s=30`, the low memory beam search would execute ten sequential operations with a batch size of 10, whereas it could achieve better throughput with four operations of batch size 25. 2. **Handling Out-Of-Memory (OOM) Errors When `s` < `n`**: In cases where `s` is smaller than `n`, the low memory beam search might encounter OOM errors, even though a further split of the batch could allow the operation to proceed. While one might argue for using smaller batch sizes from the start, this PR provides a solution to optimize processing dynamically. ### Implementation Highlights: - **Dynamic Batch Size Adjustment**: By adopting a try/except loop, the system starts with the standard beam search parameters and dynamically reduces the batch size by half upon encountering OOM errors, with a minimum threshold set at 1 (a minimal sketch of this pattern is shown after this description). This mechanism ensures optimal memory usage and performance efficiency. - **Global Batch Size Caching**: The implementation includes caching the most recent successful batch size in a global variable, `optimal_low_mem_beam_search_bs`. This approach allows for rapid adaptation to the most efficient processing conditions without the need for rediscovery. As text inputs lengthen and memory usage increases during generation, `optimal_low_mem_beam_search_bs` is periodically updated to reflect the most current optimal conditions. ### API Impact: This update will be transparent to end users, involving no changes to the existing API. Users can expect improved efficiency without any alteration to the results produced by previous implementations. ### Testing: Existing tests confirm that the results from the low memory beam search align with those from the traditional beam search method. Specific tests for dynamic parallel processing sizes are not yet implemented. If you think it's worth adding some, I have a draft below. ### Doc: Do you think we should mention this in the doc? Currently we have ``` sequential (`bool`, defaults to `False`): By default, beam search has `batch_size * num_beams` as effective batch size (see `beam_search()` for more details). This flag will avoid parallelizing the beam search and will instead run beam search sequentially. 
``` in the [doc](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.low_memory) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
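As mentioned in the implementation highlights, here is a minimal sketch of the halve-on-OOM pattern this PR describes. The helper name, the slicing scheme, and the reuse of `optimal_low_mem_beam_search_bs` as a module-level cache are illustrative assumptions, not the PR's actual code.

```python
import torch

# Cached across calls so later generations start from the last sub-batch size that fit in memory.
optimal_low_mem_beam_search_bs = None


def forward_in_sub_batches(model_inputs: dict, full_batch_size: int, run_sub_batch):
    """Run `run_sub_batch` over slices of `model_inputs`, halving the slice size on CUDA OOM."""
    global optimal_low_mem_beam_search_bs
    batch_size = optimal_low_mem_beam_search_bs or full_batch_size
    while True:
        try:
            outputs = []
            for start in range(0, full_batch_size, batch_size):
                sliced = {k: v[start : start + batch_size] for k, v in model_inputs.items()}
                outputs.append(run_sub_batch(sliced))
            optimal_low_mem_beam_search_bs = batch_size  # remember what worked
            return outputs
        except torch.cuda.OutOfMemoryError:
            if batch_size == 1:
                raise  # nothing left to split, give up
            batch_size = max(1, batch_size // 2)
            torch.cuda.empty_cache()
```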
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28833/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28833", "html_url": "https://github.com/huggingface/transformers/pull/28833", "diff_url": "https://github.com/huggingface/transformers/pull/28833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28833.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28832/comments
https://api.github.com/repos/huggingface/transformers/issues/28832/events
https://github.com/huggingface/transformers/issues/28832
2,114,324,509
I_kwDOCUB6oc5-Bggd
28,832
Add gpt_neox in fx supported models?
{ "login": "TXacs", "id": 60869411, "node_id": "MDQ6VXNlcjYwODY5NDEx", "avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TXacs", "html_url": "https://github.com/TXacs", "followers_url": "https://api.github.com/users/TXacs/followers", "following_url": "https://api.github.com/users/TXacs/following{/other_user}", "gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}", "starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TXacs/subscriptions", "organizations_url": "https://api.github.com/users/TXacs/orgs", "repos_url": "https://api.github.com/users/TXacs/repos", "events_url": "https://api.github.com/users/TXacs/events{/privacy}", "received_events_url": "https://api.github.com/users/TXacs/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "feel free to open a PR and ping @fxmarty ! ๐Ÿค— " ]
1,706
1,706
null
NONE
null
### System Info change `qkv = qkv.view(*new_qkv_shape)` to ` qkv = qkv.view(new_qkv_shape)` I've made a modification, and now it can be traced by `hftracer`. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior I wonder if this could be incorporated into future versions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28832/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28831/comments
https://api.github.com/repos/huggingface/transformers/issues/28831/events
https://github.com/huggingface/transformers/issues/28831
2,114,290,540
I_kwDOCUB6oc5-BYNs
28,831
Missed a guard on hf_quantizer.check_quantized_param
{ "login": "pharaouk", "id": 36641995, "node_id": "MDQ6VXNlcjM2NjQxOTk1", "avatar_url": "https://avatars.githubusercontent.com/u/36641995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pharaouk", "html_url": "https://github.com/pharaouk", "followers_url": "https://api.github.com/users/pharaouk/followers", "following_url": "https://api.github.com/users/pharaouk/following{/other_user}", "gists_url": "https://api.github.com/users/pharaouk/gists{/gist_id}", "starred_url": "https://api.github.com/users/pharaouk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pharaouk/subscriptions", "organizations_url": "https://api.github.com/users/pharaouk/orgs", "repos_url": "https://api.github.com/users/pharaouk/repos", "events_url": "https://api.github.com/users/pharaouk/events{/privacy}", "received_events_url": "https://api.github.com/users/pharaouk/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada ", "Yes, should be fixed by #28804", "@pharaouk could you try on transformers main?" ]
1,706
1,706
null
NONE
null
### System Info ``` model = AutoModelForCausalLM.from_pretrained( File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 562, in from_pretrained return model_class.from_pretrained( File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3504, in from_pretrained ) = cls._load_pretrained_model( File "/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3732, in _load_pretrained_model ) or not hf_quantizer.check_quantized_param( AttributeError: 'NoneType' object has no attribute 'check_quantized_param' ``` This only happens with multi-GPU for some reason; a guard on `hf_quantizer` appears to have been missed. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Private code ### Expected behavior Code not breaking
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28831/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28830/comments
https://api.github.com/repos/huggingface/transformers/issues/28830/events
https://github.com/huggingface/transformers/pull/28830
2,114,243,647
PR_kwDOCUB6oc5ly3mS
28,830
Reduce GPU memory usage when using FSDP+PEFT
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28830). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? 1. For FSDP+PEFT, when using `use_orig_params=True`, which is now the default in Accelerate, there are no memory savings compared to FSDP full fine-tuning. We have to set `use_orig_params=False` to realize the memory savings, which makes it difficult to stay in line with Accelerate's minimal API. However, this means that the model needs to be wrapped in an FSDP unit before the creation of the optimizer. This PR wraps the model in FSDP before the creation of the optimizer so that GPU memory savings are realized when using FSDP+PEFT. ![Screenshot 2023-12-27 at 8 48 12 PM (2)](https://github.com/huggingface/transformers/assets/13534540/4156e0b1-b84a-4a32-87ad-6d111fb2f81d)
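For readers who want to reproduce this setting, a rough sketch of configuring the Accelerate FSDP plugin with `use_orig_params` disabled is below; the field and argument names follow recent Accelerate versions and are assumptions as far as this PR's exact benchmark setup is concerned.

```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Disable use_orig_params so that PEFT + FSDP actually yields the memory savings
# described above; note the model must then be FSDP-wrapped before the optimizer
# is created, which is the change this PR makes in the Trainer.
fsdp_plugin = FullyShardedDataParallelPlugin(use_orig_params=False)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```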
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28830/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28830", "html_url": "https://github.com/huggingface/transformers/pull/28830", "diff_url": "https://github.com/huggingface/transformers/pull/28830.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28830.patch", "merged_at": 1706888881000 }
https://api.github.com/repos/huggingface/transformers/issues/28829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28829/comments
https://api.github.com/repos/huggingface/transformers/issues/28829/events
https://github.com/huggingface/transformers/issues/28829
2,114,203,039
I_kwDOCUB6oc5-BC2f
28,829
AWQ models loaded with AutoModelForCausalLM.from_pretrained not as fast as AutoAWQForCausalLM.from_quantized
{ "login": "nilichen", "id": 9046815, "node_id": "MDQ6VXNlcjkwNDY4MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/9046815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nilichen", "html_url": "https://github.com/nilichen", "followers_url": "https://api.github.com/users/nilichen/followers", "following_url": "https://api.github.com/users/nilichen/following{/other_user}", "gists_url": "https://api.github.com/users/nilichen/gists{/gist_id}", "starred_url": "https://api.github.com/users/nilichen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nilichen/subscriptions", "organizations_url": "https://api.github.com/users/nilichen/orgs", "repos_url": "https://api.github.com/users/nilichen/repos", "events_url": "https://api.github.com/users/nilichen/events{/privacy}", "received_events_url": "https://api.github.com/users/nilichen/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for reporting. This is expected, the operations are not fused by default in transformers, [as per the documentation](https://huggingface.co/docs/transformers/quantization#fused-modules). ", "Hi. I did have `fused_layers = False` when loading with `AutoAWQForCausalLM.from_quantized` so in theory they should share similar latency?", "cc @younesbelkada ", "@nilichen I think the autoawq use by default fused layer norm maybe that explains the whole difference - make sure also to run greedy generation to avoid any potential bottleneck, @casper-hansen is the speedup over the transformers implementation expected?", "There should not be any difference between the two implementations when fused layers are off", "That's what I thought. Let me see if I can reproduce on a public dataset and report back.", "ok that would be great - thanks @nilichen !" ]
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-1018-aws-x86_64-with-glibc2.35 - Python version: 3.11.7 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForCausalLM.from_pretrained( "TheBloke/Mistral-7B-Instruct-v0.2-AWQ", device_map="auto", ) model = AutoAWQForCausalLM.from_quantized( "TheBloke/Mistral-7B-Instruct-v0.2-AWQ", fuse_layers=False, quant_filename="model.safetensors", safetensors=True, ) ``` I tested both on long inputs (~5k tokens, private data so I won't be able to share them) for generation; in both cases, I didn't use fused modules. `AutoModelForCausalLM.from_pretrained` took ~24s each while `AutoAWQForCausalLM.from_quantized` took ~18s each. The output token count didn't seem to differ much. I ran the comparison at least 3 times and got consistent results. ### Expected behavior The latency should be close.
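Not directly the cause of the gap reported here, but since the discussion above centers on fused modules: a hedged sketch of enabling them through `AwqConfig`, following the transformers quantization docs linked in the comments (field names may vary by version), is:

```python
from transformers import AutoModelForCausalLM, AwqConfig

# Sketch only: fuse_max_seq_len must cover the longest prompt plus generated tokens.
quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512,
    do_fuse=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
    quantization_config=quantization_config,
    device_map="auto",
)
```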
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28829/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28828/comments
https://api.github.com/repos/huggingface/transformers/issues/28828/events
https://github.com/huggingface/transformers/issues/28828
2,114,045,444
I_kwDOCUB6oc5-AcYE
28,828
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[]
1,706
1,707
null
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community ๐ŸŒ (currently 0 out of 267 complete) Who would want to translate? Please follow the ๐Ÿค— [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers ๐Ÿค—). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * ๐Ÿ™‹ If you'd like others to help you with the translation, you can also post in the ๐Ÿค— [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [x] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [x] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [x] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [x] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [x] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [x] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [x] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [x] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [x] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go ๐Ÿ”ฅ -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28828/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28827/comments
https://api.github.com/repos/huggingface/transformers/issues/28827/events
https://github.com/huggingface/transformers/issues/28827
2,113,910,992
I_kwDOCUB6oc59_7jQ
28,827
SDPA attention causes graph break while compiling model
{ "login": "huzama", "id": 34284201, "node_id": "MDQ6VXNlcjM0Mjg0MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/34284201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huzama", "html_url": "https://github.com/huzama", "followers_url": "https://api.github.com/users/huzama/followers", "following_url": "https://api.github.com/users/huzama/following{/other_user}", "gists_url": "https://api.github.com/users/huzama/gists{/gist_id}", "starred_url": "https://api.github.com/users/huzama/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huzama/subscriptions", "organizations_url": "https://api.github.com/users/huzama/orgs", "repos_url": "https://api.github.com/users/huzama/repos", "events_url": "https://api.github.com/users/huzama/events{/privacy}", "received_events_url": "https://api.github.com/users/huzama/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! would recommend you to try #27931 ๐Ÿ˜‰ " ]
1,706
1,706
null
NONE
null
### Feature request Static graph support in SDPA attention ### Motivation The use of SDPA attention significantly enhances transformers' performance and memory utilization. However, when attempting to compile it with the XLA backend on TPUs, there are graph breaks that require continuous recompilation. This makes the use of SDPA attention on TPUs impractical. ### Your contribution I did some debugging and found out that this [line](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/modeling_attn_mask_utils.py#L371) of code uses torch.all on attention_mask, which changes for each iteration, causing dynamic control flow and hence recompilation on XLA/TPU. Output of torch._dynamo.explain for Llama-7b with SDPA attention graph_count=6, graph_break_count=5, break_reasons=[GraphCompileReason(reason='Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands', user_stack=[<FrameSummary file /home/ubuntu/miniforge3/envs/transformers/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py, line 1035 in forward>, <FrameSummary file /home/ubuntu/miniforge3/envs/transformers/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py, line 372 in _prepare_4d_causal_attention_mask_for_sdpa>], graph_break=True), GraphCompileReason(reason='generic_jump TensorVariable()', user_stack=[<FrameSummary file /home/ubuntu/miniforge3/envs/transformers/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py, line 372 in _prepare_4d_causal_attention_mask_for_sdpa>], graph_break=True), GraphCompileReason(reason='dynamic shape operator: aten.nonzero.default', user_stack=[<FrameSummary file /home/ubuntu/miniforge3/envs/transformers/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py, line 243 in _unmask_unattended>], graph_break=True)], op_count=977,
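To make the debugging step above concrete, here is a sketch of collecting the graph-break report with `torch._dynamo.explain`; the call style matches recent PyTorch 2.x releases (older versions pass the example inputs directly to `explain`), and the checkpoint choice is illustrative.

```python
import torch
import torch._dynamo
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; the report above profiled Llama-7b
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa")

inputs = tokenizer("Hello world", return_tensors="pt")

# Returns an ExplainOutput with graph_count, graph_break_count, break_reasons, op_count, ...
explanation = torch._dynamo.explain(model)(**inputs)
print(explanation.graph_break_count)
for reason in explanation.break_reasons:
    print(reason)
```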
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28827/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28826/comments
https://api.github.com/repos/huggingface/transformers/issues/28826/events
https://github.com/huggingface/transformers/issues/28826
2,113,842,792
I_kwDOCUB6oc59_q5o
28,826
Llama 2 model divergence with FSDP
{ "login": "Teng-xu", "id": 67929972, "node_id": "MDQ6VXNlcjY3OTI5OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/67929972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Teng-xu", "html_url": "https://github.com/Teng-xu", "followers_url": "https://api.github.com/users/Teng-xu/followers", "following_url": "https://api.github.com/users/Teng-xu/following{/other_user}", "gists_url": "https://api.github.com/users/Teng-xu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Teng-xu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Teng-xu/subscriptions", "organizations_url": "https://api.github.com/users/Teng-xu/orgs", "repos_url": "https://api.github.com/users/Teng-xu/repos", "events_url": "https://api.github.com/users/Teng-xu/events{/privacy}", "received_events_url": "https://api.github.com/users/Teng-xu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada I think we have seen something similar recently? ", "@Teng-xu are you correctly enabling mixed precision through `bf16=True` in `TrainingArguments` ?", "Yeah bf16 was passed into the training args, and I can verify it is being applied correctly.", "Just to provide more context on this issue I am attaching a simple script to reproduce the issue and its associated output. Note, I am just using a random tensor as the dataset and for consistency I just saved the labels associated from another training script and loaded it from a pickle object.\r\n\r\n## Script:\r\n```\r\nimport functools\r\n\r\nimport numpy as np\r\nimport torch\r\n\r\n# pylint: disable=import-error,import-outside-toplevel,invalid-name,no-member,no-name-in-module,protected-access\r\nimport transformers\r\nfrom fsdp_utils import get_transformer_layer\r\nfrom learning_rates import AnnealingLR # pylint: disable=wrong-import-order\r\nfrom logging_utils import get_logger\r\nfrom packaging import version as pversion\r\nfrom torch.nn import LayerNorm\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers.models.llama.modeling_llama import LlamaRMSNorm\r\n#model init\r\n# flash_attention_2, sdpa, eager\r\nmodel1 = AutoModelForCausalLM.from_pretrained(pretrained_model_weights, attn_implementation=\"flash_attention_2\")\r\nmodel2 = AutoModelForCausalLM.from_pretrained(pretrained_model_weights, attn_implementation=\"sdpa\")\r\n\r\nmodel1.model.layers = model1.model.layers[:4]\r\nmodel2.model.layers = model2.model.layers[:4]\r\n\r\nmodel1 = model1.type(torch.bfloat16)\r\nmodel2 = model2.type(torch.bfloat16)\r\n\r\nmodel1 = model1.to(\"cuda\")\r\nmodel2 = model2.to(\"cuda\")\r\n\r\n\r\n# creating dummy tensor\r\ntensor = torch.randint(low=0, high=9, size=(1, 4096), dtype=torch.int32).to(\"cuda\")\r\n#tensor = torch.randint([1, 4096], dtype=torch.int32).to(\"cuda\")\r\nimport pickle\r\nlabels = pickle.load( open( \"labels.p\", \"rb\" ) ).to(\"cuda\")\r\n\r\n# model fwd/bwd pass\r\nout1 = model1(input_ids=tensor, attention_mask=None, labels=labels)\r\nloss1 = out1[\"loss\"]\r\nlogits1 = out1[\"logits\"]\r\n\r\nout2 = model2(input_ids=tensor, attention_mask=None, labels=labels)\r\nloss2 = out2[\"loss\"]\r\nlogits2 = out2[\"logits\"]\r\n\r\n# model output cmp\r\nif torch.allclose(logits1, logits2, atol=1e-0):\r\n print(\"logits equal~~~~~~~~~\")\r\nelse:\r\n print(\"logits not equal~~~~~~~~~~\")\r\n\r\nprint(\"logits 1:\")\r\nprint(logits1)\r\n\r\nprint(\"logits 2:\")\r\nprint(logits2)\r\n\r\nprint(\"max diff between logits:\")\r\nprint(torch.max(torch.abs(logits1 - logits2)))\r\n\r\nloss1.backward()\r\nloss2.backward()\r\n\r\nprint(\"loss 1:\")\r\nprint(loss1)\r\n\r\nprint(\"loss 2:\")\r\nprint(loss2)\r\n\r\nif (torch.allclose(loss1, loss2)):\r\n print(\"loss equal~~~~~~~~~\")\r\nelse:\r\n print(\"loss not equal~~~~~~~~~~\")\r\n```\r\n\r\n## Output of script:\r\n```You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour \r\nYou are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. \r\nFlash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. No dtype was provided, you should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator. \r\nFlash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. 
No dtype was provided, you should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator. \r\nLoading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:15<00:00, 7.91s/it]\r\nLoading checkpoint shards: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:24<00:00, 12.15s/it]\r\nlogits equal~~~~~~~~~ \r\nlogits 1: \r\ntensor([[[-1.3047, -2.2812, 2.2500, ..., -1.6094, -1.5078, 0.6914], \r\n [-2.1094, -4.6875, 1.2031, ..., -1.1484, -1.7109, -0.4336], \r\n [-1.1719, -4.6562, 0.3516, ..., 0.3301, -0.9727, 0.2852], \r\n ..., \r\n [-2.2188, 8.8125, 1.4219, ..., -1.3906, -1.7266, -3.6250], \r\n [-0.9844, 11.0625, 0.7617, ..., -0.4609, 0.0225, -2.7188], \r\n [-1.0234, 10.8750, 0.8125, ..., -0.4395, -0.1641, -2.7656]]], \r\n device='cuda:0', grad_fn=<ToCopyBackward0>) \r\nlogits 2: \r\ntensor([[[-1.3047e+00, -2.2812e+00, 2.2500e+00, ..., -1.6094e+00, \r\n -1.5078e+00, 6.9141e-01], \r\n [-2.1094e+00, -4.6875e+00, 1.2031e+00, ..., -1.1484e+00, \r\n -1.7109e+00, -4.3359e-01], \r\n [-1.1719e+00, -4.6562e+00, 3.5156e-01, ..., 3.3008e-01, \r\n -9.7266e-01, 2.8516e-01], \r\n ..., \r\n [-2.2188e+00, 8.8125e+00, 1.4297e+00, ..., -1.3984e+00, \r\n -1.7344e+00, -3.6562e+00], \r\n [-9.8047e-01, 1.1062e+01, 7.5391e-01, ..., -4.3945e-01, \r\n -3.9673e-04, -2.7188e+00], \r\n [-1.0391e+00, 1.0875e+01, 8.1641e-01, ..., -4.4922e-01, \r\n -1.7188e-01, -2.7812e+00]]], device='cuda:0', \r\n grad_fn=<ToCopyBackward0>) \r\nmax diff between logits: \r\ntensor(0.2500, device='cuda:0', grad_fn=<MaxBackward1>) \r\nloss 1: \r\ntensor(13.4215, device='cuda:0', grad_fn=<NllLossBackward0>) \r\nloss 2: \r\ntensor(13.4206, device='cuda:0', grad_fn=<NllLossBackward0>) \r\nloss not equal~~~~~~~~~~```", "Tagging @pacman100 to take a look. ", "Hi @rnadimp \r\nThanks for the snippet ! \r\nI am not surprised to see that there is a relatively small difference between SDPA and FA2. The diff you shared is quite small and acceptable IMO, note that even though FA2 guarantees numerically identical results against SDPA, in practice due to kernels being different, there is always going to be a small difference between both implementations.\r\n" ]
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.37.1 - Platform: Linux-5.10.199-190.747.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.3.3 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? When fine-tuning Llama 2 model with HF 4.37 and PT FSDP, found model divergence in comparison to HF 4.31. Fine-tuning with 4.31 works fine, but with HF 4.37, the loss consistently rises instead of stabilizing when setting attn_implementation="flash_attention_2", while attn_implementation="sdpa" works fine. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The model is inited as `model = AutoModelForCausalLM.from_pretrained(pretrained_model_weights, attn_implementation="flash_attention_2")` ### Expected behavior The loss should not go up as the training goes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28826/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28825/comments
https://api.github.com/repos/huggingface/transformers/issues/28825/events
https://github.com/huggingface/transformers/pull/28825
2,113,802,629
PR_kwDOCUB6oc5lxWY_
28,825
[Docs] Fix spelling and grammar mistakes
{ "login": "khipp", "id": 9824526, "node_id": "MDQ6VXNlcjk4MjQ1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9824526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khipp", "html_url": "https://github.com/khipp", "followers_url": "https://api.github.com/users/khipp/followers", "following_url": "https://api.github.com/users/khipp/following{/other_user}", "gists_url": "https://api.github.com/users/khipp/gists{/gist_id}", "starred_url": "https://api.github.com/users/khipp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khipp/subscriptions", "organizations_url": "https://api.github.com/users/khipp/orgs", "repos_url": "https://api.github.com/users/khipp/repos", "events_url": "https://api.github.com/users/khipp/events{/privacy}", "received_events_url": "https://api.github.com/users/khipp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
1,706
CONTRIBUTOR
null
# What does this PR do? This PR fixes various spelling and grammar mistakes in the documentation and docstrings within the source code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28825/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28825", "html_url": "https://github.com/huggingface/transformers/pull/28825", "diff_url": "https://github.com/huggingface/transformers/pull/28825.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28825.patch", "merged_at": 1706859900000 }
https://api.github.com/repos/huggingface/transformers/issues/28824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28824/comments
https://api.github.com/repos/huggingface/transformers/issues/28824/events
https://github.com/huggingface/transformers/pull/28824
2,113,513,170
PR_kwDOCUB6oc5lwVa0
28,824
Explicitly check if token ID's are None in TFBertTokenizer constructor
{ "login": "skumar951", "id": 25424300, "node_id": "MDQ6VXNlcjI1NDI0MzAw", "avatar_url": "https://avatars.githubusercontent.com/u/25424300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skumar951", "html_url": "https://github.com/skumar951", "followers_url": "https://api.github.com/users/skumar951/followers", "following_url": "https://api.github.com/users/skumar951/following{/other_user}", "gists_url": "https://api.github.com/users/skumar951/gists{/gist_id}", "starred_url": "https://api.github.com/users/skumar951/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skumar951/subscriptions", "organizations_url": "https://api.github.com/users/skumar951/orgs", "repos_url": "https://api.github.com/users/skumar951/repos", "events_url": "https://api.github.com/users/skumar951/events{/privacy}", "received_events_url": "https://api.github.com/users/skumar951/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failing test is unrelated I'll merge" ]
1,706
1,707
1,706
CONTRIBUTOR
null
# What does this PR do? Since the token ID's are ints, the `if token_id` check will fail if `token_id == 0`, when it should only fail when `token_id == None`. This PR adds an explicit None check. @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28824", "html_url": "https://github.com/huggingface/transformers/pull/28824", "diff_url": "https://github.com/huggingface/transformers/pull/28824.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28824.patch", "merged_at": 1706861617000 }
https://api.github.com/repos/huggingface/transformers/issues/28823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28823/comments
https://api.github.com/repos/huggingface/transformers/issues/28823/events
https://github.com/huggingface/transformers/pull/28823
2,113,307,590
PR_kwDOCUB6oc5lvoA3
28,823
Unblock Llama2 ONNX export w/ sdpa by falling back to manual impl
{ "login": "BowenBao", "id": 9376104, "node_id": "MDQ6VXNlcjkzNzYxMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9376104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BowenBao", "html_url": "https://github.com/BowenBao", "followers_url": "https://api.github.com/users/BowenBao/followers", "following_url": "https://api.github.com/users/BowenBao/following{/other_user}", "gists_url": "https://api.github.com/users/BowenBao/gists{/gist_id}", "starred_url": "https://api.github.com/users/BowenBao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BowenBao/subscriptions", "organizations_url": "https://api.github.com/users/BowenBao/orgs", "repos_url": "https://api.github.com/users/BowenBao/repos", "events_url": "https://api.github.com/users/BowenBao/events{/privacy}", "received_events_url": "https://api.github.com/users/BowenBao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ArthurZucker, I have validated the issue is fixed under your PR, thanks! Do you have an ETA when it will get merged? Our workstreams have been blocked by this issue for a while, we need to resolve this export issue asap.", "This week ๐Ÿ˜‰ Waiting for @gante's green light and will merge #27931 (it was not clear)", "I don't understand why this change is necessary. The error that is normally raised\r\n```\r\nValueError: Attention using SDPA can not be traced with torch.jit.trace when no attention_mask is provided. To solve this issue, please either load your model with the argument attn_implementation=\"eager\" or pass an attention_mask input when tracing the model.\r\n```\r\nexplicitly gives a solution.", "@ArthurZucker @BowenBao I believe we can close this issue now that #27931 was merged" ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Unblocks Llama2 ONNX export with sdpa by falling back to manual implementation. > ValueError: Attention using SDPA can not be traced with torch.jit.trace when no attention_mask is provided. To solve this issue, please either load your model with the argument attn_implementation="eager" or pass an attention_mask input when tracing the model. Fixes #28610 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @fxmarty
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28823", "html_url": "https://github.com/huggingface/transformers/pull/28823", "diff_url": "https://github.com/huggingface/transformers/pull/28823.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28823.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28822/comments
https://api.github.com/repos/huggingface/transformers/issues/28822/events
https://github.com/huggingface/transformers/issues/28822
2,113,287,140
I_kwDOCUB6oc599jPk
28,822
Prompt feature causing repetitions and hallucinations
{ "login": "vchagari", "id": 10948110, "node_id": "MDQ6VXNlcjEwOTQ4MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/10948110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vchagari", "html_url": "https://github.com/vchagari", "followers_url": "https://api.github.com/users/vchagari/followers", "following_url": "https://api.github.com/users/vchagari/following{/other_user}", "gists_url": "https://api.github.com/users/vchagari/gists{/gist_id}", "starred_url": "https://api.github.com/users/vchagari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vchagari/subscriptions", "organizations_url": "https://api.github.com/users/vchagari/orgs", "repos_url": "https://api.github.com/users/vchagari/repos", "events_url": "https://api.github.com/users/vchagari/events{/privacy}", "received_events_url": "https://api.github.com/users/vchagari/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @vchagari ๐Ÿ‘‹ \r\n\r\nWe would need access to the audio file to be able to reproduce the issue ๐Ÿค— (or a self-contained reproducer)" ]
1,706
1,707
null
NONE
null
### System Info Hi @sanchit-gandhi and @gante Using Prompt Feature like it is mentioned here (https://github.com/huggingface/transformers/issues/22395) causing the model output to have too many repetitions and too much of hallucinations. I recorded an audio and gave it to the Whisper ASR model with prompt like as mentioned below. More details: Transformers Commit: https://github.com/huggingface/transformers/commit/1c7e5e236823cd38faac8115f96205a82c17fff9 Test-Case: Steps how to reproduce the issue. Audio contents: "The full name of Donald is Donald J. Trump Jr" prompt = "Donald Duck" model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda") feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir) processor = WhisperProcessor.from_pretrained(model_dir) prompt_ids = processor.get_prompt_ids(prompt) input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features predicted_ids = model.generate(input_features.to("cuda"), prompt_ids=prompt_ids, num_beams=4) text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids] transcript = text[0] Output: The full name of Donald is Donald J. Trump Jr. Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Test-Case: Steps how to reproduce the issue. Audio contents: "The full name of Donald is Donald J. Trump Jr" prompt = "Donald Duck" model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda") feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir) processor = WhisperProcessor.from_pretrained(model_dir) prompt_ids = processor.get_prompt_ids(prompt) input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features predicted_ids = model.generate(input_features.to("cuda"), prompt_ids=prompt_ids, num_beams=4) text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids] transcript = text[0] Output: The full name of Donald is Donald J. Trump Jr. Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck ### Expected behavior It has to give either "The full name of Donald is Donald J. Trump" or "The full name of Donald is Donald Duck", not infinite no of prompt key words.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28822/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28821/comments
https://api.github.com/repos/huggingface/transformers/issues/28821/events
https://github.com/huggingface/transformers/pull/28821
2,113,189,186
PR_kwDOCUB6oc5lvNLD
28,821
Correct wav2vec2-bert inputs_to_logits_ratio
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28821). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "inputs_to_logit_ratio is:\r\n\r\n-> Inputs (raw audio) -> [32000; u32] (4s audio at 8kHz for instance)\r\n-> Logits [2000; u32] (Same 4s audio, just the convolutions has a reduction factor of 16 for instance, I don't remember the actual w2v2 ratio).\r\n\r\n\r\nUsually mel spectrogram also has a downsampling factor, but I don't remember on the top of my head the name of the controlling factor. Mel Spectro is basically some convolutions on the raw audio too. They just happen to be FFT.\r\n", "Thanks @Narsil, the ratio is wrong then, I'll correct it", "Thanks for the review! All slow tests for the ASR pipeline pass! BTW, I'll make a subsequent PR to correct the fact that the long transcription script for s2s models (beside Whisper) is never called", "Very good refactoring ๐Ÿ˜‰ " ]
1,706
1,708
1,707
COLLABORATOR
null
# What does this PR do? `chunk_length_s` parameter doesn't yet work for w2v2-bert, when using the ASR pipeline. ```python from transformers import pipeline pipe = pipeline("automatic-speech-recognition", model="ylacombe/wav2vec2-bert-CV16-en-libri-cv", device=0) pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac", chunk_length_s=20.0, batch_size=16) ``` Turns out the `inputs_to_logits_ratio` wasn't working properly. I'm not totally sure that I've chosen the right input ratio so I'd like @Narsil's opinion on this. The original W2V2 uses a [learned feature extractor](https://github.com/huggingface/transformers/blob/abbffc4525566a48a9733639797c812301218b83/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L486) to downsample the raw audio waveform to a more reasonable dimension, using a series of convolution layer. However, w2v2-bert does that feature reduction thanks to a mel-spectrogram transformation; If I understood `inputs_to_logits_ratio`, it only computes the hidden size after this feature reduction, which in the case of w2v2-bert doesn't exist. So I've put `inputs_to_logits_ratio=1`. I could very be wrong on this, WDYT @Narsil ? cc @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28821/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28821", "html_url": "https://github.com/huggingface/transformers/pull/28821", "diff_url": "https://github.com/huggingface/transformers/pull/28821.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28821.patch", "merged_at": 1707138887000 }
https://api.github.com/repos/huggingface/transformers/issues/28820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28820/comments
https://api.github.com/repos/huggingface/transformers/issues/28820/events
https://github.com/huggingface/transformers/pull/28820
2,113,110,830
PR_kwDOCUB6oc5lu7XH
28,820
[docs] HfQuantizer
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28820). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
MEMBER
null
A follow-up PR to tidy up some things for the `HfQuantizer`: ➕ `HfQuantizer` to the API docs and links to the relevant files for faster navigation for users ➖ I think all those emojis are a bit too much and distracting 😅
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28820", "html_url": "https://github.com/huggingface/transformers/pull/28820", "diff_url": "https://github.com/huggingface/transformers/pull/28820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28820.patch", "merged_at": 1706858538000 }
https://api.github.com/repos/huggingface/transformers/issues/28819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28819/comments
https://api.github.com/repos/huggingface/transformers/issues/28819/events
https://github.com/huggingface/transformers/pull/28819
2,112,998,190
PR_kwDOCUB6oc5luiPq
28,819
Add MusicGen Melody
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28819). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hey @sanchit-gandhi, thanks for the really relevant suggestions! I've addressed most of them! Could you give it another tour before asking a core maintainer's opinion ?\r\n\r\nBTW, I don't know yet how to efficiently initialize `audio_enc_to_dec_proj` that is part of `MusicgenMelodyForConditionalGeneration`, any opinion on this ? ", "It's mostly corrected, asking for @ArthurZucker review now !\r\nAlso cc @gante, could you take a quick look at the custom generation methods ? It's mostly the same as Musicgen but in a decoder-only settings!", "@ylacombe could you make sure CI's are green! ๐Ÿค— ", "Hey @ArthurZucker, should be okay now, there is still the documentation tests that won't pass yet, but it's related to #28905 !", "Thanks, I'll have to review tomorrow ๐Ÿ˜‰ ", "@ylacombe at a quick glance, the `generate` method looks the same as in MusicGen, which has been previously approved. Are there differences that you'd like me to review? ๐Ÿค— ", "> I'd like to request a follow up PR where a lot of this is abstracted out into smaller, more modular methods.\r\n\r\n@amyeroberts 100% agreed! However, I believe the ball is mostly on the `generate` side -- it should be made more flexible, such that enabling models like MusicGen becomes a < 100-line task.\r\n\r\nThe MusicGen PR had the exact same pattern :)" ]
1,706
1,708
null
COLLABORATOR
null
# What does this PR do? MusicGen Melody was released at the same time than the "o.g" MusicGen that has already been integrated to [`transformers`](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/musicgen#overview). Contrarily to the already integrated model, you can condition the generation with an audio prompt (instead of continuation of the audio prompt). **Main conceptual difference**-> we no longer use cross-attention to condition the generation with the text/audio prompt, but instead we concatenate the text/audio prompt to the decoder hidden states. This makes the model a bit simpler, since it's no longer a "proper" encoder-decoder architecture but a decoder-only that can be conditioned (a bit like Fuyu). Note that there are 3 key "modalities": -> the prompt text that is passed through a text encoder model. -> the audio prompt that is processed by the feature extractor to give a chromagram. -> the musicgen decoder, that generate [Encodec](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/encodec#overview) codes. Why is this model interesting? 1. Audio prompting instead of audio generation gives really interesting generation 2. Musicgen is a decoder-only model, and is difficult to train using the original library. I ideally plan to add training capabilities to the model. cc @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28819", "html_url": "https://github.com/huggingface/transformers/pull/28819", "diff_url": "https://github.com/huggingface/transformers/pull/28819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28819.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28818/comments
https://api.github.com/repos/huggingface/transformers/issues/28818/events
https://github.com/huggingface/transformers/pull/28818
2,112,216,285
PR_kwDOCUB6oc5lr03q
28,818
[`OWL-VIT`] Added sdpa attention
{ "login": "nileshkokane01", "id": 8201108, "node_id": "MDQ6VXNlcjgyMDExMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nileshkokane01", "html_url": "https://github.com/nileshkokane01", "followers_url": "https://api.github.com/users/nileshkokane01/followers", "following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}", "gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}", "starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions", "organizations_url": "https://api.github.com/users/nileshkokane01/orgs", "repos_url": "https://api.github.com/users/nileshkokane01/repos", "events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}", "received_events_url": "https://api.github.com/users/nileshkokane01/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I have added an initial draft for sdpa attention, but I guess, more changes are needed as the OWL-ViT is a bit different compared to llama or Mistral.\r\n\r\nCan you please point me out a similar model close to OWL-ViT or rather let me know the additional changes required in the file. Also, casual_attention_mask is not handled correctly - have no clue how to handle. Additionally, a corresponding test case is also necessary.", "fyi @younesbelkada ", "@younesbelkada ,\r\nI'm trying to solve the errors. I'll let you know when its all ready or if I need any assistance.", "@younesbelkada ,\r\n\r\nI get the following error since the batch size is dropped, and therefore the dimensionality is not matching. Any clues ? \r\n\r\n`\r\n[192, 16, 16] doesn't match the broadcast shape [48, 192, 16 ,16]\r\n`\r\n\r\nAlso causal_attention_mask is not used at all in sdpa; don't know how to handle it on below line. \r\n\r\nhttps://github.com/nileshkokane01/transformers/blob/sdpa_for_OWL_ViT/src/transformers/models/owlvit/modeling_owlvit.py#L484 ", "@younesbelkada ,\r\n\r\nI sought of tried to fix the dimensionality mismatch for batch size , but couldn't figure out. Any clue ? \r\n```python \r\nRuntimeError: output with shape \r\n[192, 16, 16] doesn't match the broadcast shape [48, 192, 16, 16]\r\n```\r\nwith these 11 test seems to fail.\r\n", "Hi @nileshkokane01 \r\nThanks for getting back ! For that I need to deep dive into your branch and try to fix things, I will do that in the next days ๐Ÿ™ " ]
1,706
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR add sdpa attention for OWL-ViT. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28103 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @younesbelkada Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28818", "html_url": "https://github.com/huggingface/transformers/pull/28818", "diff_url": "https://github.com/huggingface/transformers/pull/28818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28818.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28817/comments
https://api.github.com/repos/huggingface/transformers/issues/28817/events
https://github.com/huggingface/transformers/issues/28817
2,112,124,997
I_kwDOCUB6oc595HhF
28,817
Populate torch_dtype from a model to a pipeline
{ "login": "B-Step62", "id": 31463517, "node_id": "MDQ6VXNlcjMxNDYzNTE3", "avatar_url": "https://avatars.githubusercontent.com/u/31463517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/B-Step62", "html_url": "https://github.com/B-Step62", "followers_url": "https://api.github.com/users/B-Step62/followers", "following_url": "https://api.github.com/users/B-Step62/following{/other_user}", "gists_url": "https://api.github.com/users/B-Step62/gists{/gist_id}", "starred_url": "https://api.github.com/users/B-Step62/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/B-Step62/subscriptions", "organizations_url": "https://api.github.com/users/B-Step62/orgs", "repos_url": "https://api.github.com/users/B-Step62/repos", "events_url": "https://api.github.com/users/B-Step62/events{/privacy}", "received_events_url": "https://api.github.com/users/B-Step62/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "B-Step62", "id": 31463517, "node_id": "MDQ6VXNlcjMxNDYzNTE3", "avatar_url": "https://avatars.githubusercontent.com/u/31463517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/B-Step62", "html_url": "https://github.com/B-Step62", "followers_url": "https://api.github.com/users/B-Step62/followers", "following_url": "https://api.github.com/users/B-Step62/following{/other_user}", "gists_url": "https://api.github.com/users/B-Step62/gists{/gist_id}", "starred_url": "https://api.github.com/users/B-Step62/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/B-Step62/subscriptions", "organizations_url": "https://api.github.com/users/B-Step62/orgs", "repos_url": "https://api.github.com/users/B-Step62/repos", "events_url": "https://api.github.com/users/B-Step62/events{/privacy}", "received_events_url": "https://api.github.com/users/B-Step62/received_events", "type": "User", "site_admin": false }
[ { "login": "B-Step62", "id": 31463517, "node_id": "MDQ6VXNlcjMxNDYzNTE3", "avatar_url": "https://avatars.githubusercontent.com/u/31463517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/B-Step62", "html_url": "https://github.com/B-Step62", "followers_url": "https://api.github.com/users/B-Step62/followers", "following_url": "https://api.github.com/users/B-Step62/following{/other_user}", "gists_url": "https://api.github.com/users/B-Step62/gists{/gist_id}", "starred_url": "https://api.github.com/users/B-Step62/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/B-Step62/subscriptions", "organizations_url": "https://api.github.com/users/B-Step62/orgs", "repos_url": "https://api.github.com/users/B-Step62/repos", "events_url": "https://api.github.com/users/B-Step62/events{/privacy}", "received_events_url": "https://api.github.com/users/B-Step62/received_events", "type": "User", "site_admin": false } ]
[ "cc @Rocketknight1 WDYT? Sounds good to me ", "This sounds like a safe assumption to me too, though obviously I'd like to confirm that with some tests! I'm in favour of the PR if you're happy to open it @B-Step62 ", "@ArthurZucker @Rocketknight1 Great! I will open a PR soon, in the meantime could you assign the issue to me?", "@B-Step62 Done!", "cc @Rocketknight1 we usually don't assign issues, and rather let the code talk: if a PR is open and pinned then that means someone is working on something and we can check the progress ๐Ÿ˜‰ ", "Hi @Rocketknight1 @ArthurZucker! I just opened a PR ^, please take a look whenever you have time, thanks!" ]
1,706
1,707
null
CONTRIBUTOR
null
### Feature request When constructing a pipeline object from a model and a tokenizer, the pipeline doesn't inherit the `torch_dtype` field from the underlying model. ``` model = AutoModelForCausalLM.from_pretrained("t5-small", torch_dtype = torch.bfloat16) pipeline = pipeline(model=model, task="text-generation", tokenizer=...) print(pipeline.torch_dtype) => None ``` However, it would be more convenient if the constructor extract the dtype from the model and populate it to pipeline's `torch_dtype` field. I think it's safe to assume the store model's dtype as pipeline's `torch_dtype` based on the documentation. > Sent directly as model_kwargs (just a simpler shortcut) to use the available precision for this model (torch.float16, torch.bfloat16, โ€ฆ or "auto"). We should be able to determine model's dtype either from `model.config.torch_dtype` or `next(model.parameters()).dtype`. ### Motivation I'm a maintainer of [MLflow](https://github.com/mlflow/mlflow/tree/master) and we have a logic to save metadata of Transformers pipeline, such as torch_dtype, task, etc. Since the pipeline doesn't populate `torch_dtype` field from the model, we need to check the underlying model's parameters. While we've implemented [a custom extraction logic](https://github.com/mlflow/mlflow/pull/10979) in our code base, I think this capability could be beneficial for other users of Transformers as well. ### Your contribution I can submit a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28817/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28816/comments
https://api.github.com/repos/huggingface/transformers/issues/28816/events
https://github.com/huggingface/transformers/pull/28816
2,112,120,604
PR_kwDOCUB6oc5lrfWr
28,816
Don't use a subset in test fetcher if on `main` branch
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Let me know if you want me to explore the option to block a PR being merged if the last commit message is not of the form \r\n\r\n> [no_filter] xxx yyy ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
null
COLLABORATOR
null
# What does this PR do? [I don't like surprise, like you do I guess] 😉 Don't select a subset from the detected tests to run when they are many - if we are on the main branch. This could detect any issue as early as at the merge time, not at the nightly run.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28816", "html_url": "https://github.com/huggingface/transformers/pull/28816", "diff_url": "https://github.com/huggingface/transformers/pull/28816.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28816.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28815/comments
https://api.github.com/repos/huggingface/transformers/issues/28815/events
https://github.com/huggingface/transformers/issues/28815
2,112,073,298
I_kwDOCUB6oc59465S
28,815
codellama/CodeLlama-34b Fine-Tune Evaluation
{ "login": "sanipanwala", "id": 44312637, "node_id": "MDQ6VXNlcjQ0MzEyNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/44312637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanipanwala", "html_url": "https://github.com/sanipanwala", "followers_url": "https://api.github.com/users/sanipanwala/followers", "following_url": "https://api.github.com/users/sanipanwala/following{/other_user}", "gists_url": "https://api.github.com/users/sanipanwala/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanipanwala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanipanwala/subscriptions", "organizations_url": "https://api.github.com/users/sanipanwala/orgs", "repos_url": "https://api.github.com/users/sanipanwala/repos", "events_url": "https://api.github.com/users/sanipanwala/events{/privacy}", "received_events_url": "https://api.github.com/users/sanipanwala/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada safetensors + peft", "hi @sanipanwala \r\ndo you face the same issue when you update `peft` and `safetensors` ? `pip install -U peft safetensors`", "@younesbelkada ,\r\n\r\nYes, I have created a fresh conda environment and installed all the latest packages.\r\n\r\nThanks,", "Have you used torch.compile by any chance? It seems to be a duplicate of https://github.com/huggingface/transformers/issues/27397", "Hi @younesbelkada ,\r\n\r\nI'm not using torch.compile but if I'm using torch.compile before trainer.train() It gives me the below exception.\r\n\r\n```\r\n raise RuntimeError(\"Dynamo is not supported on Python 3.12+\")\r\nRuntimeError: Dynamo is not supported on Python 3.12+\r\n```\r\n\r\nThanks.\r\n" ]
1,706
1,706
null
NONE
null
Hello, I have done training and saved the model, and adapter config file on the local disk. When I load the model from the local disk again to generate the output I get the below error. Anyone can help me with this issue? `File "PythonV2.py", line 11, in <module> model = AutoPeftModelForCausalLM.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python3.12/site-packages/peft/auto.py", line 127, in from_pretrained return cls._target_peft_class.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python3.12/site-packages/peft/peft_model.py", line 354, in from_pretrained model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs) File "/python3.12/site-packages/peft/peft_model.py", line 695, in load_adapter adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python3.12/site-packages/peft/utils/save_and_load.py", line 313, in load_peft_weights adapters_weights = safe_load_file(filename, device=device) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python3.12/site-packages/safetensors/torch.py", line 308, in load_file with safe_open(filename, framework="pt", device=device) as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization ` Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28815/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28814/comments
https://api.github.com/repos/huggingface/transformers/issues/28814/events
https://github.com/huggingface/transformers/issues/28814
2,112,036,474
I_kwDOCUB6oc594x56
28,814
Bug in whisper finetuning tutorial? "Multiple languages detected when trying to predict..."
{ "login": "SethvdAxe", "id": 155735980, "node_id": "U_kgDOCUhXrA", "avatar_url": "https://avatars.githubusercontent.com/u/155735980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SethvdAxe", "html_url": "https://github.com/SethvdAxe", "followers_url": "https://api.github.com/users/SethvdAxe/followers", "following_url": "https://api.github.com/users/SethvdAxe/following{/other_user}", "gists_url": "https://api.github.com/users/SethvdAxe/gists{/gist_id}", "starred_url": "https://api.github.com/users/SethvdAxe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SethvdAxe/subscriptions", "organizations_url": "https://api.github.com/users/SethvdAxe/orgs", "repos_url": "https://api.github.com/users/SethvdAxe/repos", "events_url": "https://api.github.com/users/SethvdAxe/events{/privacy}", "received_events_url": "https://api.github.com/users/SethvdAxe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Ok, can confirm that on 4.37.2 this bug does not appear.\r\nSomething to do with https://github.com/huggingface/transformers/pull/28687 I guess?", "cc @patrickvonplaten as well", "I to have the same error. Verified my dataset, this is 1 language. ", "Sorry for being a bit late here. Yes this error is expected, we've recently changed the default behavior to language detection when not specifying which language is to be evaluated. \r\n\r\nIf you train your model on Hindi as shown in the notebook, can you make sure to pass:\r\n\r\n```diff\r\n- eval_results = trainer.evaluate()\r\n+ eval_results = trainer.evaluate(language=\"hi\")\r\n```\r\n\r\n<img width=\"724\" alt=\"Screenshot 2024-02-09 at 16 28 40\" src=\"https://github.com/huggingface/transformers/assets/23423619/d2a47388-df2e-4503-9238-a7fb94eb9a6f\">\r\n\r\nso that the model doesn't try to detect the language it has to transcribe? \r\n\r\n", "@sanchit-gandhi we should probably also make sure to install `accelerate` in the notebook (newer versions of Transformes require accelerate for training) and I'd say we also pin transformers in the blog no? It's currently set to \"main\" of Transformers", "I am getting a similar error during training. Any help is appreciated. \r\n\r\n<img width=\"1072\" alt=\"Screenshot 2024-02-12 at 12 17 32\" src=\"https://github.com/huggingface/transformers/assets/19801035/8714ca98-e567-49ed-8e6e-255224097eca\">\r\n", "Hey @rishabhjain16,\r\n\r\nAh yes indeed the training loop runs the evaluation loop inside and sadly doesn't let the user pass any generation key word params such as `\"language\"`. You can however fix this easily by replacing the following cell in the notebook: \r\n\r\n<img width=\"815\" alt=\"Screenshot 2024-02-12 at 19 15 02\" src=\"https://github.com/huggingface/transformers/assets/23423619/009fcee6-367a-4641-ac5e-0c671e43a1e3\">\r\n\r\n\r\nwith:\r\n\r\n```py\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\nmodel.generation_config.language = \"hi\" # define your language of choice here\r\n```\r\n\r\nand the training should work! ", "> Hey @rishabhjain16,\r\n> \r\n> Ah yes indeed the training loop runs the evaluation loop inside and sadly doesn't let the user pass any generation key word params such as `\"language\"`. 
You can however fix this easily by replacing the following cell in the notebook:\r\n> \r\n> <img alt=\"Screenshot 2024-02-12 at 19 15 02\" width=\"815\" src=\"https://private-user-images.githubusercontent.com/23423619/304185019-009fcee6-367a-4641-ac5e-0c671e43a1e3.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MDc4MTY1MTEsIm5iZiI6MTcwNzgxNjIxMSwicGF0aCI6Ii8yMzQyMzYxOS8zMDQxODUwMTktMDA5ZmNlZTYtMzY3YS00NjQxLWFjNWUtMGM2NzFlNDNhMWUzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAyMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMjEzVDA5MjMzMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWEzNDFiMjk1MjQ0ZTk5OWM0OTc4OTkxZTRhNGY0NDkzY2U5MmE3YzQ3ODExY2RjM2FhMTY0M2NjMTM0NTdhYWMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.jV6ioBhh1ODPxvA7BwbaeTQ0nMlsWSTQIQuXd5zWJsM\">\r\n> with:\r\n> \r\n> ```python\r\n> from transformers import WhisperForConditionalGeneration\r\n> \r\n> model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\n> model.generation_config.language = \"hi\" # define your language of choice here\r\n> ```\r\n> \r\n> and the training should work!\r\n\r\nThank you @patrickvonplaten for getting back to me so quickly. I will give it a try. ", "Hi,everyone \r\nI have a problem with my program.\r\n\r\nI added this program\r\n\r\n`from transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\nmodel.generation_config.language = \"ja\" # define your language of choice here`\r\n\r\nThen,Erros occurred.\r\nMay you help me !\r\n\r\n`---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-30-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 trainer.train()\r\n\r\n17 frames\r\n[/usr/local/lib/python3.10/dist-packages/datasets/utils/_dill.py](https://localhost:8080/#) in save(self, obj, save_persistent_id)\r\n 39 import spacy # type: ignore\r\n 40 \r\n---> 41 if issubclass(obj_type, spacy.Language):\r\n 42 pklregister(obj_type)(_save_spacyLanguage)\r\n 43 if \"tiktoken\" in sys.modules:\r\n\r\nAttributeError: module 'spacy' has no attribute 'Language'`\r\n\r\n![2024-02-20 16 03 57 colab research google com 5c9c960678d5](https://github.com/huggingface/transformers/assets/120359597/9bfd5ba8-db38-4e86-ada6-dc5598a81292)\r\n", "Hey! The error seems to point to a `dataset` issue. Would recommend to upgrade that. Without a proper reproducer there is nothing we can do for you ๐Ÿค— " ]
1,706
1,708
null
NONE
null
### System Info Transformers version: 4.38.0.dev0 Python version: Python3.10 venv (local) Platform: MacOS Venture 13.5 ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Thank you for the amazing whisper finetuning tutorial at: https://huggingface.co/blog/fine-tune-whisper When I download the ipynb and run it locally it runs fine. However, when I change a single line (the last line) from: ``` trainer.train() ``` to: ``` eval_results = trainer.evaluate() ``` I get the following error: ``` ValueError: Multiple languages detected when trying to predict the most likely target language for transcription. ``` Full error log: ``` { "name": "ValueError", "message": "Multiple languages detected when trying to predict the most likely target language for transcription. It is currently not supported to transcribe to different languages in a single batch. Please make sure to either force a single language by passing `language='...'` or make sure all input audio is of the same language.", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[21], line 1 ----> 1 eval_results = trainer.evaluate() 2 print(eval_results) File ~some_path/venv/lib/python3.10/site-packages/transformers/trainer_seq2seq.py:166, in Seq2SeqTrainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, **gen_kwargs) 164 self.gather_function = self.accelerator.gather 165 self._gen_kwargs = gen_kwargs --> 166 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File ~some_path/venv/lib/python3.10/site-packages/transformers/trainer.py:3136, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix) 3133 start_time = time.time() 3135 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop -> 3136 output = eval_loop( 3137 eval_dataloader, 3138 description=\"Evaluation\", 3139 # No point gathering the predictions if there are no metrics, otherwise we defer to 3140 # self.args.prediction_loss_only 3141 prediction_loss_only=True if self.compute_metrics is None else None, 3142 ignore_keys=ignore_keys, 3143 metric_key_prefix=metric_key_prefix, 3144 ) 3146 total_batch_size = self.args.eval_batch_size * self.args.world_size 3147 if f\"{metric_key_prefix}_jit_compilation_time\" in output.metrics: File ~some_path/venv/lib/python3.10/site-packages/transformers/trainer.py:3325, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 3322 batch_size = observed_batch_size 3324 # Prediction step -> 3325 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) 3326 main_input_name = getattr(self.model, \"main_input_name\", \"input_ids\") 3327 inputs_decode = self._prepare_input(inputs[main_input_name]) if args.include_inputs_for_metrics else None File ~some_path/venv/lib/python3.10/site-packages/transformers/trainer_seq2seq.py:296, in Seq2SeqTrainer.prediction_step(self, model, inputs, prediction_loss_only, ignore_keys, **gen_kwargs) 288 if ( 289 \"labels\" in generation_inputs 290 and \"decoder_input_ids\" in generation_inputs 291 and generation_inputs[\"labels\"].shape == generation_inputs[\"decoder_input_ids\"].shape 292 ): 293 
generation_inputs = { 294 k: v for k, v in inputs.items() if k not in (\"decoder_input_ids\", \"decoder_attention_mask\") 295 } --> 296 generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs) 298 # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop 299 # TODO: remove this hack when the legacy code that initializes generation_config from a model config is 300 # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183 301 if self.model.generation_config._from_model_config: File ~some_path/venv/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:533, in WhisperGenerationMixin.generate(self, input_features, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_timestamps, task, language, is_multilingual, prompt_ids, prompt_condition_type, condition_on_prev_tokens, temperature, compression_ratio_threshold, logprob_threshold, no_speech_threshold, num_segment_frames, attention_mask, time_precision, return_token_timestamps, return_segments, return_dict_in_generate, **kwargs) 527 self._set_prompt_condition_type( 528 generation_config=generation_config, 529 prompt_condition_type=prompt_condition_type, 530 ) 532 # pass self.config for backward compatibility --> 533 init_tokens = self._retrieve_init_tokens( 534 input_features, 535 generation_config=generation_config, 536 config=self.config, 537 num_segment_frames=num_segment_frames, 538 kwargs=kwargs, 539 ) 540 # TODO(Sanchit) - passing `decoder_input_ids` is deprecated. One should use `prompt_ids` instead 541 # This function should be be removed in v4.39 542 self._check_decoder_input_ids( 543 prompt_ids=prompt_ids, init_tokens=init_tokens, is_shortform=is_shortform, kwargs=kwargs 544 ) File ~some_path/venv/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:1166, in WhisperGenerationMixin._retrieve_init_tokens(self, input_features, generation_config, config, num_segment_frames, kwargs) 1158 lang_ids = self.detect_language( 1159 input_features=input_features, 1160 encoder_outputs=kwargs.get(\"encoder_outputs\", None), 1161 generation_config=generation_config, 1162 num_segment_frames=num_segment_frames, 1163 ) 1165 if torch.unique(lang_ids).shape[0] > 1: -> 1166 raise ValueError( 1167 \"Multiple languages detected when trying to predict the most likely target language for transcription. It is currently not supported to transcribe to different languages in a single batch. Please make sure to either force a single language by passing `language='...'` or make sure all input audio is of the same language.\" 1168 ) 1170 lang_id = lang_ids[0].item() 1172 # append or replace lang_id to init_tokens ValueError: Multiple languages detected when trying to predict the most likely target language for transcription. It is currently not supported to transcribe to different languages in a single batch. Please make sure to either force a single language by passing `language='...'` or make sure all input audio is of the same language." } ``` Is this expected behaviour? Thank you kindly in advance. ### Expected behavior A normal evaluation run to evaluate the performance of the model on the language before starting to train it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28814/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28813/comments
https://api.github.com/repos/huggingface/transformers/issues/28813/events
https://github.com/huggingface/transformers/issues/28813
2,112,025,332
I_kwDOCUB6oc594vL0
28,813
Adding Gradient Checkpointing and Flash Attention 2 implementation to VisionTextDualEncoderModel
{ "login": "Syrinechen", "id": 127806472, "node_id": "U_kgDOB54sCA", "avatar_url": "https://avatars.githubusercontent.com/u/127806472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Syrinechen", "html_url": "https://github.com/Syrinechen", "followers_url": "https://api.github.com/users/Syrinechen/followers", "following_url": "https://api.github.com/users/Syrinechen/following{/other_user}", "gists_url": "https://api.github.com/users/Syrinechen/gists{/gist_id}", "starred_url": "https://api.github.com/users/Syrinechen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Syrinechen/subscriptions", "organizations_url": "https://api.github.com/users/Syrinechen/orgs", "repos_url": "https://api.github.com/users/Syrinechen/repos", "events_url": "https://api.github.com/users/Syrinechen/events{/privacy}", "received_events_url": "https://api.github.com/users/Syrinechen/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" } ]
open
false
null
[]
[]
1,706
1,706
null
NONE
null
VisionTextDualEncoderModel allows training any image and text encoders with a contrastive loss. It would be convenient to add gradient checkpointing as well as Flash Attention 2 support to optimize training. Thank you
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28813/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28812/comments
https://api.github.com/repos/huggingface/transformers/issues/28812/events
https://github.com/huggingface/transformers/pull/28812
2,111,988,484
PR_kwDOCUB6oc5lrCYA
28,812
fixing a typo!
{ "login": "mohammad-gh009", "id": 75425392, "node_id": "MDQ6VXNlcjc1NDI1Mzky", "avatar_url": "https://avatars.githubusercontent.com/u/75425392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mohammad-gh009", "html_url": "https://github.com/mohammad-gh009", "followers_url": "https://api.github.com/users/mohammad-gh009/followers", "following_url": "https://api.github.com/users/mohammad-gh009/following{/other_user}", "gists_url": "https://api.github.com/users/mohammad-gh009/gists{/gist_id}", "starred_url": "https://api.github.com/users/mohammad-gh009/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mohammad-gh009/subscriptions", "organizations_url": "https://api.github.com/users/mohammad-gh009/orgs", "repos_url": "https://api.github.com/users/mohammad-gh009/repos", "events_url": "https://api.github.com/users/mohammad-gh009/events{/privacy}", "received_events_url": "https://api.github.com/users/mohammad-gh009/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
NONE
null
I fixed a typo in the tutorial. kinda --> kind of # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28812", "html_url": "https://github.com/huggingface/transformers/pull/28812", "diff_url": "https://github.com/huggingface/transformers/pull/28812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28812.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28811/comments
https://api.github.com/repos/huggingface/transformers/issues/28811/events
https://github.com/huggingface/transformers/pull/28811
2,111,791,218
PR_kwDOCUB6oc5lqWDF
28,811
fix dynamic_module import err
{ "login": "Fazziekey", "id": 55798671, "node_id": "MDQ6VXNlcjU1Nzk4Njcx", "avatar_url": "https://avatars.githubusercontent.com/u/55798671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fazziekey", "html_url": "https://github.com/Fazziekey", "followers_url": "https://api.github.com/users/Fazziekey/followers", "following_url": "https://api.github.com/users/Fazziekey/following{/other_user}", "gists_url": "https://api.github.com/users/Fazziekey/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fazziekey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fazziekey/subscriptions", "organizations_url": "https://api.github.com/users/Fazziekey/orgs", "repos_url": "https://api.github.com/users/Fazziekey/repos", "events_url": "https://api.github.com/users/Fazziekey/events{/privacy}", "received_events_url": "https://api.github.com/users/Fazziekey/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@ArthurZucker ", "> Hey! I think that is not too bad. IMO we should rather check if the imports are protected or not / have a list of packages that can be ignored because this will no longer warn users for every other packages that might be missing :(\r\n\r\nTheThe Ci is failed, I don't know why tf model will be failed\r\n![image](https://github.com/huggingface/transformers/assets/55798671/c04c0356-2a31-4602-af7e-156300a06acf)\r\n", "> Hey! I think that is not too bad. IMO we should rather check if the imports are protected or not / have a list of packages that can be ignored because this will no longer warn users for every other packages that might be mssing :(\r\n\r\nThatโ€˜s right, not all module is necessary" ]
1,706
1,706
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix the dynamic module import err for #28459, when user use custom model by `transformers.AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True)`, for example [Phi](https://huggingface.co/microsoft/phi-1_5/discussions/72), [DeepSeekMOE](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base), will get error ``` ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn` python-BaseException ``` Which caused by import flash attention, however, some device for example Mac, don't support CUDA, but can run model on CPU, user will be blocked by this problem. I think it should be a warning rather than error. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28811", "html_url": "https://github.com/huggingface/transformers/pull/28811", "diff_url": "https://github.com/huggingface/transformers/pull/28811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28811.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28810/comments
https://api.github.com/repos/huggingface/transformers/issues/28810/events
https://github.com/huggingface/transformers/pull/28810
2,111,771,321
PR_kwDOCUB6oc5lqRoN
28,810
Add `StableLM`
{ "login": "jon-tow", "id": 41410219, "node_id": "MDQ6VXNlcjQxNDEwMjE5", "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jon-tow", "html_url": "https://github.com/jon-tow", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "repos_url": "https://api.github.com/users/jon-tow/repos", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for contributing! \r\nWe usually recommend to start by adding the model on the hub, to allow for a quick distribution! \r\nSee the tutorial [here](https://huggingface.co/docs/transformers/custom_models)", "Hello!\r\n\r\n> We usually recommend to start by adding the model on the hub, to allow for a quick distribution!\r\n\r\nThe model is already on the hub [here](https://huggingface.co/stabilityai/stablelm-3b-4e1t/tree/main) but uses custom modeling code. Is your suggestion to simply rename the `model_type` in the config.json and remove the custom implementation? Sorry if I'm misinterpreting this!", "No what I mean is I think it's fine to keep it on the hub! ๐Ÿค— \r\nWe usually go for an integration if this is really asked by the community ( lots of activity on the repo / lots of activity on the issue for adding it here etc!) \r\nThought it's really great that you want to contribute!\r\nIf you still want to add it, I would recommend you to make it as close as possible to other modelling files like Llama or Persimmon, and otherwise good that you created a repo for dev ๐Ÿ‘๐Ÿป ", "Hi, @ArthurZucker; thanks for the quick review! I'd like to point out that the recent commit https://github.com/huggingface/transformers/pull/28810/commits/097272f2272f545cf275c23e46cf7706e9bfac1f removes a copied-from comment from `StableLmModel` because `PersimmonModel` does notย yet support `flash-attn` and the added `_attn_implementation` field breaks the `make repo-consistency` check. Let me know if you suggest a workaround ๐Ÿ™ ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28810). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Could you rebase on main and make sure CIs are all green? ๐Ÿค— I can help if you can't finish all of them", "Can you please help with the `test_hubs` workflow? It errors with `FAILED tests/trainer/test_trainer.py::TrainerIntegrationWithHubTester::test_push_to_hub_with_saves_each_epoch - AssertionError: 'Training in progress, epoch 1' not found in ['Training in progress, epoch 3', 'Training in progress, epoch 2', 'initial commit']` ([here](https://app.circleci.com/pipelines/github/huggingface/transformers/84263/workflows/f575df75-a763-4730-87af-5a533829762b/jobs/1088603?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-checks-link&utm_content=summary)). Not sure how to fix this one up ๐Ÿ˜… Thanks!" ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? This PR adds modeling support for [`StableLM 3B 4E1T`](https://huggingface.co/stabilityai/stablelm-3b-4e1t) (as well as [`StableLM 2 1.6B`](https://huggingface.co/stabilityai/stablelm-2-1_6b)) based models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. 
- TensorFlow: @Rocketknight1 --> ## Notes **TODO**: The current online implementation uses an early naming scheme for the [`model_type`](https://huggingface.co/stabilityai/stablelm-3b-4e1t/blob/e3be657f4d1b78eb7520637ba922448a1ee456bd/config.json#L16) ```json "model_type": "stablelm_epoch", ``` I've temporarily created a development model repository https://huggingface.co/jon-tow/stablelm-3b-4e1t-dev for unit testing and [config archive mapping](https://github.com/jon-tow/transformers/blob/caf38d1659333ae826c0f64d2e3bec45e008081b/src/transformers/models/stablelm/configuration_stablelm.py#L24) which need to be updated before any merging. Is there a better way to handle this? I've noticed a similar issue in [this](https://github.com/huggingface/transformers/pull/26170#discussion_r1368903469) `Phi` model PR discussion.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28810/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28810", "html_url": "https://github.com/huggingface/transformers/pull/28810", "diff_url": "https://github.com/huggingface/transformers/pull/28810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28810.patch", "merged_at": 1707891318000 }
https://api.github.com/repos/huggingface/transformers/issues/28809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28809/comments
https://api.github.com/repos/huggingface/transformers/issues/28809/events
https://github.com/huggingface/transformers/pull/28809
2,111,763,732
PR_kwDOCUB6oc5lqP8x
28,809
[`TorchFp8Quantizer`] Attempt to integrate `pytorch-labs/float8_experimental` in HF transformers
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28809). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
null
CONTRIBUTOR
null
# What does this PR do? As per title, this PR attempts to integrate https://github.com/pytorch-labs/float8_experimental in transformers for efficient FP8 inference. To potentially combine it with https://github.com/huggingface/transformers/pull/27931 for faster text generation on newer hardware (compute capability >= 9.0), I need to address these points first: - [ ] are fp8 layers serializable? - [ ] make sure the code runs - [ ] end-to-end training script - [ ] Docs - [ ] add tests Currently the target API is: ```python import torch from transformers import AutoModelForCausalLM, TorchFp8Config, AutoTokenizer quantization_config = TorchFp8Config( linear_type="dynamic", modules_to_not_convert=["lm_head"] ) model_id = "facebook/opt-125m" model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, device_map="auto", torch_dtype=torch.float16, ) tokenizer = AutoTokenizer.from_pretrained(model_id) text = "Hello my name is" inputs = tokenizer(text, return_tensors='pt').to(0) out = model.generate(**inputs, max_new_tokens=50) print(tokenizer.decode(out[0])) ``` cc @ArthurZucker @drisspg @vkuzo @pacman100 @SunMarc @Titus-von-Koeller
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28809/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28809/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28809", "html_url": "https://github.com/huggingface/transformers/pull/28809", "diff_url": "https://github.com/huggingface/transformers/pull/28809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28809.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28808/comments
https://api.github.com/repos/huggingface/transformers/issues/28808/events
https://github.com/huggingface/transformers/issues/28808
2,111,643,277
I_kwDOCUB6oc593R6N
28,808
It's an AlignModel or DeepSpeed ZeRO-3 bug.
{ "login": "necrophagists", "id": 120618287, "node_id": "U_kgDOBzB9Lw", "avatar_url": "https://avatars.githubusercontent.com/u/120618287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/necrophagists", "html_url": "https://github.com/necrophagists", "followers_url": "https://api.github.com/users/necrophagists/followers", "following_url": "https://api.github.com/users/necrophagists/following{/other_user}", "gists_url": "https://api.github.com/users/necrophagists/gists{/gist_id}", "starred_url": "https://api.github.com/users/necrophagists/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/necrophagists/subscriptions", "organizations_url": "https://api.github.com/users/necrophagists/orgs", "repos_url": "https://api.github.com/users/necrophagists/repos", "events_url": "https://api.github.com/users/necrophagists/events{/privacy}", "received_events_url": "https://api.github.com/users/necrophagists/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Could you give a reproducible snippet? ๐Ÿค— ", "> Hey! Could you give a reproducible snippet? ๐Ÿค—\r\n\r\nSorry, this is a company project so I can't provide you with the relevant code. I recently located here in `modeling_utils.py`๏ผš\r\n```\r\n if is_deepspeed_zero3_enabled():\r\n import deepspeed\r\n logger.info(\"Detected DeepSpeed ZeRO-3: activating zero.init() for this model\")\r\n init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts\r\n```\r\nthis should be the place that is causing the error to be reported, in addition I've noticed that the shape of the `module.text_projection.weight` is 0 when the error is being reported (normally it's 480ร—680). Can you give some clues?\r\n\r\nHere's my zero3 config๏ผš\r\n```\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"train_batch_size\": \"auto\",\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\",\r\n \"stage3_prefetch_bucket_size\": \"auto\",\r\n \"stage3_param_persistence_threshold\": \"auto\",\r\n \"stage3_max_live_parameters\": 1e9,\r\n \"stage3_max_reuse_distance\": 1e9,\r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n }\r\n}\r\n```", "I'll let @SunMarc and @pacman100 give you their insight on this!" ]
1,706
1,706
null
NONE
null
### System Info When I try to load the AlignModel weights locally and train them using zero3, I get the following error๏ผš ``` File "/opt/licy/MyVLM/model/builder.py", line 152, in load_model model =AlignModel.from_pretrained(self.args.vm_path) File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3307, in from_pretrained ) = cls._load_pretrained_model( File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3559, in _load_pretrained_model model.apply(model._initialize_weights) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 885, in apply fn(self) File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 1388, in _initialize_weights self._init_weights(module) File "/usr/local/lib/python3.8/dist-packages/transformers/models/align/modeling_align.py", line 1189, in _init_weights nn.init.xavier_uniform_(module.text_projection.weight) File "/usr/local/lib/python3.8/dist-packages/torch/nn/init.py", line 323, in xavier_uniform_ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor) File "/usr/local/lib/python3.8/dist-packages/torch/nn/init.py", line 287, in _calculate_fan_in_and_fan_out raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions") ``` Switching to zero2 doesn't produce an error; also, ConvnextModel and ClipVisionModel don't report an error when trained under zero3, so I'm thinking that maybe there's a bug in AlignModel? @amyeroberts @pacman100 @muellerz ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1.`model =AlignModel.from_pretrained(path)` 2.use zero3 to train model 3.get error about xavier_init ### Expected behavior The expected behavior is to be able to load models
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28808/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28807/comments
https://api.github.com/repos/huggingface/transformers/issues/28807/events
https://github.com/huggingface/transformers/issues/28807
2,111,597,249
I_kwDOCUB6oc593GrB
28,807
GPT2 minicons surprisal: IndexError: index out of range in self
{ "login": "joyce9936", "id": 119527282, "node_id": "U_kgDOBx_Xcg", "avatar_url": "https://avatars.githubusercontent.com/u/119527282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joyce9936", "html_url": "https://github.com/joyce9936", "followers_url": "https://api.github.com/users/joyce9936/followers", "following_url": "https://api.github.com/users/joyce9936/following{/other_user}", "gists_url": "https://api.github.com/users/joyce9936/gists{/gist_id}", "starred_url": "https://api.github.com/users/joyce9936/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joyce9936/subscriptions", "organizations_url": "https://api.github.com/users/joyce9936/orgs", "repos_url": "https://api.github.com/users/joyce9936/repos", "events_url": "https://api.github.com/users/joyce9936/events{/privacy}", "received_events_url": "https://api.github.com/users/joyce9936/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry but this seems unrelated to transformers and rather related to the `scorer` library. " ]
1,706
1,706
1,706
NONE
null
### System Info I am trying to calculate surprisal values by feeding in a txt file with about 5000 sentences. But there is an error message I encounter: **IndexError: index out of range in self** Can anyone help with this issue? Thank you! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is the code: <img width="1337" alt="Screenshot 2024-01-31 at 9 35 06 PM" src="https://github.com/huggingface/transformers/assets/119527282/486ae5fc-2a9f-4f04-a518-99fba53a7775"> Here is the error message: <img width="1337" alt="Screenshot 2024-01-31 at 9 34 13 PM" src="https://github.com/huggingface/transformers/assets/119527282/8e50c012-276c-4a4d-810f-39a212280e55"> ### Expected behavior I would like to have the surprisal value for each word in the whole text file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28807/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28806/comments
https://api.github.com/repos/huggingface/transformers/issues/28806/events
https://github.com/huggingface/transformers/pull/28806
2,111,564,467
PR_kwDOCUB6oc5lpkdt
28,806
[docs] fix some bugs about parameter description
{ "login": "zspo", "id": 26846598, "node_id": "MDQ6VXNlcjI2ODQ2NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zspo", "html_url": "https://github.com/zspo", "followers_url": "https://api.github.com/users/zspo/followers", "following_url": "https://api.github.com/users/zspo/following{/other_user}", "gists_url": "https://api.github.com/users/zspo/gists{/gist_id}", "starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zspo/subscriptions", "organizations_url": "https://api.github.com/users/zspo/orgs", "repos_url": "https://api.github.com/users/zspo/repos", "events_url": "https://api.github.com/users/zspo/events{/privacy}", "received_events_url": "https://api.github.com/users/zspo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28806). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Fixes: 1. missing spaces in parameter descriptions 2. a missing parameter description @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28806/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28806", "html_url": "https://github.com/huggingface/transformers/pull/28806", "diff_url": "https://github.com/huggingface/transformers/pull/28806.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28806.patch", "merged_at": 1706806769000 }
https://api.github.com/repos/huggingface/transformers/issues/28805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28805/comments
https://api.github.com/repos/huggingface/transformers/issues/28805/events
https://github.com/huggingface/transformers/issues/28805
2,111,545,003
I_kwDOCUB6oc59256r
28,805
sequence_bias feature is not working for Whisper ASR model.
{ "login": "vchagari", "id": 10948110, "node_id": "MDQ6VXNlcjEwOTQ4MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/10948110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vchagari", "html_url": "https://github.com/vchagari", "followers_url": "https://api.github.com/users/vchagari/followers", "following_url": "https://api.github.com/users/vchagari/following{/other_user}", "gists_url": "https://api.github.com/users/vchagari/gists{/gist_id}", "starred_url": "https://api.github.com/users/vchagari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vchagari/subscriptions", "organizations_url": "https://api.github.com/users/vchagari/orgs", "repos_url": "https://api.github.com/users/vchagari/repos", "events_url": "https://api.github.com/users/vchagari/events{/privacy}", "received_events_url": "https://api.github.com/users/vchagari/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @vchagari ๐Ÿ‘‹ \r\n\r\nThe [same comment](https://github.com/huggingface/transformers/issues/28822) applies here :)" ]
1,706
1,707
null
NONE
null
### System Info Hi @sanchit-gandhi and @gante Sequence_bias feature is not working, I tried with similar example shown in the SequenceBiasLogitsProcessor class (https://github.com/huggingface/transformers/blob/7b2bd1fbbd50e57cf28013e2d0737912ecc0f2eb/src/transformers/generation/logits_process.py#L942) with fine-tuned Whisper ASR HF model, I recorded an audio and gave it to the Whisper ASR model with biasing terms like mentioned below, unfortunately i didn't see any effect in the output. **More details:** Transformers Commit: 1c7e5e236823cd38faac8115f96205a82c17fff9 Test-Case: Steps how to reproduce the issue. Audio contents: "The full name of Donald is Donald J. Trump Jr" sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0} model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda") feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir) processor = WhisperProcessor.from_pretrained(model_dir) input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features predicted_ids = model.generate(input_features.to("cuda"), sequence_bias=sequence_bias, num_beams=4) text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids] transcript = text[0] The output is still came as "The full name of Donald is Donald J Trump Jr" ### Who can help? @sanchit-gandhi and @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Test-Case: Steps how to reproduce the issue. Audio contents: "The full name of Donald is Donald J. Trump Jr" sequence_bias = {get_tokens_as_tuple("Donald Duck"): 10.0} model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda") feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir) processor = WhisperProcessor.from_pretrained(model_dir) input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features predicted_ids = model.generate(input_features.to("cuda"), sequence_bias=sequence_bias, num_beams=4) text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids] transcript = text[0] ### Expected behavior The full name of Donald is Donald Duck.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28805/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28804/comments
https://api.github.com/repos/huggingface/transformers/issues/28804/events
https://github.com/huggingface/transformers/pull/28804
2,111,349,806
PR_kwDOCUB6oc5lo0Uj
28,804
Add missing None check for hf_quantizer
{ "login": "jganitkevitch", "id": 190837, "node_id": "MDQ6VXNlcjE5MDgzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/190837?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jganitkevitch", "html_url": "https://github.com/jganitkevitch", "followers_url": "https://api.github.com/users/jganitkevitch/followers", "following_url": "https://api.github.com/users/jganitkevitch/following{/other_user}", "gists_url": "https://api.github.com/users/jganitkevitch/gists{/gist_id}", "starred_url": "https://api.github.com/users/jganitkevitch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jganitkevitch/subscriptions", "organizations_url": "https://api.github.com/users/jganitkevitch/orgs", "repos_url": "https://api.github.com/users/jganitkevitch/repos", "events_url": "https://api.github.com/users/jganitkevitch/events{/privacy}", "received_events_url": "https://api.github.com/users/jganitkevitch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@younesbelkada @ArthurZucker @poedator ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28804). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Added a test. PTAL โ€“ I'm not familiar with HF's testing guidelines, so happy to take suggestions.", "@ArthurZucker the test seems unrelated & flaky?", "Fixes #28831", "@jganitkevitch thank you for the quick PR, as this is a bit sensitive we'll merge now! " ]
1,706
1,706
1,706
CONTRIBUTOR
null
Adds a None check for hf_quantizer that otherwise can blow up when `from_pretrained` is called with `quantization_config=None`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28804", "html_url": "https://github.com/huggingface/transformers/pull/28804", "diff_url": "https://github.com/huggingface/transformers/pull/28804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28804.patch", "merged_at": 1706862852000 }
https://api.github.com/repos/huggingface/transformers/issues/28803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28803/comments
https://api.github.com/repos/huggingface/transformers/issues/28803/events
https://github.com/huggingface/transformers/issues/28803
2,111,240,846
I_kwDOCUB6oc591vqO
28,803
DeepSpeed ZeRO3 errors on config initialization
{ "login": "matthewdeng", "id": 3967392, "node_id": "MDQ6VXNlcjM5NjczOTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3967392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matthewdeng", "html_url": "https://github.com/matthewdeng", "followers_url": "https://api.github.com/users/matthewdeng/followers", "following_url": "https://api.github.com/users/matthewdeng/following{/other_user}", "gists_url": "https://api.github.com/users/matthewdeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/matthewdeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matthewdeng/subscriptions", "organizations_url": "https://api.github.com/users/matthewdeng/orgs", "repos_url": "https://api.github.com/users/matthewdeng/repos", "events_url": "https://api.github.com/users/matthewdeng/events{/privacy}", "received_events_url": "https://api.github.com/users/matthewdeng/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @pacman100 and @SunMarc " ]
1,706
1,706
null
NONE
null
### System Info `transformers-cli env`: - `transformers` version: 4.37.2 - Platform: Linux-6.2.0-1017-aws-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes **Relevant Dependencies:** ``` accelerate==0.26.1 deepspeed==0.12.3 ray==2.9.1 transformers==4.37.2 ``` ### Who can help? @pacman100 @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm running the following script on a `g4dn.12xlarge` instance. ```python import torch.distributed from transformers import AutoModel, TrainingArguments from ray.train import ScalingConfig from ray.train.torch import TorchTrainer def train_func(): assert torch.distributed.is_initialized(), "Torch Distributed must be initialized." deepspeed_config = { "zero_optimization": { "stage": 3, }, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", } train_args = TrainingArguments( output_dir="./", deepspeed=deepspeed_config, ) model = AutoModel.from_pretrained("bert-base-uncased") trainer = TorchTrainer( train_loop_per_worker=train_func, scaling_config=ScalingConfig( num_workers=2, use_gpu=True, ) ) trainer.fit() ``` This errors with: ``` File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/train/_internal/utils.py", line 118, in discard_return_wrapper train_func(*args, **kwargs) File "/home/ray/default/simple.py", line 22, in train_func model = AutoModel.from_pretrained("bert-base-uncased") File "/home/ray/anaconda3/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained return model_class.from_pretrained( File "/home/ray/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3583, in from_pretrained init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 859, in __init__ _ds_config = deepspeed.runtime.config.DeepSpeedConfig(config_dict_or_path, File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 781, in __init__ self._configure_train_batch_size() File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 959, in _configure_train_batch_size self._batch_assertion() File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 907, in _batch_assertion assert train_batch == micro_batch * grad_acc * self.world_size, ( AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 16 != 8 * 1 * 1 ``` I did some debugging and it seems like `world_size` is being set to 1 because `dist` is not initialized yet [here](https://github.com/microsoft/DeepSpeed/blob/24f20ef0a105d32f6085fe0d3b1c2f9324a6262c/deepspeed/runtime/config.py#L712-L720). I also did some bisection and saw that the error started occurring in `transformers==4.30.0` **Related Issues:** - https://github.com/microsoft/DeepSpeed/issues/3341 - this seems to be the exact same issue, but I haven't looked deep enough to understand if the issue lies in DeepSpeed or Transformers or Accelerate. ### Expected behavior The script should run without error and `DeepSpeed` distributed environment should be inherited from the existing Torch process group. The issue does not occur if I use ZeRO2. ```diff "zero_optimization": { - "stage": 3, + "stage": 2, }, ``` The issue can also be mitigated by manually initializing the DeepSpeed distributed environment with `deepspeed.init_distributed()`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28803/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28803/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28802/comments
https://api.github.com/repos/huggingface/transformers/issues/28802/events
https://github.com/huggingface/transformers/pull/28802
2,111,123,817
PR_kwDOCUB6oc5loDqD
28,802
[`BERT`] Add support for sdpa
{ "login": "hackyon", "id": 1557853, "node_id": "MDQ6VXNlcjE1NTc4NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackyon", "html_url": "https://github.com/hackyon", "followers_url": "https://api.github.com/users/hackyon/followers", "following_url": "https://api.github.com/users/hackyon/following{/other_user}", "gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackyon/subscriptions", "organizations_url": "https://api.github.com/users/hackyon/orgs", "repos_url": "https://api.github.com/users/hackyon/repos", "events_url": "https://api.github.com/users/hackyon/events{/privacy}", "received_events_url": "https://api.github.com/users/hackyon/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey @ArthurZucker @younesbelkada\r\n\r\nI was thinking SDPA (#28005) could be a good addition to BERT, so I drafted this change. It doesn't look too hairy so far.\r\n\r\nAs @ArthurZucker mentioned, BERT doesn't have a lot of params so there might not be much of a speedup, but this didn't look too difficult to implement so I figured whatever little improvement might still be helpful (as an aside, there's been some [benchmarking](https://github.com/Dao-AILab/flash-attention/blob/main/usage.md#mlperf-benchmarks) of Flash Attention on training other implementations of BERT, and it still shows decent improvements).\r\n\r\nCan you let me know if this is worth pursuing? If so, I'll add the tests and also fix the fix-copies dependencies. \r\n\r\nThanks!", "I think a good way to se if it is worth the shot is to benchmark your code and check if you have speedups in different contexts!", "Sounds good, lemme look into that", "@ArthurZucker I did some training and inference benchmarking for my change and posted the results in the PR description. \r\n\r\nIt looks like there are decent improvements across the board (percentage-wise, but I think the improvements would add up if we're doing a lot of training/inferencing). I think it could be a good addition. Thoughts?", "Sounds like a good addition then! I'll let @fxmarty review and will be doing the final pass!", "Just curious, is it similar to https://github.com/huggingface/transformers/pull/27478 ?\r\nSeems also https://github.com/huggingface/transformers/pull/28713 is highly related.", "re: @pommedeterresautee \r\n\r\nYes, it's similar. SDPA is built into pytorch, and can support Flash Attention (1) depending on the environment. AFAIK Flash Attention 2 isn't supported in SDPA yet, but there is a possibility for it to be supported down the road (but that should be built into pytorch already, and shouldn't need many changes from our end).\r\n\r\n", "Thanks, I think it is now\r\nhttps://pytorch.org/blog/pytorch2-2/\r\n[scaled_dot_product_attention](https://pytorch.org/docs/2.2/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA) now supports [FlashAttention-2](https://arxiv.org/abs/2307.08691), yielding around 2x speedups compared to previous versions.", "Oh nice, so I guess we could get FA2 for free eventually (when we upgrade pytorch).\r\n\r\nThanks for the links to similar work. I think they could cause some merge conflicts, so I'll message them and try to resolve it before it goes in.", "I've rebased off of head and marked as ready for review. I had to dig through a couple of issues to get the tests to pass, let me now if you want to chat about any of them.\r\n\r\nThanks! ", "@fxmarty @hackyon There's still several tests failing related to this PR. Once these are resolved you can ping me again for a final review", "The tests are passing now. I also verified that test_modeling_bert passes with RUN_SLOW=1 (which contains the tests to ensure model output is the same for eager and sdpa attentions)\r\n\r\nPlease take another look when you get a chance. Thanks!", "Thanks for reviewing!\r\n\r\n> Some general comments:\r\n> \r\n> * Let's wait for the merging of [Add tie_weights() to LM heads and set bias in set_output_embeddings()ย #28948](https://github.com/huggingface/transformers/pull/28948) before merging this in\r\n\r\nYup. 
I'll merge this PR to HEAD to get rid of the diffs once that other PR goes in.\r\n\r\n> * It would be good to add the performance numbers in the PR description to BERT's model page, similar to what's done for Flash Attention e.g. [here](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/gpt_neox#using-flash-attention-2.\r\n\r\nI'll look into it.\r\n\r\n> * `test_eager_matches_sdpa_inference` should be run for all existing models with SDPA implemented to confirm compatibility with the change in `processed_inputs`\r\n\r\nThis one is tricky. Locally, this method is already failing for some of the models on main/HEAD without my change (such as for FalconModelTest and Qwen2ModelTest). Any chance you can try to run this test on main/HEAD and see if you are seeing those failures on your machine as well?\r\n\r\n> * We shouldn't be setting `self._use_sdpa` that don't have an SDPA attention class. We can just about get away with it for the models which have an attention dict, but not for the other models.\r\n\r\nI've removed them.", "I added some documentation on SPDA to the BERT model page.\r\n\r\nFor the inference tests, I am seeing the same [failures](https://gist.github.com/hackyon/b7f679fb01b4f0446f8ce8a914dc7861) in FalconModelTest and Qwen2ModelTest with and without (ie. main/HEAD) my change. They should be unrelated to my changes. \r\n\r\nI think the Falcon failure is likely just an edge case problem (for some reason the difference is a little higher in this one case), whereas the Qwen2 failure is likely due to an incorrect SDPA implementation. ", "@hackyon can you update https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention as well?", "> @hackyon can you update https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention as well?\r\n\r\nUpdated! Thanks for reviewing again.\r\n\r\nWe're still waiting on #29024 to go in for the tests here to pass, but otherwise, the code here should be more or less complete. ", "> LGTM, great work!\r\n\r\nThanks!\r\n\r\n> One thing you could do to be sure: `CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv`\r\n\r\nRan this, and there was an unrelated test failure, but otherwise everything else passes.\r\n\r\n```\r\nFAILED tests/models/bert/test_tokenization_bert.py::BertTokenizationTest::test_saving_tokenizer_trainer - TypeError: Accelerator.__init__() got an unexpected keyword argument 'use_seedable_sampler'\r\n====================================================== 1 failed, 294 passed, 96 skipped, 409 warnings in 252.55s (0:04:12) =======================================================\r\n```" ]
1,706
1,708
null
CONTRIBUTOR
null
# What does this PR do? Adding support for SDPA (scaled dot product attention) for Bert. More context in #28005. Benchmarking Results on A100-80GB, CPUx12, RAM 96.6GB, OS Ubuntu 22.04, using BertLMHeadModel Training benchmark based on [fxmarty's script](https://gist.github.com/fxmarty/7e75cc3942d6974e4849093ebea0a331): |num_training_steps|batch_size|seq_len|Time per batch (eager - s)|Time per batch (sdpa - s)|Speedup (%)|Eager peak mem (MB)|sdpa peak mem (MB)|Mem saving (%)| |------------------|----------|-------|--------------------------|-------------------------|-----------|-------------------|------------------|--------------| |1000 |1 |256 |0.022 |0.018 |23.905 |1128.190 |1065.286 |5.905 | |1000 |1 |512 |0.034 |0.028 |20.473 |1345.791 |1093.933 |23.023 | |1000 |2 |256 |0.031 |0.026 |18.701 |1175.685 |1093.933 |7.473 | |1000 |2 |512 |0.057 |0.047 |21.315 |2123.874 |1370.097 |55.016 | |1000 |4 |256 |0.052 |0.044 |16.446 |1784.135 |1369.489 |30.277 | |1000 |4 |512 |0.106 |0.087 |21.524 |3706.609 |2196.791 |68.728 | Inference benchmark based on [fxmarty's script](https://gist.github.com/fxmarty/5113e4304fbdd38c9c3702ce44683f6a): |num_batches|batch_size|seq_len|Per token latency eager (ms)|Per token latency SDPA (ms)|Speedup (%)|Mem eager (MB)|Mem BT (MB)|Mem saved (%)| |-----------|----------|-------|----------------------------|---------------------------|-----------|--------------|-----------|-------------| |50 |1 |64 |5.906 |5.420 |8.962 |271.610 |271.407 |0.075 | |50 |1 |128 |5.825 |5.402 |7.834 |279.157 |279.718 |-0.200 | |50 |2 |64 |6.190 |5.349 |15.709 |291.489 |291.751 |-0.090 | |50 |2 |128 |6.168 |5.360 |15.066 |307.514 |307.776 |-0.085 | |50 |4 |64 |6.262 |5.392 |16.137 |332.177 |332.440 |-0.079 | |50 |4 |128 |6.201 |5.382 |15.215 |364.271 |364.742 |-0.129 | ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada (cc @fxmarty)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28802", "html_url": "https://github.com/huggingface/transformers/pull/28802", "diff_url": "https://github.com/huggingface/transformers/pull/28802.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28802.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28801/comments
https://api.github.com/repos/huggingface/transformers/issues/28801/events
https://github.com/huggingface/transformers/issues/28801
2,111,003,139
I_kwDOCUB6oc5901oD
28,801
Conversational Pipeline returns <|im_end|> in the assistant's output.
{ "login": "OfficialDelta", "id": 51007646, "node_id": "MDQ6VXNlcjUxMDA3NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/51007646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OfficialDelta", "html_url": "https://github.com/OfficialDelta", "followers_url": "https://api.github.com/users/OfficialDelta/followers", "following_url": "https://api.github.com/users/OfficialDelta/following{/other_user}", "gists_url": "https://api.github.com/users/OfficialDelta/gists{/gist_id}", "starred_url": "https://api.github.com/users/OfficialDelta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OfficialDelta/subscriptions", "organizations_url": "https://api.github.com/users/OfficialDelta/orgs", "repos_url": "https://api.github.com/users/OfficialDelta/repos", "events_url": "https://api.github.com/users/OfficialDelta/events{/privacy}", "received_events_url": "https://api.github.com/users/OfficialDelta/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for the detailed issue. I donโ€™t think the solution is to monkey patch pipeline. IMO the best practice is to define a single stop token, and not rely on strings (which as you mentioned is brittle because of previous tokens). ", "Thank you for your quick response.\r\n\r\nIs it possible to configure such a stop token without retraining the model? Fine tuning with LoRA was already expensive (becauase I used a vast expanse of data). I'm slightly experiencing PTSD from when I was configuring the fine tuning script, as I initially added a new token to the tokenizer for padding, which led to unhelpful CUDA errors and 12 hours of debugging to figure out the issue. \r\n\r\nThe initial issue with the padding was that the embedding layer of Mixtral didn't recognize the new token id, which would end up being the case here as well, correct?", "Hi @OfficialDelta, this is very good timing - it's actually something I was discussing with @gante yesterday, our generation guru! \r\n\r\nThe situation right now is that generation will end if the model outputs `tokenizer.eos_token`, and that token will not be included in the `generate()` output. However, as you have noticed, a sequence like `<|im_end|>` may get tokenized in different ways, or broken up into multiple tokens.\r\n\r\nIn transformers, the way we handle this is by adding control tokens like `<|im_end|>` as **special tokens** to the model tokenizer. Special tokens are always tokenized as a single token, which makes it possible to set them as the `eos_token`.\r\n\r\nHowever, we realize that there are several situations where this is not possible. For example, once a model has been trained, we can't add new tokens without retraining the model. Therefore, we're experimenting with allowing something like what you suggested - a list of \"termination strings\" that should halt generation. We'll keep you updated!", "@Rocketknight1 Thats great to hear! Let me know if you'd like any help, I'd be happy to contribute!", "No probs! Also, your diagnosis in your last post sounds correct - the CUDA errors were probably caused by adding a new token to the **tokenizer**, but not expanding the size of the model's embedding layer in sync with it, so the model tried to look up a token embedding that wasn't available in the embedding matrix. Try `model.resize_token_embeddings(len(tokenizer))` after you add the special token to the tokenizer.", "@Rocketknight1 and @gante any draft PRs you can link here? ", "None yet, but I'll link it once it's ready!", "Feel free to open it once you start working on it! No need for it to be ready" ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - use_cpu: False - debug: True - num_processes: 8 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - deepspeed_config: {'deepspeed_config_file': '/workspace/zero3.json', 'zero3_init_flag': True} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.2.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to inference on a custom fine-tuned `Mixtral-8x7B-Instruct-v0.1` model. The fine-tuning dataset I generated used the chatml format for tokenizing the data, and when I try inferencing, the conversational pipeline returns the `<|im_end|>` text at the end. Here is a minimal working example: ```py from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig ) from peft import PeftModelForCausalLM # load mixtral quantized because inferencing on a single GPU bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( "mistralai/Mixtral-8x7B-Instruct-v0.1", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", trust_remote_code=True, quantization_config=bnb_config, ) # load the custom LoRA adapter for the fine-tuned chatml model lora_model = PeftModelForCausalLM.from_pretrained(model, '/workspace/chatml-lora-checkpoint') # load the tokenizer with the custom chatml format tokenizer = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x7B-Instruct-v0.1') tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}" tokenizer.pad_token = tokenizer.eos_token # finally, load the pipeline and try inferencing generator = pipeline("conversational", model=lora_model, tokenizer=tokenizer) output = generator([ { 'role': 'user', 'content': 'Hello, how are you today?' } ]) print(output) ``` Output: ``` Conversation id: 7dc0e9fd-9d79-49c8-b4e1-a01b6ed63c98 user: Hello, how are you today? assistant: I'm an artificial intelligence. How can I assist you today?<|im_end|> ``` After troubleshooting, I noticed in `postprocess` function of the conversational pipeline ```py def postprocess(self, model_outputs, clean_up_tokenization_spaces=True): output_ids = model_outputs["output_ids"] answer = self.tokenizer.decode( output_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=clean_up_tokenization_spaces, ) conversation = model_outputs["conversation"] conversation.add_message({"role": "assistant", "content": answer}) return conversation ``` The decoded `answer` has `skip_special_tokens` as `True`. So, to solve this issue, I considered adding `<|im_end|>` as a special token. However, the model itself wasn't trained on this token, and <|im_end|> was originally encoded as multiple tokens. Before coming across this issue, I wanted to have the model consider <|im_end|> as a custom stopping token. In the process of implementing this, i realized that my model, which sometimes outputted `<|im_end|>` as `\n<|im_end|>` or `\n\n<|im_end|>` (variable number of `\n`'s), which were each tokenized differently than `<|im_end|>` by itself. ```py print({ 'no new line': tokenizer('<|im_end|>', add_special_tokens=False)['input_ids'], 'one new line': tokenizer('\n<|im_end|>', add_special_tokens=False)['input_ids'], 'two new lines': tokenizer('\n\n<|im_end|>', add_special_tokens=False)['input_ids'] }) ``` ``` { 'no new line': [523, 28766, 321, 28730, 416, 28766, 28767], 'one new line': [28705, 13, 28789, 28766, 321, 28730, 416, 28766, 28767], 'two new lines': [28705, 13, 13, 28789, 28766, 321, 28730, 416, 28766, 28767] } ``` Notice how with new lines, the 523 token becomes 28789, which is preceeded by 28705 and a number of 13's. This means that having this as a special token is nearly impossible to do with the intended functionality of it ignoring the end token when post processing despite new lines. The main way to make it work, at least to me, would be to add custom logic for processing the token which is capable of handling the new line tokens. In order to combat this for my early stopping, I decided to take the easy way out and decode the tokenized input_ids to see if the end contained my custom stop token: ```py from transformers import StoppingCriteria, StoppingCriteriaList class StoppingCriteriaSub(StoppingCriteria): def __init__(self, stops = [], encounters=1, tokenizer=None): super().__init__() self.stops = stops self.ENCOUNTERS = encounters self.tokenizer = tokenizer assert tokenizer is not None, "Tokenizer is required" def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor): stop_count = 0 for input_ids_list in input_ids: for stop in self.stops: length = len(stop) + 5 # buffer for special tokens preceeding stop if len(input_ids_list) < length: continue last_elements = input_ids_list[-length:] decoded_elements = self.tokenizer.decode(last_elements) if stop in decoded_elements: stop_count += 1 if stop_count >= self.ENCOUNTERS: return True return False stop_words = ["<|im_end|>"] stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words, tokenizer=tokenizer)]) ``` The code above *works* but it doesn't feel like the best method of solving this. ### Expected behavior I would like for there to be the potential of custom removing the <|im_end|> text at the end, despite the tokenization differences with new lines.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28801/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28800/comments
https://api.github.com/repos/huggingface/transformers/issues/28800/events
https://github.com/huggingface/transformers/pull/28800
2,110,827,898
PR_kwDOCUB6oc5lnDE6
28,800
`erfinv_` and `clamp_` ops do not exist in float16+cpu
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28800). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
null
MEMBER
null
On cpu, some ops are not defined. This messes up a lot of init weights operations for siglip. I know fp16 + CPU is weird and probably never happening in practise. As such, feel free to ignore this PR. Reproduction case: ```python from transformers import AutoModel, AutoConfig import torch config = AutoConfig.from_pretrained("google/siglip-so400m-patch14-384") model = AutoModel.from_config(config, torch_dtype=torch.float16) print(sum([m.sum().item() for m in model.parameters()])) # sanity check ``` I am using torch==2.0.1 (+ cu118).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28800", "html_url": "https://github.com/huggingface/transformers/pull/28800", "diff_url": "https://github.com/huggingface/transformers/pull/28800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28800.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28799/comments
https://api.github.com/repos/huggingface/transformers/issues/28799/events
https://github.com/huggingface/transformers/issues/28799
2,110,630,743
I_kwDOCUB6oc59zatX
28,799
`token` parameter not respected for `AutoModel`
{ "login": "squidarth", "id": 850115, "node_id": "MDQ6VXNlcjg1MDExNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/850115?v=4", "gravatar_id": "", "url": "https://api.github.com/users/squidarth", "html_url": "https://github.com/squidarth", "followers_url": "https://api.github.com/users/squidarth/followers", "following_url": "https://api.github.com/users/squidarth/following{/other_user}", "gists_url": "https://api.github.com/users/squidarth/gists{/gist_id}", "starred_url": "https://api.github.com/users/squidarth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/squidarth/subscriptions", "organizations_url": "https://api.github.com/users/squidarth/orgs", "repos_url": "https://api.github.com/users/squidarth/repos", "events_url": "https://api.github.com/users/squidarth/events{/privacy}", "received_events_url": "https://api.github.com/users/squidarth/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for reporting. Could you provide the traceback? \r\n", "Hi @ArthurZucker -- it's this:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py\", line 261, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/usr/local/lib/python3.9/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/jinaai/jina-embeddings-v2-base-en/resolve/main/tokenizer_config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/utils/hub.py\", line 429, in cached_file\r\n resolved_file = hf_hub_download(\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 1346, in hf_hub_download\r\n raise head_call_error\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 1232, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/file_download.py\", line 1608, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/usr/local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py\", line 277, in hf_raise_for_status\r\n raise GatedRepoError(message, response) from e\r\nhuggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-65ba7816-484b8bb66a482da2430cf7b8;65c4fe9b-7a6a-4d35-93d3-e19fdf809080)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nCannot access gated repo for url https://huggingface.co/jinaai/jina-embeddings-v2-base-en/resolve/main/tokenizer_config.json.\r\nRepo model jinaai/jina-embeddings-v2-base-en is gated. 
You must be authenticated to access it.\r\nTraceback (most recent call last):\r\n File \"/app/model/model.py\", line 10, in load\r\n self._model = AutoModel.from_pretrained(\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py\", line 560, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 3085, in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/jinaai/jina-bert-implementation/c41d17d28431712f4b24b52bb83d426d7137a02f/modeling_bert.py\", line 1108, in __init__\r\n self.tokenizer = AutoTokenizer.from_pretrained(config._name_or_path)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py\", line 701, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py\", line 534, in get_tokenizer_config\r\n resolved_config_file = cached_file(\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/utils/hub.py\", line 444, in cached_file\r\n raise EnvironmentError(\r\nOSError: You are trying to access a gated repo.\r\nMake sure to request access at https://huggingface.co/jinaai/jina-embeddings-v2-base-en and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.\r\n```\r\n", "(and just to confirm, I do _not_ get this stacktrace when using the `HF_TOKEN` environment variable (it works great), just when using the `AutoModel(..., token=\"my_hf_token\")` parameter). thanks for the help here!", "Hi @squidarth \r\nI just tried your script on main with a new token + the same model and commit hash and was not able to repro - can you make sure you are using transformers and huggingface_hub from latest pypi ? `pip install -U transformers huggingface_hub`" ]
1,706
1,706
null
NONE
null
### System Info transformers version: 4.37.2 (`transformers-cli env` errored out for me) ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi there, I am trying to use https://huggingface.co/jinaai/jina-embeddings-v2-base-en, and noticed the following problem: When using the `token` parameter on `AutoModel`, I get the "You are trying to access a gated repo" error (I have accepted the terms for the model. I'm working around this by using the `HF_TOKEN` environment variable. Code: ``` # doesn't work model = AutoModel.from_pretrained( 'jinaai/jina-embeddings-v2-base-en', revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8", trust_remote_code=True, token="MY_HF_TOKEN" ) # works os.environ["HF_TOKEN"] = "MY_HF_TOKEN" model = AutoModel.from_pretrained( 'jinaai/jina-embeddings-v2-base-en', revision="0f472a4cde0e6e50067b8259a3a74d1110f4f8d8", trust_remote_code=True, token="MY_HF_TOKEN" ) ``` thanks for any help here ### Expected behavior Using the `token` parameter should lead to the same behavior as using the `HF_TOKEN` environment variable.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28799/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28798/comments
https://api.github.com/repos/huggingface/transformers/issues/28798/events
https://github.com/huggingface/transformers/pull/28798
2,110,437,958
PR_kwDOCUB6oc5lltWB
28,798
fix some docs and tensor device bug
{ "login": "zspo", "id": 26846598, "node_id": "MDQ6VXNlcjI2ODQ2NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zspo", "html_url": "https://github.com/zspo", "followers_url": "https://api.github.com/users/zspo/followers", "following_url": "https://api.github.com/users/zspo/following{/other_user}", "gists_url": "https://api.github.com/users/zspo/gists{/gist_id}", "starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zspo/subscriptions", "organizations_url": "https://api.github.com/users/zspo/orgs", "repos_url": "https://api.github.com/users/zspo/repos", "events_url": "https://api.github.com/users/zspo/events{/privacy}", "received_events_url": "https://api.github.com/users/zspo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @zspo, thanks for opening this PR and contributing to the repo! \r\n\r\nThis PR contains a handle of unrelated changes. Let's split up the doc changes from the device placement ones. " ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Fixes 1. fix missing spaces in parameter descriptions 2. fix tensor device 3. add parameter description ## Who can review? @ArthurZucker @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28798", "html_url": "https://github.com/huggingface/transformers/pull/28798", "diff_url": "https://github.com/huggingface/transformers/pull/28798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28798.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28797/comments
https://api.github.com/repos/huggingface/transformers/issues/28797/events
https://github.com/huggingface/transformers/issues/28797
2,110,224,239
I_kwDOCUB6oc59x3dv
28,797
Segmentation fault when importing ESMFold and Tokenizers from transformers along with Pyrosetta
{ "login": "SIAndersson", "id": 117816326, "node_id": "U_kgDOBwW8Bg", "avatar_url": "https://avatars.githubusercontent.com/u/117816326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SIAndersson", "html_url": "https://github.com/SIAndersson", "followers_url": "https://api.github.com/users/SIAndersson/followers", "following_url": "https://api.github.com/users/SIAndersson/following{/other_user}", "gists_url": "https://api.github.com/users/SIAndersson/gists{/gist_id}", "starred_url": "https://api.github.com/users/SIAndersson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SIAndersson/subscriptions", "organizations_url": "https://api.github.com/users/SIAndersson/orgs", "repos_url": "https://api.github.com/users/SIAndersson/repos", "events_url": "https://api.github.com/users/SIAndersson/events{/privacy}", "received_events_url": "https://api.github.com/users/SIAndersson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @Rocketknight1 maybe we can reproduce? ", "Hi @SIAndersson this is quite an unusual bug! We'll see what we can figure out - in the meantime, if you have any other machines you can test on, can you try it there? \r\n\r\nAlso, the `EsmForProteinFolding` model in `Transformers` doesn't import anything unusual, so it should behave like any other model in the library. To help figure out the issue, can you try:\r\n\r\n1) Importing `AutoTokenizer` and `EsmForProteinFolding` separately to see which causes the issue\r\n2) If the issue is `EsmForProteinFolding`, can you try importing another language model class like `BertForSequenceClassification` and let me know if the same issue occurs?", "@Rocketknight1 Hi, thank you for the fast reply!\r\nI have tried importing both `EsmForProteinFolding` and `AutoTokenizer` separately, and changing which one I import first, but both result in the segmentation fault. I tried importing`BertForSequenceClassification` as well and it resulted in the same issue. Very strange issue! I even tried uninstalling and reinstalling both `Transformers` and `PyRosetta` to see if it would solve the issue, but it persists.", "Hi @SIAndersson, that's annoying! I tried, but unfortunately I can't actually get access to `pyrosetta` to reproduce the issue here - Hugging Face isn't an academic institution, so I can't get a free licence.\r\n\r\nAs a workaround, maybe you could run ESM and save the outputs, and then load them in another Python process to handle them with pyrosetta? I realize that's not very convenient, but I'm not sure what else to try because I'm kind of stuck when it comes to diagnosing the problem.", "@Rocketknight1 Ah, that's unfortunate!\r\n\r\nI tried calling the model from a separate script instead of directly in the code and it worked without issue. It is a bit slower, but as long as it works, it's not a huge issue. Thank you for the help!" ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.37.1 - Platform: Linux-4.18.0-513.11.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.3.3 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import pyrosetta from transformers import AutoTokenizer, EsmForProteinFolding ``` ### Expected behavior Expected behaviour: the module imports without issue. I can import pyrosetta on its own without issue. I can import the transformers modules without issue and run inference on PDB modules, as described in the protein structure prediction Jupyter notebook. I can do this without issue in a separate script. It is only when I import both that the segmentation fault occurs. The import order does not matter. Given that both work separately, I would expect them to work together as I cannot find any package conflicts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28797/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28796/comments
https://api.github.com/repos/huggingface/transformers/issues/28796/events
https://github.com/huggingface/transformers/pull/28796
2,110,160,851
PR_kwDOCUB6oc5lkv1u
28,796
Make `is_torch_bf16_available_on_device` more strict
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28796). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Yes, but we get the same set of tests to run as before. Nothing change. And if on GPU, fp16 test will be run anyway (it won't reach the line that I changed in this PR). The change here only affects CI on CPU (CircleCI)" ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? The layernorm op is required to be supported in order for this function to return `True`. ### detail Previously, the function `is_torch_bf16_available_on_device` check ```python x = torch.zeros(2, 2, dtype=torch.float16).to(device) _ = x @ x ``` With torch < 2.2 on CPU, this will give > RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' and `is_torch_bf16_available_on_device` returns `False`. With torch 2.2, this doesn't fail and the function return `True`. However, many models use `LayerNorm` and this is still not supported by torch 2.2 on CPU. We then get many failures for fp16 tests on CircleCI (cpu only)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28796/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28796", "html_url": "https://github.com/huggingface/transformers/pull/28796", "diff_url": "https://github.com/huggingface/transformers/pull/28796.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28796.patch", "merged_at": 1706774634000 }
https://api.github.com/repos/huggingface/transformers/issues/28795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28795/comments
https://api.github.com/repos/huggingface/transformers/issues/28795/events
https://github.com/huggingface/transformers/pull/28795
2,110,014,175
PR_kwDOCUB6oc5lkPbU
28,795
canonical repos moves
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Already done :)", "too quick", "test is flaky no @ArthurZucker ?", "Yes merging without it", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28795). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "thanks friends!" ]
1,706
1,706
1,706
MEMBER
null
we are phasing out canonical models & datasets (as a reminder, "canonical" repos are those that were not under an org or user namespace) and moving them under ad hoc organization namespaces Note that this move should be backward compatible i.e. old versions of transformers that do `AutoModel.from_pretrained("gpt2")` should still work. Download stats should also be backward-compatible. Thanks for reading!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28795", "html_url": "https://github.com/huggingface/transformers/pull/28795", "diff_url": "https://github.com/huggingface/transformers/pull/28795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28795.patch", "merged_at": 1706707111000 }
https://api.github.com/repos/huggingface/transformers/issues/28794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28794/comments
https://api.github.com/repos/huggingface/transformers/issues/28794/events
https://github.com/huggingface/transformers/issues/28794
2,109,842,049
I_kwDOCUB6oc59waKB
28,794
BART-base flash_attention_2 causes CUDA error
{ "login": "Kripner", "id": 9218121, "node_id": "MDQ6VXNlcjkyMTgxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/9218121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kripner", "html_url": "https://github.com/Kripner", "followers_url": "https://api.github.com/users/Kripner/followers", "following_url": "https://api.github.com/users/Kripner/following{/other_user}", "gists_url": "https://api.github.com/users/Kripner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kripner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kripner/subscriptions", "organizations_url": "https://api.github.com/users/Kripner/orgs", "repos_url": "https://api.github.com/users/Kripner/repos", "events_url": "https://api.github.com/users/Kripner/events{/privacy}", "received_events_url": "https://api.github.com/users/Kripner/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for raising the issue, this seems to be a hardware related issue see example issues: https://github.com/nerfstudio-project/nerfacc/issues/207 nothing much we can do on our side I believe. FYI @younesbelkada ", "Hmm not sure what is wrong here, can you try to train the model without padd tokens by packing examples together?", "I had the same problem. Did you solve it" ]
1,706
1,707
null
CONTRIBUTOR
null
### System Info - `transformers` version: 4.37.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForSeq2SeqLM.from_pretrained( "facebook/bart-base", attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, ) model.to("cuda") tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base") def preprocess_function(examples): inputs = examples["document"] outputs = examples["summary"] tokenized_inputs = tokenizer(inputs, max_length=1024, padding="max_length", truncation=True) tokenized_outputs = tokenizer(outputs, max_length=64, padding="max_length", truncation=True) return { "input_ids": tokenized_inputs["input_ids"], "attention_mask": tokenized_inputs["attention_mask"], "labels": tokenized_outputs["input_ids"], } train_data = datasets.load_dataset("xsum", split="train[:100]", trust_remote_code=True) train_data = train_data.map(preprocess_function, batched=True, remove_columns=["document", "summary"]) training_args = Seq2SeqTrainingArguments( output_dir="output", ) trainer = Seq2SeqTrainer( model=model, args=training_args, tokenizer=tokenizer, train_dataset=train_data, ) trainer.train() ``` ### Expected behavior Expected behavior as per https://huggingface.co/docs/transformers/en/perf_train_gpu_one#flash-attention-2: > You can speedup the training throughput by using Flash Attention 2 integration in transformers. Check out the appropriate section in the [single GPU section](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#Flash-Attention-2) to learn more about how to load a model with Flash Attention 2 modules. With Bart being listed as supported in the quoted link. However, the script triggers CUDA error. The full output is (with `CUDA_LAUNCH_BLOCKING=1`): ``` You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. WARNING:dvclive:Can't save experiment without a Git Repo. Create a Git repo (`git init`) and commit (`git commit`). 3%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ | 1/39 [00:00<00:13, 2.80it/s]Traceback (most recent call last): File "/app/pt/experiments/playground.py", line 100, in <module> trainer.train() File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2768, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2791, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1731, in forward outputs = self.model( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1617, in forward decoder_outputs = self.decoder( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1470, in forward layer_outputs = decoder_layer( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 779, in forward hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 403, in forward attn_output = self._flash_attention_forward( File "/opt/conda/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 454, in _flash_attention_forward attn_output_unpad = flash_attn_varlen_func( File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 1059, in flash_attn_varlen_func return FlashAttnVarlenFunc.apply( File "/opt/conda/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 576, in forward out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward( File "/opt/conda/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 85, in _flash_attn_varlen_forward out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd( RuntimeError: CUDA error: invalid configuration argument Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 3%|โ–Ž | 1/39 [00:00<00:20, 1.82it/s] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28794/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28793/comments
https://api.github.com/repos/huggingface/transformers/issues/28793/events
https://github.com/huggingface/transformers/issues/28793
2,109,683,159
I_kwDOCUB6oc59vzXX
28,793
BART-base save_pretrained triggers a warning about GenerationConfig
{ "login": "Kripner", "id": 9218121, "node_id": "MDQ6VXNlcjkyMTgxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/9218121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kripner", "html_url": "https://github.com/Kripner", "followers_url": "https://api.github.com/users/Kripner/followers", "following_url": "https://api.github.com/users/Kripner/following{/other_user}", "gists_url": "https://api.github.com/users/Kripner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kripner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kripner/subscriptions", "organizations_url": "https://api.github.com/users/Kripner/orgs", "repos_url": "https://api.github.com/users/Kripner/repos", "events_url": "https://api.github.com/users/Kripner/events{/privacy}", "received_events_url": "https://api.github.com/users/Kripner/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey that is expected as it is a pretty old model! You can safely ignore this for now, otherwise they should be removed. Feel free to open an issue on the repository's discussion tab ! " ]
1,706
1,706
null
CONTRIBUTOR
null
### System Info - `transformers` version: 4.37.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base") model.save_pretrained("model") ``` ### Expected behavior Expected behavior: The model is saved without warnings. Actual behavior: Following warning is triggered before saving the model: > Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41. > Non-default generation parameters: {'early_stopping': True, 'num_beams': 4, 'no_repeat_ngram_size': 3, 'forced_bos_token_id': 0, 'forced_eos_token_id': 2}
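A minimal sketch (not from the issue itself) of the workaround the maintainer's comment hints at: moving the flagged generation parameters into a dedicated `GenerationConfig` before saving. The parameter values are copied from the warning message; whether the legacy entries also need to be cleared from `model.config` to fully silence the warning is an assumption worth verifying.

```python
from transformers import AutoModelForSeq2SeqLM, GenerationConfig

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Move the non-default generation parameters into a GenerationConfig
# (values taken from the warning text above).
model.generation_config = GenerationConfig(
    early_stopping=True,
    num_beams=4,
    no_repeat_ngram_size=3,
    forced_bos_token_id=0,
    forced_eos_token_id=2,
)
model.save_pretrained("model")  # also writes generation_config.json
```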
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28793/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28792/comments
https://api.github.com/repos/huggingface/transformers/issues/28792/events
https://github.com/huggingface/transformers/issues/28792
2,109,525,178
I_kwDOCUB6oc59vMy6
28,792
Add InternLM1 & InternLM2 model
{ "login": "PommesPeter", "id": 54879512, "node_id": "MDQ6VXNlcjU0ODc5NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/54879512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PommesPeter", "html_url": "https://github.com/PommesPeter", "followers_url": "https://api.github.com/users/PommesPeter/followers", "following_url": "https://api.github.com/users/PommesPeter/following{/other_user}", "gists_url": "https://api.github.com/users/PommesPeter/gists{/gist_id}", "starred_url": "https://api.github.com/users/PommesPeter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PommesPeter/subscriptions", "organizations_url": "https://api.github.com/users/PommesPeter/orgs", "repos_url": "https://api.github.com/users/PommesPeter/repos", "events_url": "https://api.github.com/users/PommesPeter/events{/privacy}", "received_events_url": "https://api.github.com/users/PommesPeter/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hey! Thanks, seems like it's already supported: #26302 ", "> Hey! Thanks, seems like it's already supported: #26302\r\n\r\nAlright. I will find another \"good\" model to implement. Closed." ]
1,706
1,706
1,706
NONE
null
### Model description Hey, the recently released [InternLM](https://github.com/InternLM/InternLM) seems like it would be a nice addition to transformers. Basically, the model has achieved performance that currently exceeds LLaMA2, Mistral, and other models on many benchmarks. Adding the model to transformers would make it easier to use. It comes in two parameter sizes, 7B and 20B, for the v1 and v2 versions. In addition, it also includes three types of models: base, sft, and chat. Maybe there are already plans to integrate it @NielsRogge? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Project Page: https://internlm.intern-ai.org.cn/ GitHub Repo: https://github.com/InternLM/InternLM
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28792/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28791/comments
https://api.github.com/repos/huggingface/transformers/issues/28791/events
https://github.com/huggingface/transformers/issues/28791
2,109,431,587
I_kwDOCUB6oc59u18j
28,791
Inappropriate reduce operation of "num_input_tokens_seen" is prone to get training stuck.
{ "login": "YouliangHUANG", "id": 56789071, "node_id": "MDQ6VXNlcjU2Nzg5MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/56789071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YouliangHUANG", "html_url": "https://github.com/YouliangHUANG", "followers_url": "https://api.github.com/users/YouliangHUANG/followers", "following_url": "https://api.github.com/users/YouliangHUANG/following{/other_user}", "gists_url": "https://api.github.com/users/YouliangHUANG/gists{/gist_id}", "starred_url": "https://api.github.com/users/YouliangHUANG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YouliangHUANG/subscriptions", "organizations_url": "https://api.github.com/users/YouliangHUANG/orgs", "repos_url": "https://api.github.com/users/YouliangHUANG/repos", "events_url": "https://api.github.com/users/YouliangHUANG/events{/privacy}", "received_events_url": "https://api.github.com/users/YouliangHUANG/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "```\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1851, in _inner_training_loop\r\n self.state.num_input_tokens_seen += torch.sum(self.accelerator.gather(\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py\", line 2159, in gather\r\n return gather(tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 344, in wrapper\r\n return function(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 405, in gather\r\n return _gpu_gather(tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 324, in _gpu_gather\r\n return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 129, in recursively_apply\r\n return func(data, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 321, in _gpu_gather_one\r\n torch.distributed.all_gather(output_tensors, tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py\", line 2806, in all_gather\r\n work = default_pg.allgather([tensor_list], [tensor])\r\nRuntimeError: No backend type associated with device type cpu\r\n```\r\n\r\nPatched transformer with above hotfix, it seems the error still happened. Could you help have a look ? thanks.", "> ```\r\n> File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1851, in _inner_training_loop\r\n> self.state.num_input_tokens_seen += torch.sum(self.accelerator.gather(\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py\", line 2159, in gather\r\n> return gather(tensor)\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 344, in wrapper\r\n> return function(*args, **kwargs)\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 405, in gather\r\n> return _gpu_gather(tensor)\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 324, in _gpu_gather\r\n> return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 129, in recursively_apply\r\n> return func(data, *args, **kwargs)\r\n> File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 321, in _gpu_gather_one\r\n> torch.distributed.all_gather(output_tensors, tensor)\r\n> File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\n> return func(*args, **kwargs)\r\n> File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py\", line 2806, in all_gather\r\n> work = default_pg.allgather([tensor_list], [tensor])\r\n> RuntimeError: No backend type associated with device type cpu\r\n> ```\r\n> \r\n> Patched transformer with above hotfix, it seems the error still happened. Could you help have a look ? thanks.\r\n\r\nI also encountered the same problem in the first place, and that's why I added a statement to assign the device using `input_device = inputs[main_input_name].device`.\r\nAs the original code works, assigning the new tensor to the same device should also work as it was. 
Can you double-check the device assigned to the tensor?", "Thank you @YouliangHUANG for the issue as well as the suggestion to fix it. It makes sense, it would be great if you want to open a PR with the suggested fix.", "> I also encountered the same problem in the first place, and that's why I added a statement to assign the device using `input_device = inputs[main_input_name].device`. As the original code works, assigning the new tensor to the same device should also work as it was. Can you double-check the device assigned to the tensor?\r\n\r\n@YouliangHUANG \r\n\r\n```\r\ninputs[main_input_name].numel(): 1336\r\ninputs[main_input_name].device: cpu\r\n```\r\n\r\nAfter applied the fix, same error happened (transformers==4.37.2):\r\n\r\n```\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1855, in _inner_training_loop\r\n self.accelerator.gather(\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py\", line 2161, in gather\r\n return gather(tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 376, in wrapper\r\n return function(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 437, in gather\r\n return _gpu_gather(tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 356, in _gpu_gather\r\n return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 135, in recursively_apply\r\n return func(data, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 353, in _gpu_gather_one\r\n torch.distributed.all_gather(output_tensors, tensor)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/c10d_logger.py\", line 47, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py\", line 2806, in all_gather\r\n work = default_pg.allgather([tensor_list], [tensor])\r\nRuntimeError: No backend type associated with device type cpu\r\n```\r\n", "it works now after force set the device as 'cuda', so it seems that original error is caused by the allgather op not supported in cpu device ?", "> it works now after force set the device as 'cuda', so it seems that original error is caused by the allgather op not supported in cpu device ?\r\n\r\n@thincal Please check your backend type, and refer to https://pytorch.org/docs/stable/distributed.html for more details.", "> > it works now after force set the device as 'cuda', so it seems that original error is caused by the allgather op not supported in cpu device ?\r\n> \r\n> @thincal Please check your backend type, and refer to https://pytorch.org/docs/stable/distributed.html for more details.\r\n\r\nYes, it's the nccl backend used, which doesn't support cpu device.", "> The length of \"inputs[main_input_name]\" is not guaranteed to be the same when using ddp, which may make the training process hang.\r\n\r\nso what change is solving this problem ?", "> > The length of \"inputs[main_input_name]\" is not guaranteed to be the same when using ddp, which may make the training process hang.\r\n> \r\n> so what change is solving this problem ?\r\n\r\n``torch.tensor(inputs[main_input_name].numel(), device=input_device, dtype=torch.int64)``\r\n@thincal This code will create a tensor with the size of 1, which records how many input 
tokens there are in the local worker. Therefore the tensor length is aligned and can be gathered through ``self.accelerator.gather`` and then sum into the total number.", "> > > The length of \"inputs[main_input_name]\" is not guaranteed to be the same when using ddp, which may make the training process hang.\r\n> > \r\n> > \r\n> > so what change is solving this problem ?\r\n> \r\n> `torch.tensor(inputs[main_input_name].numel(), device=input_device, dtype=torch.int64)` @thincal This code will create a tensor with the size of 1, which records how many input tokens there are in the local worker. Therefore the tensor length is aligned and can be gathered through `self.accelerator.gather` and then sum into the total number.\r\n\r\nOK, that's great. But it seems that the device should be decided according to the ddp backend ? " ]
1,706
1,708
null
NONE
null
### System Info Trivial ### Who can help? @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction See [src/transformers/trainer.py line 1870](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1870) `self.state.num_input_tokens_seen += self.accelerator.gather(inputs[main_input_name]).numel()` The length of "inputs[main_input_name]" is not guaranteed to be the same when using ddp, which may make the training process hang. Besides, in a distributed setup, it costs a lot to gather the WHOLE input tensors on different workers. It is better to call .numel() first and then .gather(). Ref: [Stuck when using model.generate() and acclerator.gather() in the distributed setting](https://github.com/huggingface/accelerate/issues/1326#issuecomment-1513145864) ### Expected behavior Fix: input_device = inputs[main_input_name].device self.state.num_input_tokens_seen += torch.sum(self.accelerator.gather(torch.tensor(inputs[main_input_name].numel(), device=input_device, dtype=torch.int64))).item()
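To make the proposed fix concrete, here is a small single-process illustration (an assumed setup, not code from the issue): each worker would only communicate a one-element int64 tensor holding its local token count, so differing sequence lengths across ranks no longer matter for the gather.

```python
import torch

# Stand-in for the batch a single worker sees; the shape is arbitrary here.
inputs = {"input_ids": torch.randint(0, 1000, (4, 37))}
main_input_name = "input_ids"

input_device = inputs[main_input_name].device
local_token_count = torch.tensor(
    inputs[main_input_name].numel(), device=input_device, dtype=torch.int64
)
# In the Trainer this scalar tensor would be passed to accelerator.gather(...)
# and summed; gathering a size-1 tensor avoids shape mismatches across ranks.
print(local_token_count.item())  # 4 * 37 = 148
```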
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28791/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28791/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28790/comments
https://api.github.com/repos/huggingface/transformers/issues/28790/events
https://github.com/huggingface/transformers/pull/28790
2,109,295,655
PR_kwDOCUB6oc5lhzmV
28,790
๐ŸŒ [i18n-ZH] Translate chat_templating.md into Chinese
{ "login": "shibing624", "id": 10249622, "node_id": "MDQ6VXNlcjEwMjQ5NjIy", "avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shibing624", "html_url": "https://github.com/shibing624", "followers_url": "https://api.github.com/users/shibing624/followers", "following_url": "https://api.github.com/users/shibing624/following{/other_user}", "gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}", "starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shibing624/subscriptions", "organizations_url": "https://api.github.com/users/shibing624/orgs", "repos_url": "https://api.github.com/users/shibing624/repos", "events_url": "https://api.github.com/users/shibing624/events{/privacy}", "received_events_url": "https://api.github.com/users/shibing624/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28790). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", " @yyLeaves @jiaqiw09 @bojone @yuanzhoulvpi please help to reviewing the translate work." ]
1,706
1,708
null
NONE
null
# What does this PR do? Translate chat_templating.md into Chinese part of https://github.com/huggingface/transformers/issues/20095 ## Who can review? Documentation: @stevhliu Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28790/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28790", "html_url": "https://github.com/huggingface/transformers/pull/28790", "diff_url": "https://github.com/huggingface/transformers/pull/28790.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28790.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28789/comments
https://api.github.com/repos/huggingface/transformers/issues/28789/events
https://github.com/huggingface/transformers/pull/28789
2,109,140,665
PR_kwDOCUB6oc5lhS3S
28,789
[`HFQuantizer`] Remove `check_packages_compatibility` logic
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28789). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Fixes the currently failing tests for AWQ: https://github.com/huggingface/transformers/actions/runs/7705429360/job/21003940543 I propose to remove the `check_packages_compatibility` logic in the `HfQuantizer` as: 1- it is a duplicate of `validate_environment` 2- for some packages such as awq, `_is_package_available()` returns False because `importlib.util.find_spec(pkg_name) is not None` correctly returns `True` but `importlib.metadata.version(pkg_name)` fails, since autoawq is registered as the `awq` module while the PyPI package name is `autoawq`. As I expect to face similar behaviour in future quantization packages, I propose to simply remove that logic and handle everything in `validate_environment`. cc @ArthurZucker
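A short sketch of the packaging quirk described above (assuming `autoawq` is installed; otherwise both lookups simply fail): the module is importable as `awq`, but the distribution metadata is registered under `autoawq`, so a version lookup keyed on the module name raises.

```python
import importlib.metadata
import importlib.util

# Importable as a module named "awq"...
print(importlib.util.find_spec("awq") is not None)

# ...but the distribution on PyPI is "autoawq", so this lookup fails.
try:
    print(importlib.metadata.version("awq"))
except importlib.metadata.PackageNotFoundError:
    print(importlib.metadata.version("autoawq"))
```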
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28789/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28789", "html_url": "https://github.com/huggingface/transformers/pull/28789", "diff_url": "https://github.com/huggingface/transformers/pull/28789.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28789.patch", "merged_at": 1706667688000 }
https://api.github.com/repos/huggingface/transformers/issues/28788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28788/comments
https://api.github.com/repos/huggingface/transformers/issues/28788/events
https://github.com/huggingface/transformers/pull/28788
2,109,011,519
PR_kwDOCUB6oc5lg3IY
28,788
[`bnb`] Fix bnb slow tests
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28788). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Fixes the currently failing BNB slow tests on main: https://github.com/huggingface/transformers/actions/runs/7705429360/job/21003940543 https://github.com/huggingface/transformers/pull/28266, which was merged right before the quantizer refactoring PR, broke the tests. Since the attributes `load_in_4bit` and `load_in_8bit` have been removed in favor of a property method, the fix is simply to pass them explicitly in the `to_dict` method. cc @ArthurZucker
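A generic illustration (not the actual `BitsAndBytesConfig` code) of why a property needs special handling in a `to_dict`-style serializer: properties do not appear in `__dict__`, so they must be added to the output explicitly.

```python
class QuantConfig:
    def __init__(self, quant_method="bitsandbytes_4bit"):
        self.quant_method = quant_method

    @property
    def load_in_4bit(self):
        # Derived from quant_method, so it is not stored in __dict__.
        return self.quant_method == "bitsandbytes_4bit"

    def to_dict(self):
        output = dict(self.__dict__)
        # Expose the property explicitly, mirroring the fix described above.
        output["load_in_4bit"] = self.load_in_4bit
        return output

print(QuantConfig().to_dict())
# {'quant_method': 'bitsandbytes_4bit', 'load_in_4bit': True}
```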
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28788/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28788", "html_url": "https://github.com/huggingface/transformers/pull/28788", "diff_url": "https://github.com/huggingface/transformers/pull/28788.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28788.patch", "merged_at": 1706661080000 }
https://api.github.com/repos/huggingface/transformers/issues/28787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28787/comments
https://api.github.com/repos/huggingface/transformers/issues/28787/events
https://github.com/huggingface/transformers/issues/28787
2,108,486,087
I_kwDOCUB6oc59rPHH
28,787
Converting TF2 SavedModel models to Huggingface
{ "login": "jhyuklee", "id": 7017152, "node_id": "MDQ6VXNlcjcwMTcxNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7017152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jhyuklee", "html_url": "https://github.com/jhyuklee", "followers_url": "https://api.github.com/users/jhyuklee/followers", "following_url": "https://api.github.com/users/jhyuklee/following{/other_user}", "gists_url": "https://api.github.com/users/jhyuklee/gists{/gist_id}", "starred_url": "https://api.github.com/users/jhyuklee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jhyuklee/subscriptions", "organizations_url": "https://api.github.com/users/jhyuklee/orgs", "repos_url": "https://api.github.com/users/jhyuklee/repos", "events_url": "https://api.github.com/users/jhyuklee/events{/privacy}", "received_events_url": "https://api.github.com/users/jhyuklee/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @Rocketknight1 ", "Hi @jhyuklee, take a look at the `convert-gtr` notebook in the HuggingFace sentence transformers repos [here](https://huggingface.co/sentence-transformers/gtr-t5-base/blob/main/convert-gtr.ipynb). This shows the process of converting a model from the TF Hub, so you should be able to adapt that to convert other SavedModels.", "Hi @Rocketknight1, this is exactly what I was looking for. Will close this once I try that out." ]
1,706
1,706
null
NONE
null
Hi, I'd like to know how I can convert a TF2 SavedModel (e.g. [gtr-base-1](https://www.kaggle.com/models/google/gtr/frameworks/tensorFlow2/variations/gtr-base/versions/1?tfhub-redirect=true)) to a Huggingface PyTorch model as described in the [README](https://huggingface.co/sentence-transformers/gtr-t5-base). We have a similar model in the same format and would like to use it in Huggingface. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28787/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28786/comments
https://api.github.com/repos/huggingface/transformers/issues/28786/events
https://github.com/huggingface/transformers/pull/28786
2,108,463,918
PR_kwDOCUB6oc5le-7o
28,786
[docs] Correct the statement in the docstirng of compute_transition_scores in generation/utils.py
{ "login": "Ki-Seki", "id": 60967965, "node_id": "MDQ6VXNlcjYwOTY3OTY1", "avatar_url": "https://avatars.githubusercontent.com/u/60967965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ki-Seki", "html_url": "https://github.com/Ki-Seki", "followers_url": "https://api.github.com/users/Ki-Seki/followers", "following_url": "https://api.github.com/users/Ki-Seki/following{/other_user}", "gists_url": "https://api.github.com/users/Ki-Seki/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ki-Seki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ki-Seki/subscriptions", "organizations_url": "https://api.github.com/users/Ki-Seki/orgs", "repos_url": "https://api.github.com/users/Ki-Seki/repos", "events_url": "https://api.github.com/users/Ki-Seki/events{/privacy}", "received_events_url": "https://api.github.com/users/Ki-Seki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? In the `compute_transition_scores` function's docstring, the table in Example 1 should refer to `log probability` instead of `logits`. This is because setting `normalize_logits=True` transforms the `logits` into `log probability`, according: to [generation/utils.py#L1012-L1015](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1012-L1015) Below is the table. ```text ... # | token | token string | logits | probability ... print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}") | 262 | the | -1.414 | 24.33% | 1110 | day | -2.609 | 7.36% | 618 | when | -2.010 | 13.40% | 356 | we | -1.859 | 15.58% | 460 | can | -2.508 | 8.14% ``` You can also easily check this. It's quite clear that the values in the third column are the logarithms of the probability values in the fourth column, i.e., $ln(\text{forth column}) = \text{third column}$. This PR updates the term from `logits` to `log probability` in the table. Without this change, users could become confused when utilizing this feature without referring to the source code. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
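A quick numeric check (values copied from the table above) confirming the relationship the PR describes: exponentiating the third column reproduces the probabilities in the fourth column.

```python
import numpy as np

log_probs = np.array([-1.414, -2.609, -2.010, -1.859, -2.508])
# exp(log probability) recovers the table's probabilities:
# ~[24.3%, 7.4%, 13.4%, 15.6%, 8.1%]
print([f"{p:.2%}" for p in np.exp(log_probs)])
```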
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28786/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28786", "html_url": "https://github.com/huggingface/transformers/pull/28786", "diff_url": "https://github.com/huggingface/transformers/pull/28786.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28786.patch", "merged_at": 1706720850000 }
https://api.github.com/repos/huggingface/transformers/issues/28785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28785/comments
https://api.github.com/repos/huggingface/transformers/issues/28785/events
https://github.com/huggingface/transformers/pull/28785
2,108,441,688
PR_kwDOCUB6oc5le6GM
28,785
Pin Torch to <2.2.0
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28785). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "thanks . merged now to prevent people crying on their PRs" ]
1,706
1,706
1,706
MEMBER
null
PyTorch 2.2.0 was pushed to `pip` about 30 minutes ago and is causing our CI to fail. It isn't showing up on Pytorch.org yet, so this may be an accidental push from the maintainers (the same thing happened with TF 2.16 last week) For now, we pin `torch<2.2.0` to fix the CI.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28785/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28785", "html_url": "https://github.com/huggingface/transformers/pull/28785", "diff_url": "https://github.com/huggingface/transformers/pull/28785.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28785.patch", "merged_at": 1706652072000 }
https://api.github.com/repos/huggingface/transformers/issues/28784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28784/comments
https://api.github.com/repos/huggingface/transformers/issues/28784/events
https://github.com/huggingface/transformers/pull/28784
2,108,418,715
PR_kwDOCUB6oc5le1F4
28,784
Backbone kwargs in config
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28784). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker Sure! \r\n\r\nWe want to remove the hard-coded conditional logic inside of models like DETR and be able to fully configure the backbone's behaviour. The use case is: image I want to create a new model to train. Instead of the default architecture of DETR, I want to use a different timm backbone, and I want the backbone to return different feature maps from the default. \r\n\r\nAt the moment this isn't possible because: \r\n* `use_timm_backbone` behaviour is hard coded, so we can't load different timm backbones easily e.g. [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/detr/modeling_detr.py#L341) i.e. I can't pass timm specific arguments to configure the [backbone e.g. `output_stride`](https://github.com/huggingface/transformers/blob/abf8f54a019ce14b5eaffa68c6dd883be13fe66e/src/transformers/models/detr/modeling_detr.py#L349). \r\n* I can't configure the backbone by passing in `out_indices` to set the feature maps. \r\n\r\nI completely agree that it would be better to have this all as one argument! For context: when the backbones were first added, there were four arguments: \r\n* `backbone` which specifies a checkpoint e.g. `\"facebook/detr-resnet-50\"`\r\n* `use_timm_backbone` - whether or not to load the backbone from timm. \r\n* `use_pretrained_backbone` which was ill-defined, but controlled the behaviour JUST for the timm backbones\r\n* `backbone_config` a model config which defines a backbone e.g. ResNet. \r\n\r\nThe `backbone_config` is just a model config, and used to load backbone models from config:\r\n\r\n```py\r\nbackbone = AutoBackbone.from_config(backbone_config)\r\n```\r\n\r\nand `backbone` is a checkpoint used to load from pretrained:\r\n\r\n```py\r\nbackbone = AutoBackbone.from_pretrained(backbone)\r\n```\r\n\r\nSo `backbone` and `backbone_config` are mutually exclusive. And `backbone_config` is of type `PretrainedConfig` so can only configure transformers models. \r\n\r\nAdding `backbone_kwargs` isn't ideal, but I think it's the simplest solution. The alternative is creating a new backbone config which contains everything. Because the old values are used in 100s of configs, we wouldn't be able to immediately deprecate, so it'd be a case of having code which ingested old config arguments. \r\n\r\nIf you want, I'm happy to code something up and we can decide which is best from the two :) \r\n\r\nFor more context, here's an example of the final step for removing timm in the modeling files: https://github.com/amyeroberts/transformers/pull/114/files#diff-bdc82cb85f491576a99a341adabaf42260eac9cd797d70d0a2c564b0d4ee2930\r\n\r\nYou can see how all we need is a single call `backbone = load_backbone(config)` in a modules init. \r\n\r\n" ]
1,706
1,707
1,707
COLLABORATOR
null
# What does this PR do? This enables configuring the backbones through the config directly e.g. passing in `out_indices` to the backbone. This enables configuring a model's backbone when it's loaded from a pretrained checkpoint. At the moment, this is only possible when loading from a `backbone_config`. Example: ```py model = MaskFormer.from_pretrained( "facebook/maskformer-swin-base-ade", backbone="facebook/maskformer-swin-large-ade", backbone_kwargs={"out_indices": (-2, -1)} ) ``` This is necessary to replace th `timm` code currently there for models like DETR e.g. [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/detr/modeling_detr.py#L341), which is often hard coded. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
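For contrast with the snippet above, a hedged sketch of the pre-existing path the PR body mentions, where feature maps can only be configured through `backbone_config`; the specific config classes used here are illustrative.

```python
from transformers import MaskFormerConfig, SwinConfig

# Before this change, out_indices could only be set when building a model from
# a config, not when loading a pretrained checkpoint via the `backbone` argument.
backbone_config = SwinConfig(out_indices=(-2, -1))
config = MaskFormerConfig(backbone_config=backbone_config)
```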
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28784", "html_url": "https://github.com/huggingface/transformers/pull/28784", "diff_url": "https://github.com/huggingface/transformers/pull/28784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28784.patch", "merged_at": 1707943604000 }
https://api.github.com/repos/huggingface/transformers/issues/28783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28783/comments
https://api.github.com/repos/huggingface/transformers/issues/28783/events
https://github.com/huggingface/transformers/pull/28783
2,108,360,963
PR_kwDOCUB6oc5leomi
28,783
Ability to override clean_code_for_run
{ "login": "w4ffl35", "id": 25737761, "node_id": "MDQ6VXNlcjI1NzM3NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/25737761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/w4ffl35", "html_url": "https://github.com/w4ffl35", "followers_url": "https://api.github.com/users/w4ffl35/followers", "following_url": "https://api.github.com/users/w4ffl35/following{/other_user}", "gists_url": "https://api.github.com/users/w4ffl35/gists{/gist_id}", "starred_url": "https://api.github.com/users/w4ffl35/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/w4ffl35/subscriptions", "organizations_url": "https://api.github.com/users/w4ffl35/orgs", "repos_url": "https://api.github.com/users/w4ffl35/repos", "events_url": "https://api.github.com/users/w4ffl35/events{/privacy}", "received_events_url": "https://api.github.com/users/w4ffl35/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28783). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thanks for the PR, looks alright, make sure to heck the evaluate agent as well! The cleqn code for run is also called there\r\n\r\nThanks will check this today", "Modified call to clean_code_for_run in evaluate here:\r\n\r\nhttps://github.com/huggingface/transformers/pull/28783/commits/e1e7b74ccc23f8811d5047461e537fe38db55096\r\n\r\n@ArthurZucker " ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Adds an interface function called `clean_code_for_run` to the `Agent` class. This function simply returns the results of `clean_code_for_run()`, which was happening on line 349. The reason for this change is to allow developers to override the results of the `clean_code_for_run` function in an easy way. Prior to this change, when I use `Agent` with the Mistral Instruct model, the following results are returned: ``` ==Code generated by the agent== result = add_tool(a=5, b=7) print(f"The result is {result}") ```</s> ``` This would result in an eval error when executing the function. In order to work around this, I did the following: ``` from transformers import LocalAgent as LocalAgentBase class LocalAgent(LocalAgentBase): def format_prompt(self, task, chat_mode=False): task = task.replace("```", "").replace("</s>", "") return task def run(self, task, *, return_code=False, remote=False, **kwargs): prompt = self.format_prompt(task) result = self.generate_one(prompt, stop=["Task:"]) explanation, code = clean_code_for_run(result) self.log(f"==Explanation from the agent==\n{explanation}") """ This entire class exists as a work around in order to run the following line of code. Without this, the evaluation will fail with Mistral Instruct (possibly with other models as well) """ code = code.replace("```", "").replace("</s>", "") self.log(f"\n\n==Code generated by the agent==\n{code}") if not return_code: self.log("\n\n==Result==") self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_tools=self.cached_tools) return evaluate(code, self.cached_tools, state=kwargs.copy()) else: tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) return f"{tool_code}\n{code}" ``` As you can see from the comments, this line: `code = code.replace("```", "").replace("</s>", "")` Is required in order to strip the undesired EOS characters. This may be an issue with Mistral Instruct specifically. Rather than overriding the entire run method as shown above, this new update would allow me to do this instead: ``` from transformers import LocalAgent as LocalAgentBase class LocalAgent(LocalAgentBase): def format_prompt(self, task, chat_mode=False): task = super().format_prompt(task, chat_mode=chat_mode) return task.replace("```", "").replace("</s>", "") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28783/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28783", "html_url": "https://github.com/huggingface/transformers/pull/28783", "diff_url": "https://github.com/huggingface/transformers/pull/28783.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28783.patch", "merged_at": 1707101321000 }
https://api.github.com/repos/huggingface/transformers/issues/28782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28782/comments
https://api.github.com/repos/huggingface/transformers/issues/28782/events
https://github.com/huggingface/transformers/pull/28782
2,108,349,430
PR_kwDOCUB6oc5lemGG
28,782
Add ability to override clean code for run
{ "login": "w4ffl35", "id": 25737761, "node_id": "MDQ6VXNlcjI1NzM3NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/25737761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/w4ffl35", "html_url": "https://github.com/w4ffl35", "followers_url": "https://api.github.com/users/w4ffl35/followers", "following_url": "https://api.github.com/users/w4ffl35/following{/other_user}", "gists_url": "https://api.github.com/users/w4ffl35/gists{/gist_id}", "starred_url": "https://api.github.com/users/w4ffl35/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/w4ffl35/subscriptions", "organizations_url": "https://api.github.com/users/w4ffl35/orgs", "repos_url": "https://api.github.com/users/w4ffl35/repos", "events_url": "https://api.github.com/users/w4ffl35/events{/privacy}", "received_events_url": "https://api.github.com/users/w4ffl35/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Adds an interface function called `clean_code_for_run` to the `Agent` class. This function simply returns the results of `clean_code_for_run()`, which was happening on line 349. The reason for this change is to allow developers to override the results of the `clean_code_for_run` function in an easy way. Prior to this change, when I use `Agent` with the Mistral Instruct model, the following results are returned: ``` ==Code generated by the agent== result = add_tool(a=5, b=7) print(f"The result is {result}") ```</s> ``` This would result in an eval error when executing the function. In order to work around this, I did the following: ``` from transformers import LocalAgent as LocalAgentBase class LocalAgent(LocalAgentBase): def format_prompt(self, task, chat_mode=False): task = task.replace("```", "").replace("</s>", "") return task def run(self, task, *, return_code=False, remote=False, **kwargs): prompt = self.format_prompt(task) result = self.generate_one(prompt, stop=["Task:"]) explanation, code = clean_code_for_run(result) self.log(f"==Explanation from the agent==\n{explanation}") """ This entire class exists as a work around in order to run the following line of code. Without this, the evaluation will fail with Mistral Instruct (possibly with other models as well) """ code = code.replace("```", "").replace("</s>", "") self.log(f"\n\n==Code generated by the agent==\n{code}") if not return_code: self.log("\n\n==Result==") self.cached_tools = resolve_tools(code, self.toolbox, remote=remote, cached_tools=self.cached_tools) return evaluate(code, self.cached_tools, state=kwargs.copy()) else: tool_code = get_tool_creation_code(code, self.toolbox, remote=remote) return f"{tool_code}\n{code}" ``` As you can see from the comments, this line: `code = code.replace("```", "").replace("</s>", "")` Is required in order to strip the undesired EOS characters. This may be an issue with Mistral Instruct specifically. Rather than overriding the entire run method as shown above, this new update would allow me to do this instead: ``` from transformers import LocalAgent as LocalAgentBase class LocalAgent(LocalAgentBase): def format_prompt(self, task, chat_mode=False): task = super().format_prompt(task, chat_mode=chat_mode) return task.replace("```", "").replace("</s>", "") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28782/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28782", "html_url": "https://github.com/huggingface/transformers/pull/28782", "diff_url": "https://github.com/huggingface/transformers/pull/28782.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28782.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28781/comments
https://api.github.com/repos/huggingface/transformers/issues/28781/events
https://github.com/huggingface/transformers/issues/28781
2,108,336,306
I_kwDOCUB6oc59qqiy
28,781
Unable to use torch scripting to export Mask2Former model
{ "login": "rayryeng", "id": 765375, "node_id": "MDQ6VXNlcjc2NTM3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/765375?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rayryeng", "html_url": "https://github.com/rayryeng", "followers_url": "https://api.github.com/users/rayryeng/followers", "following_url": "https://api.github.com/users/rayryeng/following{/other_user}", "gists_url": "https://api.github.com/users/rayryeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/rayryeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rayryeng/subscriptions", "organizations_url": "https://api.github.com/users/rayryeng/orgs", "repos_url": "https://api.github.com/users/rayryeng/repos", "events_url": "https://api.github.com/users/rayryeng/events{/privacy}", "received_events_url": "https://api.github.com/users/rayryeng/received_events", "type": "User", "site_admin": false }
[ { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" } ]
open
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[ "Hi @rayryeng, thanks for raising this issue and for the detailed PR description, it really helps us address issues ๐Ÿค— \r\n\r\nNo, the model isn't compatible with `torch.script` yet unfortunately. I think the correct type is being passed, it's just the model has several assumptions / logic flows / incorrect typing which are OK for eager execution but are not when trying to compile the functions. \r\n\r\nI don't have the bandwidth at the moment to convert the modeling files to make them compatible, but very happy to review any PRs from anyone in the community who would like to tackle this issue. ", "Hi @amyeroberts - Thanks so much! My original intent in using torch scripting was because I am working on a real-time system, and it's a bit prohibitive for us to use a traced model as it will inevitably require some warmup for the model to compile before it can be used. I figured that scripting would eliminate the warmup required. For now, I think I can develop a workaround where we can warm up the model in a separate thread while we launch other things. Thankfully, panoptic segmentation is not the first thing to be done when we run our application. This is not a blocker for us, but more of a nice to have now.\r\n\r\nIf there's currently no bandwidth to do this, I'd be happy to try a first attempt, as it would benefit me as well as the community. I appreciate you having a look at this either way!", "@rayryeng Great! ๐Ÿค— Having this enabled for this model will be very impactful for lots of users in the community. Feel free to open a PR and ping me for review or any other questions in the meantime " ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.34.1 - Platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.17 - Python version: 3.8.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, a GTX 1080 with 8 GB of VRAM - Using distributed or parallel set-up in script?: No. ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am attempting to export the [Mask2Former model available in huggingface](https://huggingface.co/facebook/mask2former-swin-base-coco-panoptic) through [`torch.jit.script`](https://pytorch.org/docs/stable/generated/torch.jit.script.html). Here's a minimal reproducible example: ```python import torch from transformers import Mask2FormerForUniversalSegmentation device = "cuda" if torch.cuda.is_available() else "cpu" model = Mask2FormerForUniversalSegmentation.from_pretrained( "facebook/mask2former-swin-base-coco-panoptic", torchscript=True ).to(device) scripted_model = torch.jit.script(model) torch.jit.save(scripted_model, 'mask2former.pt') ``` By doing this, I get the following error using torch scripting (path to the offending file has been obfuscated for brevity): ``` torch.jit.frontend.NotSupportedError: Comprehension ifs are not supported yet: File "/home/.../huggingface/lib/python3.8/site-packages/transformers/models/mask2former/modeling_mask2former.py", line 2559 if not return_dict: output = tuple(v for v in output.values() if v is not None) if loss is not None: output = ((loss)) + output ``` As a hack, I've changed my local installation so that comprehension ifs are removed: ``` if not return_dict: outputs = [] for v in output.values(): if v is not None: outputs.append(v) output = tuple(outputs) ``` This also occurs at line 2306 in the same file, so I've made the same changes there. Once I fix this, there is an error in the forward method for the SWIN backbone: ``` RuntimeError: 'Optional[Tensor]' object has no attribute or method 'shape'.: File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 313 def forward(self, pixel_values: Optional[torch.FloatTensor]) -> Tuple[torch.Tensor, Tuple[int]]: _, num_channels, height, width = pixel_values.shape ~~~~~~~~~~~~~~~~~~ <--- HERE if num_channels != self.num_channels: raise ValueError( ``` The forward method for the SWIN backbone is confusing, as the input type is declared to be `Optional` but the output type is not. The definition of this method clearly indicates that a concrete tuple is to be returned. As a final experiment, I've removed the `Optional` type declaration and tried to export it one more time: ``` aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor: Expected a value of type 'List[int]' for argument 'pad' but instead found type 'Tuple[int, Tensor]'. 
: File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 306 if width % self.patch_size[1] != 0: pad_values = (0, self.patch_size[1] - width % self.patch_size[1]) pixel_values = nn.functional.pad(pixel_values, pad_values) ~~~~~~~~~~~~~~~~~ <--- HERE if height % self.patch_size[0] != 0: pad_values = (0, 0, 0, self.patch_size[0] - height % self.patch_size[0]) 'SwinPatchEmbeddings.maybe_pad' is being compiled since it was called from 'SwinPatchEmbeddings.forward' File "/home/.../anaconda3/envs/huggingface/lib/python3.8/site-packages/transformers/models/swin/modeling_swin.py", line 319 ) # pad the input to be divisible by self.patch_size, if needed pixel_values = self.maybe_pad(pixel_values, height, width) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE embeddings = self.projection(pixel_values) _, _, height, width = embeddings.shape ``` It seems that what is being put into the forward pass is not, in fact, a `torch.Tensor` when being scripted. Is torch scripting this model not supported at this time or am I missing something? ### Expected behavior The model successfully being exported to disk with torch scripting.
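For context on the last error above, TorchScript requires `nn.functional.pad` to receive a `List[int]`, whereas the traced code builds a tuple that mixes in a Tensor. A rough, illustrative rewrite of such a padding helper (not the actual patch to `modeling_swin.py`) could look like this:

```python
import torch
import torch.nn as nn


def maybe_pad(pixel_values: torch.Tensor, height: int, width: int,
              patch_height: int, patch_width: int) -> torch.Tensor:
    # Compute the padding amounts as plain ints so TorchScript sees a List[int].
    pad_right = (patch_width - width % patch_width) % patch_width
    pad_bottom = (patch_height - height % patch_height) % patch_height
    if pad_right != 0 or pad_bottom != 0:
        # Padding order for the last two dims is (left, right, top, bottom).
        pixel_values = nn.functional.pad(pixel_values, [0, pad_right, 0, pad_bottom])
    return pixel_values
```

The point is only to show the kind of change scripting demands; the real fix would have to preserve the module's existing behaviour and tests.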
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28781/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28781/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28780/comments
https://api.github.com/repos/huggingface/transformers/issues/28780/events
https://github.com/huggingface/transformers/pull/28780
2,108,249,014
PR_kwDOCUB6oc5leQT9
28,780
Further pin pytest version (in a temporary way)
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A [run](https://app.circleci.com/pipelines/github/huggingface/transformers/83431/workflows/145ee093-f7cb-4912-a64c-6b7f6ca74e98/jobs/1075839) shows we do get the desired version (pytest==7.4.4) now. (see `Show installed libraries and their versions`)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28780). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? #28758 tried to pin the pytest version to `pytest<8.0.0`; however, the `doc_test_job` job contains the command > pip install --upgrade --upgrade-strategy eager pytest pytest-sugar which brings pytest back to `8.0.0` after the desired version has been installed with `pip install -e .[dev]`. This PR changes the command to > pip install --upgrade --upgrade-strategy eager 'pytest<8.0.0' pytest-sugar This is not an ideal solution, but it fixes the problem quickly; I will look into a better long-term solution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28780/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28780", "html_url": "https://github.com/huggingface/transformers/pull/28780", "diff_url": "https://github.com/huggingface/transformers/pull/28780.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28780.patch", "merged_at": 1706633329000 }
https://api.github.com/repos/huggingface/transformers/issues/28779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28779/comments
https://api.github.com/repos/huggingface/transformers/issues/28779/events
https://github.com/huggingface/transformers/pull/28779
2,108,242,406
PR_kwDOCUB6oc5leO29
28,779
Prevent MLflow exception from disrupting training
{ "login": "codiceSpaghetti", "id": 71273533, "node_id": "MDQ6VXNlcjcxMjczNTMz", "avatar_url": "https://avatars.githubusercontent.com/u/71273533?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codiceSpaghetti", "html_url": "https://github.com/codiceSpaghetti", "followers_url": "https://api.github.com/users/codiceSpaghetti/followers", "following_url": "https://api.github.com/users/codiceSpaghetti/following{/other_user}", "gists_url": "https://api.github.com/users/codiceSpaghetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/codiceSpaghetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codiceSpaghetti/subscriptions", "organizations_url": "https://api.github.com/users/codiceSpaghetti/orgs", "repos_url": "https://api.github.com/users/codiceSpaghetti/repos", "events_url": "https://api.github.com/users/codiceSpaghetti/events{/privacy}", "received_events_url": "https://api.github.com/users/codiceSpaghetti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
This PR prevents a training in progress from **being interrupted** by a problem with the MLflow server (e.g., lack of connectivity) and makes the code more **fault-tolerant**, as also discussed in [this](https://github.com/mlflow/mlflow/issues/1550) MLflow issue. This is achieved by simply setting the `synchronous` parameter of `mlflow.log_metrics` to `False` (previously the default value of `True` was used). In this way, as described in the [MLflow documentation](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_metrics), the function logs the metrics asynchronously and returns a future representing the logging operation, instead of blocking until the log succeeds.
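A minimal sketch of what the change amounts to on the MLflow side (illustrative only; the metric names, step, and run setup below are made up and not part of the PR):

```python
import mlflow

with mlflow.start_run():
    # With synchronous=False, log_metrics enqueues the metrics and returns a
    # RunOperations future instead of blocking, so a slow or unreachable
    # tracking server cannot interrupt the training loop that calls it.
    ops = mlflow.log_metrics({"loss": 0.42, "eval_accuracy": 0.87}, step=100, synchronous=False)
    # Optionally wait for pending async logging, e.g. at the end of training.
    ops.wait()
```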
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28779/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28779", "html_url": "https://github.com/huggingface/transformers/pull/28779", "diff_url": "https://github.com/huggingface/transformers/pull/28779.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28779.patch", "merged_at": 1706663444000 }
https://api.github.com/repos/huggingface/transformers/issues/28778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28778/comments
https://api.github.com/repos/huggingface/transformers/issues/28778/events
https://github.com/huggingface/transformers/issues/28778
2,108,209,721
I_kwDOCUB6oc59qLo5
28,778
OWL-VIT Finetuning code for custom dataset in Hugging Face
{ "login": "solomonmanuelraj", "id": 25194971, "node_id": "MDQ6VXNlcjI1MTk0OTcx", "avatar_url": "https://avatars.githubusercontent.com/u/25194971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/solomonmanuelraj", "html_url": "https://github.com/solomonmanuelraj", "followers_url": "https://api.github.com/users/solomonmanuelraj/followers", "following_url": "https://api.github.com/users/solomonmanuelraj/following{/other_user}", "gists_url": "https://api.github.com/users/solomonmanuelraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/solomonmanuelraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/solomonmanuelraj/subscriptions", "organizations_url": "https://api.github.com/users/solomonmanuelraj/orgs", "repos_url": "https://api.github.com/users/solomonmanuelraj/repos", "events_url": "https://api.github.com/users/solomonmanuelraj/events{/privacy}", "received_events_url": "https://api.github.com/users/solomonmanuelraj/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada ", "Hi @solomonmanuelraj \r\nThanks for the issue, OwlViT cannot be used out of the box with HF trainer, you need either to subclass it with a new class and overwrite the method that computes the loss or design your own custom training loop.\r\nYou can have a look at this notebook from @NielsRogge : https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb and start from it", "@younesbelkada thanks for your quick update. ", "Hi @younesbelkada \r\n\r\nThanks for your help. As per your input, i created a sub class for HF trainer and implemented the custom loss function ( referred https://www.kaggle.com/code/bibhasmondal96/detr-from-scratch & https://github.com/facebookresearch/detr/blob/main/models/matcher.py for the custom loss implementation) when i am running with CPPE dataset i am receiving the following error. I am running this code in a single GPU machine. \r\n\r\nprogram code and error list attached.\r\n\r\n```python\r\nimport torch\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nprint(device)\r\n\r\nfrom datasets import load_dataset\r\ncppe5 = load_dataset(\"cppe-5\")\r\n\r\ncategories = cppe5[\"train\"].features[\"objects\"].feature[\"category\"].names\r\nid2label = {index: x for index, x in enumerate(categories, start=0)}\r\nlabel2id = {v: k for k, v in id2label.items()}\r\n\r\nid2label\r\ntext_inputs =list(id2label.values())\r\nprint(text_inputs)\r\n\r\nremove_idx = [590, 821, 822, 875, 876, 878, 879]\r\nkeep = [i for i in range(len(cppe5[\"train\"])) if i not in remove_idx]\r\n\r\n## considering only 100 training images\r\nkeep = keep[0:100]\r\ncppe5[\"train\"] = cppe5[\"train\"].select(keep)\r\n\r\nfrom transformers import AutoImageProcessor\r\nfrom transformers import AutoProcessor\r\n\r\nimage_processor = AutoImageProcessor.from_pretrained(\"facebook/detr-resnet-50\")\r\n\r\ncheckpoint = \"google/owlvit-base-patch32\"\r\nprocessor = AutoProcessor.from_pretrained(checkpoint)\r\n\r\nfrom transformers import AutoModelForZeroShotObjectDetection\r\n\r\nmodel = AutoModelForZeroShotObjectDetection.from_pretrained(\r\n checkpoint,\r\n id2label=id2label,\r\n label2id=label2id,\r\n ignore_mismatched_sizes=True,\r\n)\r\nmodel.to(device)\r\n\r\n\r\n\r\nimport albumentations\r\nimport numpy as np\r\n\r\ntransform = albumentations.Compose(\r\n [\r\n albumentations.Resize(480, 480),\r\n albumentations.HorizontalFlip(p=1.0),\r\n albumentations.RandomBrightnessContrast(p=1.0),\r\n ],\r\n bbox_params=albumentations.BboxParams(format=\"coco\", label_fields=[\"category\"]),\r\n)\r\n\r\ndef formatted_anns(image_id, category, area, bbox):\r\n annotations = []\r\n for i in range(0, len(category)):\r\n new_ann = {\r\n \"image_id\": image_id,\r\n \"category_id\": category[i],\r\n \"isCrowd\": 0,\r\n \"area\": area[i],\r\n \"bbox\": list(bbox[i]),\r\n }\r\n annotations.append(new_ann)\r\n\r\n return annotations\r\n\r\n# transforming a batch\r\ndef transform_aug_ann(examples):\r\n image_ids = examples[\"image_id\"]\r\n images, bboxes, area, categories = [], [], [], []\r\n transformed_data = []\r\n for image, objects in zip(examples[\"image\"], examples[\"objects\"]):\r\n image = np.array(image.convert(\"RGB\"))[:, :, ::-1]\r\n out = transform(image=image, bboxes=objects[\"bbox\"], category=objects[\"category\"])\r\n\r\n area.append(objects[\"area\"])\r\n images.append(out[\"image\"])\r\n bboxes.append(out[\"bboxes\"])\r\n categories.append(out[\"category\"])\r\n 
transformed_data.append(processor(text=text_inputs, images=image, return_tensors=\"pt\"))\r\n\r\n \r\n return {\"transformed_data\":transformed_data}\r\n\r\n# transforming a batch\r\ndef transform_aug_ann_labels(examples):\r\n image_ids = examples[\"image_id\"]\r\n images, bboxes, area, categories = [], [], [], []\r\n for image, objects in zip(examples[\"image\"], examples[\"objects\"]):\r\n image = np.array(image.convert(\"RGB\"))[:, :, ::-1]\r\n out = transform(image=image, bboxes=objects[\"bbox\"], category=objects[\"category\"])\r\n\r\n area.append(objects[\"area\"])\r\n images.append(out[\"image\"])\r\n bboxes.append(out[\"bboxes\"])\r\n categories.append(out[\"category\"])\r\n\r\n targets = [\r\n {\"image_id\": id_, \"annotations\": formatted_anns(id_, cat_, ar_, box_)}\r\n for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)\r\n ]\r\n\r\n return image_processor(images=images, annotations=targets, return_tensors=\"pt\")\r\n\r\ntransform_1 = cppe5[\"train\"].with_transform(transform_aug_ann)\r\ntransform_2 = cppe5[\"train\"].with_transform(transform_aug_ann_labels)\r\n\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndata = []\r\nfor i in range(len(transform_1)):\r\n dict_ = {}\r\n dict_[\"input_ids\"] = transform_1[i][\"transformed_data\"][\"input_ids\"]\r\n dict_[\"attention_mask\"] = transform_1[i][\"transformed_data\"][\"attention_mask\"]\r\n dict_[\"pixel_values\"] = transform_1[i][\"transformed_data\"][\"pixel_values\"][0]\r\n dict_[\"labels\"] = transform_2[i][\"labels\"]\r\n data.append(dict_)\r\n\r\n# Preprocessed Training Data\r\ntrain_dataset = Dataset.from_list(data)\r\ntrain_dataset.features\r\n\r\n# Using Detr-Loss calculation https://github.com/facebookresearch/detr/blob/main/models/matcher.py\r\n# https://www.kaggle.com/code/bibhasmondal96/detr-from-scratch\r\n\r\nclass BoxUtils(object):\r\n @staticmethod\r\n def box_cxcywh_to_xyxy(x):\r\n x_c, y_c, w, h = x.unbind(-1)\r\n b = [(x_c - 0.5 * w), (y_c - 0.5 * h),\r\n (x_c + 0.5 * w), (y_c + 0.5 * h)]\r\n return torch.stack(b, dim=-1)\r\n\r\n @staticmethod\r\n def box_xyxy_to_cxcywh(x):\r\n x0, y0, x1, y1 = x.unbind(-1)\r\n b = [(x0 + x1) / 2, (y0 + y1) / 2,\r\n (x1 - x0), (y1 - y0)]\r\n return torch.stack(b, dim=-1)\r\n\r\n @staticmethod\r\n def rescale_bboxes(out_bbox, size):\r\n img_h, img_w = size\r\n b = BoxUtils.box_cxcywh_to_xyxy(out_bbox)\r\n b = b * torch.tensor([img_w, img_h, img_w, img_h], dtype=torch.float32)\r\n return b\r\n\r\n @staticmethod\r\n def box_area(boxes):\r\n \"\"\"\r\n Computes the area of a set of bounding boxes, which are specified by its\r\n (x1, y1, x2, y2) coordinates.\r\n Arguments:\r\n boxes (Tensor[N, 4]): boxes for which the area will be computed. 
They\r\n are expected to be in (x1, y1, x2, y2) format\r\n Returns:\r\n area (Tensor[N]): area for each box\r\n \"\"\"\r\n return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\r\n \r\n @staticmethod\r\n # modified from torchvision to also return the union\r\n def box_iou(boxes1, boxes2):\r\n area1 = BoxUtils.box_area(boxes1)\r\n area2 = BoxUtils.box_area(boxes2)\r\n\r\n lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\r\n rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\r\n\r\n wh = (rb - lt).clamp(min=0) # [N,M,2]\r\n inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\r\n\r\n union = area1[:, None] + area2 - inter\r\n\r\n iou = inter / union\r\n return iou, union\r\n\r\n @staticmethod\r\n def generalized_box_iou(boxes1, boxes2):\r\n \"\"\"\r\n Generalized IoU from https://giou.stanford.edu/\r\n The boxes should be in [x0, y0, x1, y1] format\r\n Returns a [N, M] pairwise matrix, where N = len(boxes1)\r\n and M = len(boxes2)\r\n \"\"\"\r\n # degenerate boxes gives inf / nan results\r\n # so do an early check\r\n assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\r\n assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\r\n iou, union = BoxUtils.box_iou(boxes1, boxes2)\r\n\r\n lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])\r\n rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\r\n\r\n wh = (rb - lt).clamp(min=0) # [N,M,2]\r\n area = wh[:, :, 0] * wh[:, :, 1]\r\n\r\n return iou - (area - union) / area\r\n```\r\n\r\n```\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nfrom scipy.optimize import linear_sum_assignment\r\n\r\nclass HungarianMatcher(nn.Module):\r\n \"\"\"This class computes an assignment between the targets and the predictions of the network\r\n For efficiency reasons, the targets don't include the no_object. Because of this, in general,\r\n there are more predictions than targets. 
In this case, we do a 1-to-1 matching of the best predictions,\r\n while the others are un-matched (and thus treated as non-objects).\r\n \"\"\"\r\n\r\n def __init__(self, cost_class: float = 1, cost_bbox: float = 1, cost_giou: float = 1):\r\n \"\"\"Creates the matcher\r\n Params:\r\n cost_class: This is the relative weight of the classification error in the matching cost\r\n cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost\r\n cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost\r\n \"\"\"\r\n super().__init__()\r\n self.cost_class = cost_class\r\n self.cost_bbox = cost_bbox\r\n self.cost_giou = cost_giou\r\n assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, \"all costs cant be 0\"\r\n\r\n @torch.no_grad()\r\n def forward(self, outputs, targets):\r\n \"\"\" Performs the matching\r\n Params:\r\n outputs: This is a dict that contains at least these entries:\r\n \"pred_logits\": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits\r\n \"pred_boxes\": Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates\r\n targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:\r\n \"labels\": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth\r\n objects in the target) containing the class labels\r\n \"boxes\": Tensor of dim [num_target_boxes, 4] containing the target box coordinates\r\n Returns:\r\n A list of size batch_size, containing tuples of (index_i, index_j) where:\r\n - index_i is the indices of the selected predictions (in order)\r\n - index_j is the indices of the corresponding selected targets (in order)\r\n For each batch element, it holds:\r\n len(index_i) = len(index_j) = min(num_queries, num_target_boxes)\r\n \"\"\"\r\n print(outputs.keys())\r\n bs, num_queries = outputs[\"logits\"].shape[:2]\r\n\r\n # We flatten to compute the cost matrices in a batch\r\n out_prob = outputs[\"logits\"].flatten(0, 1).softmax(-1) # [batch_size * num_queries, num_classes]\r\n out_bbox = outputs[\"pred_boxes\"].flatten(0, 1) # [batch_size * num_queries, 4]\r\n\r\n # Also concat the target labels and boxes\r\n tgt_ids = torch.cat([v[\"class_labels\"] for v in targets])\r\n print(\"Index \",type(tgt_ids))\r\n print(tgt_ids)\r\n tgt_ids = tgt_ids.int()\r\n print(\"Index \",type(tgt_ids))\r\n print(tgt_ids)\r\n\r\n tgt_bbox = torch.cat([v[\"boxes\"] for v in targets])\r\n\r\n # Compute the classification cost. 
Contrary to the loss, we don't use the NLL,\r\n # but approximate it in 1 - proba[target class].\r\n # The 1 is a constant that doesn't change the matching, it can be ommitted.\r\n cost_class = -out_prob[:, tgt_ids]\r\n\r\n # Compute the L1 cost between boxes\r\n cost_bbox = torch.cdist(out_bbox, tgt_bbox, p=1)\r\n\r\n # Compute the giou cost betwen boxes\r\n cost_giou = -BoxUtils.generalized_box_iou(\r\n BoxUtils.box_cxcywh_to_xyxy(out_bbox),\r\n BoxUtils.box_cxcywh_to_xyxy(tgt_bbox)\r\n )\r\n\r\n # Final cost matrix\r\n C = self.cost_bbox * cost_bbox + self.cost_class * cost_class + self.cost_giou * cost_giou\r\n C = C.view(bs, num_queries, -1).cpu()\r\n\r\n sizes = [len(v[\"boxes\"]) for v in targets]\r\n indices = [linear_sum_assignment(c[i]) for i, c in enumerate(C.split(sizes, -1))]\r\n return [(torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64)) for i, j in indices]\r\n\r\nclass SetCriterion(nn.Module):\r\n \"\"\" This class computes the loss for DETR.\r\n The process happens in two steps:\r\n 1) we compute hungarian assignment between ground truth boxes and the outputs of the model\r\n 2) we supervise each pair of matched ground-truth / prediction (supervise class and box)\r\n \"\"\"\r\n def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses):\r\n \"\"\" Create the criterion.\r\n Parameters:\r\n num_classes: number of object categories, omitting the special no-object category\r\n matcher: module able to compute a matching between targets and proposals\r\n weight_dict: dict containing as key the names of the losses and as values their relative weight.\r\n eos_coef: relative classification weight applied to the no-object category\r\n losses: list of all the losses to be applied. See get_loss for list of available losses.\r\n \"\"\"\r\n super().__init__()\r\n self.num_classes = num_classes\r\n self.matcher = matcher\r\n self.weight_dict = weight_dict\r\n self.eos_coef = eos_coef\r\n self.losses = losses\r\n empty_weight = torch.ones(self.num_classes + 1)\r\n empty_weight[-1] = self.eos_coef\r\n self.register_buffer('empty_weight', empty_weight)\r\n\r\n def loss_labels(self, outputs, targets, indices, num_boxes):\r\n \"\"\"Classification loss (NLL)\r\n targets dicts must contain the key \"labels\" containing a tensor of dim [nb_target_boxes]\r\n \"\"\"\r\n print(\"loss_labels\",outputs.keys())\r\n assert 'logits' in outputs\r\n src_logits = outputs['logits']\r\n\r\n idx = self._get_src_permutation_idx(indices)\r\n target_classes_o = torch.cat([t[\"class_labels\"][J] for t, (_, J) in zip(targets, indices)]).to(torch.int64)\r\n target_classes = torch.full(src_logits.shape[:2], self.num_classes,\r\n dtype=torch.int64, device=src_logits.device).to(torch.int64)\r\n target_classes[idx] = target_classes_o\r\n\r\n loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)\r\n losses = {'loss_ce': loss_ce}\r\n return losses\r\n\r\n @torch.no_grad()\r\n def loss_cardinality(self, outputs, targets, indices, num_boxes):\r\n \"\"\" Compute the cardinality error, ie the absolute error in the number of predicted non-empty boxes\r\n This is not really a loss, it is intended for logging purposes only. 
It doesn't propagate gradients\r\n \"\"\"\r\n pred_logits = outputs['logits']\r\n device = pred_logits.device\r\n tgt_lengths = torch.as_tensor([len(v[\"class_labels\"]) for v in targets], device=device)\r\n # Count the number of predictions that are NOT \"no-object\" (which is the last class)\r\n card_pred = (pred_logits.argmax(-1) != pred_logits.shape[-1] - 1).sum(1)\r\n card_err = F.l1_loss(card_pred.float(), tgt_lengths.float())\r\n losses = {'cardinality_error': card_err}\r\n return losses\r\n\r\n def loss_boxes(self, outputs, targets, indices, num_boxes):\r\n \"\"\"Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss\r\n targets dicts must contain the key \"boxes\" containing a tensor of dim [nb_target_boxes, 4]\r\n The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size.\r\n \"\"\"\r\n assert 'pred_boxes' in outputs\r\n idx = self._get_src_permutation_idx(indices)\r\n src_boxes = outputs['pred_boxes'][idx]\r\n target_boxes = torch.cat([t['boxes'][i] for t, (_, i) in zip(targets, indices)], dim=0)\r\n\r\n loss_bbox = F.l1_loss(src_boxes, target_boxes, reduction='none')\r\n\r\n losses = {}\r\n losses['loss_bbox'] = loss_bbox.sum() / num_boxes\r\n\r\n loss_giou = 1 - torch.diag(BoxUtils.generalized_box_iou(\r\n BoxUtils.box_cxcywh_to_xyxy(src_boxes),\r\n BoxUtils.box_cxcywh_to_xyxy(target_boxes))\r\n )\r\n losses['loss_giou'] = loss_giou.sum() / num_boxes\r\n return losses\r\n\r\n def _get_src_permutation_idx(self, indices):\r\n # permute predictions following indices\r\n batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])\r\n src_idx = torch.cat([src for (src, _) in indices])\r\n return batch_idx, src_idx\r\n\r\n def _get_tgt_permutation_idx(self, indices):\r\n # permute targets following indices\r\n batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])\r\n tgt_idx = torch.cat([tgt for (_, tgt) in indices])\r\n return batch_idx, tgt_idx\r\n\r\n def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):\r\n loss_map = {\r\n 'labels': self.loss_labels,\r\n 'cardinality': self.loss_cardinality,\r\n 'boxes': self.loss_boxes,\r\n }\r\n assert loss in loss_map, f'do you really want to compute {loss} loss?'\r\n return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs)\r\n\r\n def forward(self, outputs, targets):\r\n \"\"\" This performs the loss computation.\r\n Parameters:\r\n outputs: dict of tensors, see the output specification of the model for the format\r\n targets: list of dicts, such that len(targets) == batch_size.\r\n The expected keys in each dict depends on the losses applied, see each loss' doc\r\n \"\"\"\r\n print(\"output type\")\r\n print(type(outputs))\r\n print(\"target type\")\r\n print(targets)\r\n outputs_without_aux = {k: v for k, v in outputs.items() if k != 'aux_outputs'}\r\n\r\n # Retrieve the matching between the outputs of the last layer and the targets\r\n indices = self.matcher(outputs_without_aux, targets)\r\n\r\n # Compute the average number of target boxes accross all nodes, for normalization purposes\r\n num_boxes = sum(len(t[\"class_labels\"]) for t in targets)\r\n num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=next(iter(outputs.values())).device)\r\n\r\n # Compute all the requested losses\r\n losses = {}\r\n for loss in self.losses:\r\n losses.update(self.get_loss(loss, outputs, targets, indices, num_boxes))\r\n\r\n return losses\r\n \r\ndef collate_fn(batch):\r\n 
#input_ids = torch.Tensor([item[\"input_ids\"].tolist() for item in batch]).int()\r\n input_ids = torch.Tensor([item[\"input_ids\"] for item in batch]).int()\r\n input_ids = input_ids.to(device)\r\n # attention_mask = torch.Tensor([item[\"attention_mask\"].tolist() for item in batch]).int()\r\n attention_mask = torch.Tensor([item[\"attention_mask\"] for item in batch]).int()\r\n attention_mask = attention_mask.to(device)\r\n # pixel_values = torch.Tensor([item[\"pixel_values\"].tolist() for item in batch])\r\n pixel_values = torch.Tensor([item[\"pixel_values\"] for item in batch])\r\n pixel_values = pixel_values.to(device)\r\n labels = []\r\n for item in batch:\r\n for (key, value) in item[\"labels\"].items():\r\n item[\"labels\"][key] = torch.Tensor(value).to(device)\r\n labels.append(item[\"labels\"])\r\n \r\n batch = {}\r\n batch[\"input_ids\"] = input_ids\r\n batch[\"attention_mask\"] = attention_mask\r\n batch[\"pixel_values\"] = pixel_values\r\n batch[\"labels\"] = labels\r\n #print(batch)\r\n return batch\r\n\r\n\r\n\r\nfrom transformers import TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"owlvit-base-patch32_FT_cppe5\",\r\n per_device_train_batch_size=1,\r\n num_train_epochs=2,\r\n fp16=True,\r\n save_steps=200,\r\n logging_steps=50,\r\n learning_rate=1e-5,\r\n weight_decay=1e-4,\r\n save_total_limit=2,\r\n remove_unused_columns=False,\r\n push_to_hub=False,\r\n dataloader_pin_memory=False,\r\n gradient_accumulation_steps=1\r\n)\r\n\r\n# custom loss\r\ndef custom_loss(logits, labels):\r\n num_classes = 4\r\n matcher = HungarianMatcher(cost_class = 1, cost_bbox = 5, cost_giou = 2)\r\n weight_dict = {'loss_ce': 1, 'loss_bbox': 5, 'loss_giou': 2}\r\n losses = ['labels', 'boxes', 'cardinality']\r\n criterion = SetCriterion(num_classes, matcher=matcher, weight_dict=weight_dict, eos_coef=0.1, losses=losses)\r\n criterion.to(device)\r\n print(\"logits\",type(logits))\r\n print(\"labels\",type(labels))\r\n loss = criterion(logits, labels)\r\n return loss\r\n\r\n# subclass trainer\r\nfrom transformers import Trainer\r\n\r\nclass CustomTrainer(Trainer):\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n labels = inputs.pop(\"labels\")\r\n\r\n inputs[\"input_ids\"] = inputs[\"input_ids\"][0]\r\n inputs[\"attention_mask\"] = inputs[\"attention_mask\"][0]\r\n print(inputs[\"attention_mask\"].shape)\r\n outputs = model(**inputs, return_dict=True)\r\n\r\n print(outputs.keys())\r\n print(outputs.logits)\r\n print(labels)\r\n print(\"before custom loss calling\")\r\n loss = custom_loss(outputs, labels)\r\n print(\"after custom loss calling\")\r\n return (loss, outputs) if return_outputs else loss\r\n\r\n# use new trainer\r\ntrainer = CustomTrainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=collate_fn,\r\n train_dataset=train_dataset,\r\n tokenizer=processor\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\n\r\n**Error List**\r\n\r\n```\r\nTypeError Traceback (most recent call last)\r\nCell In[25], [line 1](vscode-notebook-cell:?execution_count=25&line=1)\r\n----> [1](vscode-notebook-cell:?execution_count=25&line=1) trainer.train()\r\n\r\nFile [~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1537](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1537), in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 
[1535](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1535) hf_hub_utils.enable_progress_bars()\r\n [1536](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1536) else:\r\n-> [1537](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1537) return inner_training_loop(\r\n [1538](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1538) args=args,\r\n [1539](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1539) resume_from_checkpoint=resume_from_checkpoint,\r\n [1540](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1540) trial=trial,\r\n [1541](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1541) ignore_keys_for_eval=ignore_keys_for_eval,\r\n [1542](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1542) )\r\n\r\nFile [~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1854](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1854), in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n [1851](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1851) self.control = self.callback_handler.on_step_begin(args, self.state, self.control)\r\n [1853](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1853) with self.accelerator.accumulate(model):\r\n-> [1854](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1854) tr_loss_step = self.training_step(model, inputs)\r\n 
[1856](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1856) if (\r\n [1857](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1857) args.logging_nan_inf_filter\r\n [1858](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1858) and not is_torch_tpu_available()\r\n [1859](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1859) and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))\r\n [1860](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1860) ):\r\n [1861](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1861) # if loss is nan or inf simply add the average of previous logged losses\r\n [1862](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1862) tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)\r\n\r\nFile [~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2744](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2744), in Trainer.training_step(self, model, inputs)\r\n [2742](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2742) scaled_loss.backward()\r\n [2743](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2743) else:\r\n-> [2744](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2744) self.accelerator.backward(loss)\r\n [2746](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2746) return loss.detach() / self.args.gradient_accumulation_steps\r\n\r\nFile 
[~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1901](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1901), in Accelerator.backward(self, loss, **kwargs)\r\n [1899](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1899) print(\"self.gradient_accumulation_steps :- \",self.gradient_accumulation_steps)\r\n [1900](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1900) print(\"self.gradient_accumulation_steps type:- \",type(self.gradient_accumulation_steps))\r\n-> [1901](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1901) loss = loss / self.gradient_accumulation_steps\r\n [1902](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1902) if self.distributed_type == DistributedType.DEEPSPEED:\r\n [1903](https://vscode-remote+ssh-002dremote-002bsolomon-002ddesktop.vscode-resource.vscode-cdn.net/home/lfo2kor/foundation_models/lora/vision_models/~/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py:1903) self.deepspeed_engine_wrapped.backward(loss, **kwargs)\r\n\r\nTypeError: unsupported operand type(s) for /: 'dict' and 'int'\r\n\r\n\r\n#######################################################################################\r\n\r\n$ python /home/lfo2kor/foundation_models/lora/vision_models/owl_vit_cus_trainer.py\r\ncuda\r\n['Coverall', 'Face_Shield', 'Gloves', 'Goggles', 'Mask']\r\nCould not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.\r\nThe `max_size` parameter is deprecated and will be removed in v4.26. 
Please specify in `size['longest_edge'] instead`.\r\n 0%| | 0/200 [00:00<?, ?it/s]torch.Size([5, 16])\r\nodict_keys(['logits', 'pred_boxes', 'text_embeds', 'image_embeds', 'class_embeds', 'text_model_output', 'vision_model_output'])\r\ntensor([[[ -8.9141, -8.8359, -8.8750, -8.7734, -9.2188],\r\n [ -9.8984, -9.9844, -9.4922, -9.6719, -10.5625],\r\n [-10.1406, -10.2344, -9.9375, -9.8516, -10.8438],\r\n ...,\r\n [-11.0312, -10.8125, -9.4922, -10.1797, -11.2500],\r\n [-10.9844, -10.6875, -10.7109, -9.8047, -10.7266],\r\n [-16.1094, -15.5938, -15.7812, -15.3125, -15.8438]]], device='cuda:0',\r\n grad_fn=<ToCopyBackward0>)\r\n[{'area': tensor([32777.7773, 21233.3320, 21000.0000], device='cuda:0'), 'boxes': tensor([[0.4872, 0.2912, 0.0625, 0.1105],\r\n [0.4328, 0.7111, 0.0525, 0.0852],\r\n [0.5979, 0.3900, 0.0450, 0.0983]], device='cuda:0'), 'class_labels': tensor([4., 2., 2.], device='cuda:0'), 'image_id': tensor([429.], device='cuda:0'), 'iscrowd': tensor([0., 0., 0.], device='cuda:0'), 'orig_size': tensor([480., 480.], device='cuda:0'), 'size': tensor([800., 800.], device='cuda:0')}]\r\nbefore custom loss calling\r\nlogits <class 'transformers.models.owlvit.modeling_owlvit.OwlViTObjectDetectionOutput'>\r\nlabels <class 'list'>\r\noutput type\r\n<class 'transformers.models.owlvit.modeling_owlvit.OwlViTObjectDetectionOutput'>\r\ntarget type\r\n[{'area': tensor([32777.7773, 21233.3320, 21000.0000], device='cuda:0'), 'boxes': tensor([[0.4872, 0.2912, 0.0625, 0.1105],\r\n [0.4328, 0.7111, 0.0525, 0.0852],\r\n [0.5979, 0.3900, 0.0450, 0.0983]], device='cuda:0'), 'class_labels': tensor([4., 2., 2.], device='cuda:0'), 'image_id': tensor([429.], device='cuda:0'), 'iscrowd': tensor([0., 0., 0.], device='cuda:0'), 'orig_size': tensor([480., 480.], device='cuda:0'), 'size': tensor([800., 800.], device='cuda:0')}]\r\ndict_keys(['logits', 'pred_boxes', 'text_embeds', 'image_embeds', 'class_embeds', 'text_model_output', 'vision_model_output'])\r\nIndex <class 'torch.Tensor'>\r\ntensor([4., 2., 2.], device='cuda:0')\r\nIndex <class 'torch.Tensor'>\r\ntensor([4, 2, 2], device='cuda:0', dtype=torch.int32)\r\nloss_labels odict_keys(['logits', 'pred_boxes', 'text_embeds', 'image_embeds', 'class_embeds', 'text_model_output', 'vision_model_output'])\r\ncuda:0\r\ntorch.Size([1, 5, 576])\r\ncuda:0\r\ntorch.Size([1, 576])\r\nweight status ----------\r\ncuda:0\r\ntensor([1.0000, 1.0000, 1.0000, 1.0000, 0.1000], device='cuda:0')\r\ncuda:0\r\n<class 'torch.Tensor'>\r\ntorch.Size([5])\r\nreduction status ----\r\nmean\r\nignore_index status -----\r\n-100\r\nlabel_smoothing status -------\r\n0.0\r\nafter custom loss calling\r\nloss value :- {'loss_ce': tensor(2.0748, device='cuda:0', grad_fn=<NllLoss2DBackward0>), 'loss_bbox': tensor([0.0386], device='cuda:0', grad_fn=<DivBackward0>), 'loss_giou': tensor([0.4936], device='cuda:0', grad_fn=<DivBackward0>), 'cardinality_error': tensor(563., device='cuda:0')}\r\nloss type :- <class 'dict'>\r\nself.gradient_accumulation_steps :- 1\r\nself.gradient_accumulation_steps type:- <class 'int'>\r\nTraceback (most recent call last):\r\n File \"/home/lfo2kor/foundation_models/lora/vision_models/owl_vit_cus_trainer.py\", line 499, in <module>\r\n trainer.train()\r\n File \"/home/lfo2kor/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py\", line 1537, in train\r\n return inner_training_loop(\r\n File \"/home/lfo2kor/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py\", line 1854, in _inner_training_loop\r\n tr_loss_step = 
self.training_step(model, inputs)\r\n File \"/home/lfo2kor/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py\", line 2744, in training_step\r\n self.accelerator.backward(loss)\r\n File \"/home/lfo2kor/miniconda3/envs/testenv/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1901, in backward\r\n loss = loss / self.gradient_accumulation_steps\r\nTypeError: unsupported operand type(s) for /: 'dict' and 'int'\r\n 0%| | 0/200 [00:02<?, ?it/s] \r\n```", "Thanks a lot for sharing the snippet @solomonmanuelraj ! ๐Ÿ™ \r\nI think to fix that you need to return a tensor instead of a dictionary for the loss, can you try to return a dict by somehow aggregating all inidivual losses from your loss dict and return that instead ? ", "Hi @younesbelkada \r\n yes. changed the code to return the loss tensor (loss = sum(loss.values()) instead of a loss dictionary. Now it is working fine. thanks for your quick feedback.", "Thanks very much @solomonmanuelraj !" ]
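As a rough illustration of the resolution mentioned in the last comments (returning a single tensor instead of the DETR-style loss dictionary), the aggregation could be factored out as below; the helper name is made up, and the weights simply mirror the snippet above:

```python
import torch


def aggregate_detr_losses(loss_dict: dict, weight_dict: dict) -> torch.Tensor:
    # Keep only the weighted, differentiable terms (loss_ce / loss_bbox / loss_giou);
    # logging-only entries such as "cardinality_error" are skipped so that
    # backward() receives a single scalar tensor.
    return sum(loss_dict[k] * weight_dict[k] for k in weight_dict if k in loss_dict)


# Inside a custom Trainer.compute_loss, after loss_dict = custom_loss(outputs, labels):
# loss = aggregate_detr_losses(loss_dict, {"loss_ce": 1, "loss_bbox": 5, "loss_giou": 2})
# return (loss, outputs) if return_outputs else loss
```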
1,706
1,708
null
NONE
null
### Feature request Hi team, when i am trying to finetune owl-vit base 32 model with custom data cppe-5 i am receiving the following error in the time of trainer.train() function. ###################################################################################################################### ValueError Traceback (most recent call last) Cell In[40], line 11 1 from transformers import Trainer 3 trainer = Trainer( 4 model=lora_model, 5 args=training_args, (...) 8 tokenizer=processor, 9 ) ---> 11 trainer.train() File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1537, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1535 hf_hub_utils.enable_progress_bars() 1536 else: -> 1537 return inner_training_loop( 1538 args=args, 1539 resume_from_checkpoint=resume_from_checkpoint, 1540 trial=trial, 1541 ignore_keys_for_eval=ignore_keys_for_eval, 1542 ) File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:1854, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1851 self.control = self.callback_handler.on_step_begin(args, self.state, self.control) 1853 with self.accelerator.accumulate(model): -> 1854 tr_loss_step = self.training_step(model, inputs) 1856 if ( 1857 args.logging_nan_inf_filter 1858 and not is_torch_tpu_available() 1859 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)) 1860 ): 1861 # if loss is nan or inf simply add the average of previous logged losses 1862 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged) File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2735, in Trainer.training_step(self, model, inputs) 2732 return loss_mb.reduce_mean().detach().to(self.args.device) 2734 with self.compute_loss_context_manager(): -> 2735 loss = self.compute_loss(model, inputs) 2737 if self.args.n_gpu > 1: 2738 loss = loss.mean() # mean() to average on multi-gpu parallel training File ~/miniconda3/envs/testenv/lib/python3.10/site-packages/transformers/trainer.py:2776, in Trainer.compute_loss(self, model, inputs, return_outputs) 2774 else: 2775 if isinstance(outputs, dict) and "loss" not in outputs: -> 2776 raise ValueError( 2777 "The model did not return a loss from the inputs, only the following keys: " 2778 f"{','.join(outputs.keys())}. For reference, the inputs it received are {','.join(inputs.keys())}." 2779 ) 2780 # We don't use .loss here since the model may return tuples instead of ModelOutput. 2781 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0] ValueError: The model did not return a loss from the inputs, only the following keys: logits,pred_boxes,text_embeds,image_embeds,class_embeds,text_model_output,vision_model_output. 
For reference, the inputs it received are input_ids,attention_mask,pixel_values ##################################################################################################################### Collate_fn() definition ####################################################################################################################### def collate_fn(batch): input_ids = torch.Tensor([item["input_ids"].tolist() for item in batch]).int() input_ids = input_ids.to(device) attention_mask = torch.Tensor([item["attention_mask"].tolist() for item in batch]).int() attention_mask = attention_mask.to(device) pixel_values = torch.Tensor([item["pixel_values"].tolist() for item in batch]) pixel_values = pixel_values.to(device) batch = {} batch["input_ids"] = input_ids batch["attention_mask"] = attention_mask batch["pixel_values"] = pixel_values print(batch) return batch #################################################################################################################### i am using cppe-5 dataset from HF for custom training and testing. let me know your feedback comments. ### Motivation Fine tuning the owl-vit model in custom dataset using HF Trainer. It will help to PEFT fine tune the model with Lora ### Your contribution Will test this feature with the custom dataset
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28778/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28777/comments
https://api.github.com/repos/huggingface/transformers/issues/28777/events
https://github.com/huggingface/transformers/pull/28777
2,108,139,886
PR_kwDOCUB6oc5ld4rb
28,777
Adds LlamaForQuestionAnswering class in modeling_llama.py along with AutoModel Support
{ "login": "nakranivaibhav", "id": 67785830, "node_id": "MDQ6VXNlcjY3Nzg1ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nakranivaibhav", "html_url": "https://github.com/nakranivaibhav", "followers_url": "https://api.github.com/users/nakranivaibhav/followers", "following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}", "gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}", "starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions", "organizations_url": "https://api.github.com/users/nakranivaibhav/orgs", "repos_url": "https://api.github.com/users/nakranivaibhav/repos", "events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}", "received_events_url": "https://api.github.com/users/nakranivaibhav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker Alright the capitalization in the copy statement affects the test. Thank you for pointing it out.", "Now letโ€™s make sure the CIs are green! Feel free to rebase on main! ", "@ArthurZucker I did rebase on main, still the tests are failing. What's wrong?\r\nI did \r\n`git fetch upstream ->\r\ngit rebase upstream/main ->\r\ngit push -u origin llama-for-qa -f`\r\nP.S: I make stupid mistakes. Do Forgive", "failing tests are flaky ! ", "@NielsRogge or @ArthurZucker What is to be done kindly guide further?", "Just waiting for you to answer my last comment ๐Ÿ˜‰ ", "@ArthurZucker **Why not use this for the entire class?** This one or\r\n**failing tests are flaky !**\r\nThe former i have given some explanation I hope you have seen it, the latter i can just pray the flaky gods ๐Ÿ˜", "@ArthurZucker Alright i will get to work ", "@ArthurZucker \r\nI did update the PR title.\r\n\r\nI tried adding `# Copied from transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering with Bloom->Llama` at the top of the class.\r\nIssues I am facing:\r\nIt wants the entire class code to be the same as Bloom when I make fix-copies\r\nBloom has a `head_mask` param in the forward function which Llama does not seem to handle anywhere.\r\nBloom does not have `past_key_values` in its forward params.\r\nNot including `past_key_values` in llama fails some tests.\r\nI did add `past_key_values` in this commit [c90f9b3] for this reason.\r\n\r\nFor the tests I have this error \r\n`FAILED examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer - AssertionError: 0.3 not greater than or equal to 0.8`\r\n\r\nHow do I figure out the flaky tests besides being in the exotic model test? ", "Alright no worries then no need to use the copied from for everything, thanks for explaining ๐Ÿ˜‰ ", "<img width=\"720\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/48595927/15de76e6-1ddd-4abd-80a6-c4b25e64b497\">\r\nFlaky tests are marked as flaky automatically ๐Ÿ˜‰ ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28777). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker If the community demands. I can do the same for mistral as well. :)", "If there is demand for it sure, have not seen that yet!", "Feel free to open an issue, to track this feature request" ]
1,706
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Adds AutoModelForQuestionAnswering support for Llama. Fixes #28265 ## Who can review? @ArthurZucker @NielsRogge I haven't added a copy statement. When I try to add the following copy statement `# Copied from transformers.models.bloom.modeling_bloom.BloomForQuestionAnswering.__init__ with Bloom->LLama` it throws an error even though both `__init__` functions are the same. When I run `make fix-copies`, it wants to capitalize both L's in `LlamaModel` on this line: `self.transformer = LlamaModel(config)` -> `self.transformer = LLamaModel(config)` Perhaps I am missing something and @ArthurZucker can guide me further. I tried training the model on a subset of SQuAD and it does train fine.
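Once a `LlamaForQuestionAnswering` head like the one in this PR is available through the auto classes, usage would presumably look like the sketch below. The checkpoint name is only an example (and gated), and the QA head starts randomly initialized, so it needs fine-tuning (e.g. on SQuAD) before the extracted span is meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "meta-llama/Llama-2-7b-hf"  # assumption: any Llama checkpoint you have access to
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)  # QA head is newly initialized

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end positions and decode the span between them.
start = int(outputs.start_logits.argmax(-1))
end = int(outputs.end_logits.argmax(-1))
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```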
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28777", "html_url": "https://github.com/huggingface/transformers/pull/28777", "diff_url": "https://github.com/huggingface/transformers/pull/28777.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28777.patch", "merged_at": 1707187302000 }
https://api.github.com/repos/huggingface/transformers/issues/28776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28776/comments
https://api.github.com/repos/huggingface/transformers/issues/28776/events
https://github.com/huggingface/transformers/issues/28776
2,108,044,294
I_kwDOCUB6oc59pjQG
28,776
Allow disabling of deletion of leading SPIECE_UNDERLINE during llama decoding (tokenizer).
{ "login": "JoshC8C7", "id": 32071009, "node_id": "MDQ6VXNlcjMyMDcxMDA5", "avatar_url": "https://avatars.githubusercontent.com/u/32071009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoshC8C7", "html_url": "https://github.com/JoshC8C7", "followers_url": "https://api.github.com/users/JoshC8C7/followers", "following_url": "https://api.github.com/users/JoshC8C7/following{/other_user}", "gists_url": "https://api.github.com/users/JoshC8C7/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoshC8C7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoshC8C7/subscriptions", "organizations_url": "https://api.github.com/users/JoshC8C7/orgs", "repos_url": "https://api.github.com/users/JoshC8C7/repos", "events_url": "https://api.github.com/users/JoshC8C7/events{/privacy}", "received_events_url": "https://api.github.com/users/JoshC8C7/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I believe this is somewhat related to #28010 ", "Yes, will be fixed by #28010 ! ๐Ÿค— " ]
1,706
1,706
null
NONE
null
### Feature request The `LlamaTokenizer`'s `convert_tokens_to_string` method used during decoding [has the statement](https://github.com/huggingface/transformers/blob/6f7d5db58c7c149c75642b5a4647b5cbc6c55643/src/transformers/models/llama/tokenization_llama.py#L286): ``` if tokens[0].startswith(SPIECE_UNDERLINE): tokens[0] = tokens[0][1:] ``` which deletes a space if it falls at the start of the first token being decoded. There are cases where this is undesirable - namely in building streaming applications where an output is decoded in chunks and so a decoded sequence may begin with a space, which is unhelpfully deleted here. AFAIK there is then no way of knowing if the space was deleted without looking up the first token of each chunk in the vocabulary, and thus no way to faithfully recombine the chunks into a complete output. **Ideally this deletion should be parameterised so it can be turned off in cases like these.** My current workaround is to prefix a 'fake' token to the start of every sequence, before deleting it from the outputted text. I believe TGI [have a similar workaround](https://github.com/huggingface/text-generation-inference/blob/2d56f106a60c7b698705494e7539f8a7e4c85dd9/server/text_generation_server/models/model.py#L86). ### Motivation When writing streaming applications with llama/codellama, you may want to decode in chunks. Where this chunk boundary falls between two words (i.e. the last token of the previous chunk is a word, and the first token of the next chunk is a word), then when decoding the second chunk the output string does not have a preceding space (even if its mapping in `tokenizer.json` has a SPIECE_UNDERLINE at the start). This information loss means the outputted chunks cannot faithfully be joined up, as it isn't known whether a space is deleted. ### Your contribution I could submit a PR - this requires changing individual tokenizer files so I would have to read up on the procedure for that.
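A small sketch reproducing the behaviour described above, together with the prefix-based workaround mentioned in the request. Exact token boundaries depend on the vocabulary, and the gated checkpoint name is only an example; the slow tokenizer (`use_fast=False`) is assumed, since that is where `convert_tokens_to_string` drops the leading space.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False)

ids = tok("hello world", add_special_tokens=False)["input_ids"]
first, second = ids[:1], ids[1:]

# Naive chunked decoding: the leading SPIECE_UNDERLINE of the second chunk is
# stripped, so the space between the words is lost on concatenation.
print(tok.decode(first) + tok.decode(second))          # likely "helloworld"

# Workaround (similar in spirit to TGI): re-decode with a prefix of already-seen
# tokens and slice the prefix text away, so the leading space survives.
prefix_text = tok.decode(first)
chunk = tok.decode(first + second)[len(prefix_text):]  # " world"
print(prefix_text + chunk)                             # "hello world"
```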
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28776/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28775/comments
https://api.github.com/repos/huggingface/transformers/issues/28775/events
https://github.com/huggingface/transformers/pull/28775
2,107,766,333
PR_kwDOCUB6oc5lcnIP
28,775
add regnet chinese doc
{ "login": "a-strong-python", "id": 65645246, "node_id": "MDQ6VXNlcjY1NjQ1MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/65645246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/a-strong-python", "html_url": "https://github.com/a-strong-python", "followers_url": "https://api.github.com/users/a-strong-python/followers", "following_url": "https://api.github.com/users/a-strong-python/following{/other_user}", "gists_url": "https://api.github.com/users/a-strong-python/gists{/gist_id}", "starred_url": "https://api.github.com/users/a-strong-python/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/a-strong-python/subscriptions", "organizations_url": "https://api.github.com/users/a-strong-python/orgs", "repos_url": "https://api.github.com/users/a-strong-python/repos", "events_url": "https://api.github.com/users/a-strong-python/events{/privacy}", "received_events_url": "https://api.github.com/users/a-strong-python/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@vanpelt ", "cc @stevhliu ", "> Thanks for translating this! ๐Ÿค— \n> \n> Everything after the `## Resources` section can be left untranslated. These classes/methods are automatically documented with our special doc-builder syntax, [`[[autodoc]]`](https://github.com/huggingface/transformers/blob/4830f2696575988faee4af78b6049b62a750ecd4/docs/source/en/model_doc/regnet.md?plain=1#L47).\n\nI know that the `## Resources` section can automatically generate documents, but in this case most of the documentation comments will still be in English and cannot be translated into Chinese." ]
1,706
1,707
null
NONE
null
Add Chinese reference documents to the regnet model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28775", "html_url": "https://github.com/huggingface/transformers/pull/28775", "diff_url": "https://github.com/huggingface/transformers/pull/28775.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28775.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28774/comments
https://api.github.com/repos/huggingface/transformers/issues/28774/events
https://github.com/huggingface/transformers/pull/28774
2,107,572,860
PR_kwDOCUB6oc5lb7xE
28,774
Fix transformers.utils.fx compatibility with torch<2.0
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28774). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
Fixes https://github.com/huggingface/transformers/issues/28690 Tested on 1.13 that `pytest tests/models/opt/ -k "test_torch_fx" -s -vvvvv` passes, while it is currently failing with torch<2.0 following https://github.com/huggingface/transformers/pull/28447 (`AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'`).
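The underlying pattern of such a fix is a version guard; the sketch below is illustrative only, and the real patch inside `transformers.utils.fx` is more involved.

```python
import torch
from packaging import version

IS_TORCH_2 = version.parse(torch.__version__) >= version.parse("2.0")

if IS_TORCH_2:
    # Only reference the op when it actually exists; on torch 1.x this attribute
    # lookup is what raises the reported AttributeError.
    sdpa = torch.nn.functional.scaled_dot_product_attention
else:
    sdpa = None  # tracing falls back to the eager attention implementation
```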
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28774/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28774", "html_url": "https://github.com/huggingface/transformers/pull/28774", "diff_url": "https://github.com/huggingface/transformers/pull/28774.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28774.patch", "merged_at": 1706622882000 }
https://api.github.com/repos/huggingface/transformers/issues/28773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28773/comments
https://api.github.com/repos/huggingface/transformers/issues/28773/events
https://github.com/huggingface/transformers/pull/28773
2,107,424,909
PR_kwDOCUB6oc5lbbbd
28,773
Split daily CI using 2 level matrix
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28773). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? This PR aims to bypass the 256 jobs limitation (in a matrix) on GitHub Actions, as we are approaching that limit soon. The idea: - move the model job logic into a new workflow file (it **still uses matrix**) - call the new workflow file in the original workflow file, but pass some inputs to it - a (nested) list: each element is a list: a subset of model names - a slice id When the new workflow file is called with the inputs, it will use the slice id to get the corresponding subset of model names, and uses the matrix on that to generate the jobs to run. In the original workflow file, we **generate the slice ids by a matrix**. See example runs **Full version** https://github.com/huggingface/transformers/actions/runs/7701814182 <img width="512" alt="Screenshot 2024-01-30 111539" src="https://github.com/huggingface/transformers/assets/2521628/465554f6-ff92-4ac2-bf27-658353a982b3"> **Demo version** https://github.com/huggingface/transformers/actions/runs/7702628211 <img width="512" alt="Screenshot 2024-01-30 110004" src="https://github.com/huggingface/transformers/assets/2521628/9262b176-4955-4215-a6c5-977fa124a4a8">
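In plain Python, the slicing idea reads roughly as below; the slice count and the model list are made-up placeholders, and the real logic lives in the workflow files and their helper scripts.

```python
# Outer matrix: one job per slice id. Inner (reusable) workflow: a matrix over
# the subset of models belonging to that slice, keeping each matrix under 256 jobs.
NUM_SLICES = 2  # placeholder value

def split_models(models: list[str], num_slices: int) -> list[list[str]]:
    """Deal the model folders round-robin into num_slices roughly equal subsets."""
    return [models[i::num_slices] for i in range(num_slices)]

models = ["models/bert", "models/gpt2", "models/llama", "models/t5"]  # placeholder list
slices = split_models(models, NUM_SLICES)

slice_id = 0                 # passed as an input when the reusable workflow is called
print(slices[slice_id])      # the subset this slice's matrix expands over
```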
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28773/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28773", "html_url": "https://github.com/huggingface/transformers/pull/28773", "diff_url": "https://github.com/huggingface/transformers/pull/28773.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28773.patch", "merged_at": 1706720683000 }
https://api.github.com/repos/huggingface/transformers/issues/28772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28772/comments
https://api.github.com/repos/huggingface/transformers/issues/28772/events
https://github.com/huggingface/transformers/pull/28772
2,107,269,695
PR_kwDOCUB6oc5la5br
28,772
doc: fix a typo
{ "login": "ThibaultLengagne", "id": 11950126, "node_id": "MDQ6VXNlcjExOTUwMTI2", "avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThibaultLengagne", "html_url": "https://github.com/ThibaultLengagne", "followers_url": "https://api.github.com/users/ThibaultLengagne/followers", "following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}", "gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions", "organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs", "repos_url": "https://api.github.com/users/ThibaultLengagne/repos", "events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}", "received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28772/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28772", "html_url": "https://github.com/huggingface/transformers/pull/28772", "diff_url": "https://github.com/huggingface/transformers/pull/28772.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28772.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28771/comments
https://api.github.com/repos/huggingface/transformers/issues/28771/events
https://github.com/huggingface/transformers/issues/28771
2,107,131,812
I_kwDOCUB6oc59mEek
28,771
Mistral with FlashAttention2
{ "login": "khalil-Hennara", "id": 90086758, "node_id": "MDQ6VXNlcjkwMDg2NzU4", "avatar_url": "https://avatars.githubusercontent.com/u/90086758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khalil-Hennara", "html_url": "https://github.com/khalil-Hennara", "followers_url": "https://api.github.com/users/khalil-Hennara/followers", "following_url": "https://api.github.com/users/khalil-Hennara/following{/other_user}", "gists_url": "https://api.github.com/users/khalil-Hennara/gists{/gist_id}", "starred_url": "https://api.github.com/users/khalil-Hennara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khalil-Hennara/subscriptions", "organizations_url": "https://api.github.com/users/khalil-Hennara/orgs", "repos_url": "https://api.github.com/users/khalil-Hennara/repos", "events_url": "https://api.github.com/users/khalil-Hennara/events{/privacy}", "received_events_url": "https://api.github.com/users/khalil-Hennara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think the problem related to @ArthurZucker and @stevhliu ", "It looks like โ€˜attn_implementationโ€™ is supported in version 4.36. Maybe you need to try it after upgrading the transfromers library version\r\n![image](https://github.com/huggingface/transformers/assets/13672523/12703f91-a6b4-4688-9b67-50cd5896aa2e)\r\n", "Yes, as @IYoreI mentions, feel free to upgrade the transformers version! ", "Thanks @IYoreI , @ArthurZucker for your time ", "Closing as it's resolved! " ]
1,706
1,706
1,706
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.1 - Accelerate version: 0.27.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2")` The code line was taken from the official website [Mistral](https://huggingface.co/docs/transformers/v4.37.2/model_doc/mistral#model-details) TypeError: MistralForCausalLM.__init__() got an unexpected keyword argument 'attn_implementation' When using `use_flash_attention_2=True` it works fine. ### Expected behavior The model should be loaded without error, using FlashAttention-2 in the background.
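For reference, the resolution discussed in the comments is simply to upgrade: on transformers >= 4.36 the reported line works as documented, assuming a CUDA GPU and the flash-attn 2 package are installed.

```python
# pip install -U "transformers>=4.36" flash-attn
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # `attn_implementation` was introduced in 4.36
)
```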
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28771/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28770/comments
https://api.github.com/repos/huggingface/transformers/issues/28770/events
https://github.com/huggingface/transformers/issues/28770
2,107,013,041
I_kwDOCUB6oc59lnex
28,770
Lora + DeepSpeed non-trainer integration does not work
{ "login": "hrushikesh198", "id": 6188036, "node_id": "MDQ6VXNlcjYxODgwMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/6188036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hrushikesh198", "html_url": "https://github.com/hrushikesh198", "followers_url": "https://api.github.com/users/hrushikesh198/followers", "following_url": "https://api.github.com/users/hrushikesh198/following{/other_user}", "gists_url": "https://api.github.com/users/hrushikesh198/gists{/gist_id}", "starred_url": "https://api.github.com/users/hrushikesh198/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hrushikesh198/subscriptions", "organizations_url": "https://api.github.com/users/hrushikesh198/orgs", "repos_url": "https://api.github.com/users/hrushikesh198/repos", "events_url": "https://api.github.com/users/hrushikesh198/events{/privacy}", "received_events_url": "https://api.github.com/users/hrushikesh198/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada ", "Hello @hrushikesh198 ,\r\n\r\nPR https://github.com/huggingface/peft/pull/1450 fixes support for DeepSpeed Z3 with zero init and `modules_to_save` config of PEFT. In the PR, we show how you can run non trainer example with Accelerate and normal training loop with an official example. \r\n\r\nI don't have experience with Lightning, we could look into this if you can provide a minimal but end-to-end example that we can run. Above code is missing a lot of things to just run it as is." ]
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Linux-4.19.0-24-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes, trying to run deepspeed zero 3 on 8 A100 gpus ### Who can help? cc: @pacman100 Tagging few folks who were discussing a similar issue before https://github.com/huggingface/transformers/issues/24445 @1ytic, @don-tpanic, @olegsinavski ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to finetune Mistral-7B using LoRa, deepspeed zero 3, pytorch-lightning. As per the deepspeed non trainer integration, I have created the `dschf` object and kept it alive. ```python class Module(LightningModule): def configure_model(self) -> None: if self.model is not None: return deepspeed_config = self.trainer.strategy.config self.dschf = HfDeepSpeedConfig(deepspeed_config) self.model = AutoModelForSequenceClassification.from_pretrained(...) self.model = get_peft_model( self.model, LoraConfig( task_type=TaskType.SEQ_CLS, inference_mode=False, target_modules=target_modules, r=256, lora_alpha=256, lora_dropout=0.5, ), ) def main(): trainer = lightning.Trainer( ... strategy = DeepSpeedStrategy(stage=3) ) model=Module() trainer.fit(model, datamodules) ``` The training script throws an error on the `get_peft_model` line ``` Traceback (most recent call last): File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 528, in __getattr__ return super().__getattr__(name) # defer to nn.Module's logic File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'PeftModelForSequenceClassification' object has no attribute '_ds_child_entered' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 528, in __getattr__ return super().__getattr__(name) # defer to nn.Module's logic File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'PeftModelForSequenceClassification' object has no attribute 'base_model'``` ``` the seconds one continues until it reaches max recursion depth. ### Expected behavior Lora model should initialize seamlessly and train as it works for deepspeed stage 2.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28770/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28769/comments
https://api.github.com/repos/huggingface/transformers/issues/28769/events
https://github.com/huggingface/transformers/pull/28769
2,106,899,551
PR_kwDOCUB6oc5lZqM9
28,769
Trainer - add cache clearing and the option for batched eval metrics computation
{ "login": "FoamoftheSea", "id": 50897218, "node_id": "MDQ6VXNlcjUwODk3MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/50897218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FoamoftheSea", "html_url": "https://github.com/FoamoftheSea", "followers_url": "https://api.github.com/users/FoamoftheSea/followers", "following_url": "https://api.github.com/users/FoamoftheSea/following{/other_user}", "gists_url": "https://api.github.com/users/FoamoftheSea/gists{/gist_id}", "starred_url": "https://api.github.com/users/FoamoftheSea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FoamoftheSea/subscriptions", "organizations_url": "https://api.github.com/users/FoamoftheSea/orgs", "repos_url": "https://api.github.com/users/FoamoftheSea/repos", "events_url": "https://api.github.com/users/FoamoftheSea/events{/privacy}", "received_events_url": "https://api.github.com/users/FoamoftheSea/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28769). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @pacman100 and @muellerzr ", "Hey everyone, I tried to look at the logs for the failed tests, but I don't see any actionable error reports. Can anyone help me figure out what needs to be done for them to pass?", "The main CI is a bit broken because of `pytest` package. Let's wait a bit here", "just re-ran the ci, you should actually rebase to main should be alright ", "BTW @SunMarc would be nice if you can have a look as well! ", "CIs are green after merging main โœ”๏ธ " ]
1,706
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR does two things which are necessary for using the Trainer in resource constrained environments (like my RTX-3070Ti machine): 1. Add cache clearing in training and evaluation loops - This reduces peak GPU load and prevents CUDA OOM errors when running near capacity. 2. Add Trainer arg `batch_eval_metrics` for batched eval metrics computation. - When working with limited RAM, storing all logits across the entire evaluation set may not be feasible. A user working in this condition can pass `True` to `batch_eval_metrics` and construct a `compute_metrics` function which can update average metrics at a batch level to prevent OOM errors with large eval sets. Particularly useful for vision transformers. - Previous functionality is unaltered if option is not set to `True` @muellerzr
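A hypothetical `compute_metrics` written for that batched mode might keep only running sums, along the lines below. The `compute_result` flag and the per-batch call pattern are assumptions about how the PR invokes the hook, not the final API.

```python
import numpy as np

class RunningAccuracy:
    """Stateful metric: called once per eval batch; only running counts are kept,
    so the full set of logits never has to fit in memory at once."""

    def __init__(self):
        self.correct = 0
        self.seen = 0

    def __call__(self, eval_pred, compute_result: bool = False):
        preds = np.argmax(eval_pred.predictions, axis=-1)
        labels = eval_pred.label_ids
        self.correct += int((preds == labels).sum())
        self.seen += labels.size
        if compute_result:  # assumed to be True on the final batch only
            result = {"accuracy": self.correct / max(self.seen, 1)}
            self.correct = self.seen = 0
            return result
        return {}

# trainer = Trainer(..., compute_metrics=RunningAccuracy())
# with TrainingArguments(..., batch_eval_metrics=True)
```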
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28769/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28769/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28769", "html_url": "https://github.com/huggingface/transformers/pull/28769", "diff_url": "https://github.com/huggingface/transformers/pull/28769.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28769.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28768/comments
https://api.github.com/repos/huggingface/transformers/issues/28768/events
https://github.com/huggingface/transformers/pull/28768
2,106,808,476
PR_kwDOCUB6oc5lZXU-
28,768
[`HfQuantizer`] Move it to "Developper guides"
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28768). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Move the "How to add a new quantization method" tutorial into "Developper Guide" which seems to be a better appropriate place for that tutorial rather than in "Performance and scalability" cc @stevhliu @ArthurZucker @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28768", "html_url": "https://github.com/huggingface/transformers/pull/28768", "diff_url": "https://github.com/huggingface/transformers/pull/28768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28768.patch", "merged_at": 1706595621000 }
https://api.github.com/repos/huggingface/transformers/issues/28767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28767/comments
https://api.github.com/repos/huggingface/transformers/issues/28767/events
https://github.com/huggingface/transformers/pull/28767
2,106,654,935
PR_kwDOCUB6oc5lY2kq
28,767
added support for llama v2 and codellama in weight conversion for issue #28241
{ "login": "christoukmaji", "id": 51040574, "node_id": "MDQ6VXNlcjUxMDQwNTc0", "avatar_url": "https://avatars.githubusercontent.com/u/51040574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/christoukmaji", "html_url": "https://github.com/christoukmaji", "followers_url": "https://api.github.com/users/christoukmaji/followers", "following_url": "https://api.github.com/users/christoukmaji/following{/other_user}", "gists_url": "https://api.github.com/users/christoukmaji/gists{/gist_id}", "starred_url": "https://api.github.com/users/christoukmaji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/christoukmaji/subscriptions", "organizations_url": "https://api.github.com/users/christoukmaji/orgs", "repos_url": "https://api.github.com/users/christoukmaji/repos", "events_url": "https://api.github.com/users/christoukmaji/events{/privacy}", "received_events_url": "https://api.github.com/users/christoukmaji/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey @christoukmaji, thanks for opening the PR! Seems like #28754 was opened a bit earlier so we'll try to get it merged! ๐Ÿค— ", "Hi @ArthurZucker, thanks for the response. I would like to avoid the duplication of work for future contributions.\r\n\r\nWhat is the PR selection process for HuggingFace contributions; is it first comment or first PR? I thought it was the first comment as outlined in the [Contribution documentation](https://huggingface.co/docs/transformers/contributing) and how [other PR's have been handled](https://github.com/huggingface/transformers/issues/28265).", "Hey, pretty sure that if you look at the PR, it's first PR first, then if there is no activity anyone can take it. \r\nI'll update the contribution guidelines as commenting is not really enough as we can't track the progress / if you are stuck of if you even started. Sorry for that! ๐Ÿค— " ]
1,706
1,707
null
NONE
null
# What does this PR do? This PR adds support for LLaMa V2 and CodeLLaMa while maintaining backwards compatibility for LLaMa V1 in the LLaMa-HuggingFace weight conversion script `src/transformers/models/llama/convert_llama_weights_to_hf.py`. This PR changes the max_position_embeddings for LLaMa V2 to 4096, and for CodeLLaMa to 16384, while maintaining a default max_position_embeddings of 2048 for LLaMa V1. Fixes #28241 ## Who can review? @ArthurZucker @amyeroberts
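The version-dependent default described above boils down to something like the following sketch; the function and argument names are illustrative, not the conversion script's actual ones.

```python
def default_max_position_embeddings(llama_version: int, is_code_llama: bool = False) -> int:
    # CodeLlama checkpoints were trained with a 16k context window.
    if is_code_llama:
        return 16384
    # Llama 2 uses a 4k context; Llama v1 keeps the old 2048 default so that
    # existing conversions stay backwards compatible.
    return 4096 if llama_version == 2 else 2048
```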
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28767/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28767", "html_url": "https://github.com/huggingface/transformers/pull/28767", "diff_url": "https://github.com/huggingface/transformers/pull/28767.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28767.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28766/comments
https://api.github.com/repos/huggingface/transformers/issues/28766/events
https://github.com/huggingface/transformers/pull/28766
2,106,480,912
PR_kwDOCUB6oc5lYQHT
28,766
#27237
{ "login": "oublalkhalid", "id": 76509145, "node_id": "MDQ6VXNlcjc2NTA5MTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/76509145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oublalkhalid", "html_url": "https://github.com/oublalkhalid", "followers_url": "https://api.github.com/users/oublalkhalid/followers", "following_url": "https://api.github.com/users/oublalkhalid/following{/other_user}", "gists_url": "https://api.github.com/users/oublalkhalid/gists{/gist_id}", "starred_url": "https://api.github.com/users/oublalkhalid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oublalkhalid/subscriptions", "organizations_url": "https://api.github.com/users/oublalkhalid/orgs", "repos_url": "https://api.github.com/users/oublalkhalid/repos", "events_url": "https://api.github.com/users/oublalkhalid/events{/privacy}", "received_events_url": "https://api.github.com/users/oublalkhalid/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,707
null
NONE
null
#27237 Solved โœ… I suggest a resolution for situations where ``sequence_lags=[0]``. In such instances, the length of the context past sequence aligns with the overall sequence length. Let me provide some clarification on this matter The issue stems from line `1230` in `modeling_time_series_transformer.py.` In cases where the lag parameter is set to 0, the index is assigned a value of -1. This results in only one data point being lagged, creating a discrepancy when using model.generate(). For example, if the size is 48, selecting a lag of [0] produces 49, rendering it unsuitable for model.generate(). ``` sequence_length = sequence.shape[1] indices = [lag - shift for lag in self.config.lags_sequence] if max(indices) + subsequences_length > sequence_length: raise ValueError( f"lags cannot go further than history length, found lag {max(indices)} " f"while history length is only {sequence_length}" ) ``` We can modify the code as shown below to rectify index 0 when negative values are encountered โœ…: ``` sequence_length = sequence.shape[1] # (Khalid Oublal) -> addressed the issue regarding the scenario where lag equals 0. # The previous implementation was: indices = [lag - shift for lag in self.config.lags_sequence] indices = [lag - shift if lag > 0 else 0 for lag in self.config.lags_sequence] if max(indices) + subsequences_length > sequence_length: raise ValueError( f"lags cannot go further than history length, found lag {max(indices)} " f"while history length is only {sequence_length}" ) ``` ### Check DataLoader In the analysis below, it's evident that there are no lags indicated by `sequence_lags=[0]`. The length of the context in this batch matches the provided context length. ![Screenshot 2024-01-29 at 21 58 42](https://github.com/huggingface/transformers/assets/76509145/0079c3a8-eed6-4d82-bb41-fc6dcaf04a5a) ### Confirming Training Status Below, it's apparent that the training is progressing smoothly. Some additional print statements were added to verify that the lags are 0, implying the indices should be `[0]`. ![Screenshot 2024-01-29 at 21 54 03](https://github.com/huggingface/transformers/assets/76509145/f159c725-ff81-4a11-89e0-5ef59ac0763e) ### Generating with `model.generate()` Now, with `sequence_lags=[0]`, we observe that predictions can be made without any issues. ![Screenshot 2024-01-29 at 21 55 31](https://github.com/huggingface/transformers/assets/76509145/dfe7915b-8859-41ad-9664-f72f6c4754c7) Best, khalid oublal
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28766/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28766", "html_url": "https://github.com/huggingface/transformers/pull/28766", "diff_url": "https://github.com/huggingface/transformers/pull/28766.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28766.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28765/comments
https://api.github.com/repos/huggingface/transformers/issues/28765/events
https://github.com/huggingface/transformers/pull/28765
2,106,444,478
PR_kwDOCUB6oc5lYIGC
28,765
Added model Sigma-MoE
{ "login": "jubueche", "id": 30778073, "node_id": "MDQ6VXNlcjMwNzc4MDcz", "avatar_url": "https://avatars.githubusercontent.com/u/30778073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jubueche", "html_url": "https://github.com/jubueche", "followers_url": "https://api.github.com/users/jubueche/followers", "following_url": "https://api.github.com/users/jubueche/following{/other_user}", "gists_url": "https://api.github.com/users/jubueche/gists{/gist_id}", "starred_url": "https://api.github.com/users/jubueche/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jubueche/subscriptions", "organizations_url": "https://api.github.com/users/jubueche/orgs", "repos_url": "https://api.github.com/users/jubueche/repos", "events_url": "https://api.github.com/users/jubueche/events{/privacy}", "received_events_url": "https://api.github.com/users/jubueche/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thrilled to see this! Feel free to ping me for reviews! ", "Thanks, I will.", "@ArthurZucker in the tests, I am getting\r\n```\r\nRuntimeError: Failed to import transformers.models.sigma_moe.modeling_sigma_moe because of the following error (look up to see its traceback):\r\n'str' object has no attribute 'impl'\r\n```\r\nbut I am not sure how to debug that. Also regarding the tests that I have, they are very scarce right now. I also got an error that I don't have `all_model_classes`. What do you recommend for the test structure?\r\n\r\nAlso, the MoE layer has a special triton and CUDA implementation (CUDA is a fallback in case triton is not installed). Will that create any problems with the tests? How should I address this?\r\n", "Alright few things here! I realized I am not sure there exists any publicly available checkpoints (checked the original repo and does not seem like they released weights no?), so would actually not be a great addition to `transformers` ๐Ÿ˜ข That's not a problem! The easiest way to distribute this model is to add it to the hub using [this tutorial ](https://huggingface.co/docs/transformers/custom_models)! You won't face all the issue with the CI and can make it work a lot faster! \r\nWDYT? ๐Ÿค— ", "I was actually in contact with Robert Csordas (author of the paper) while I was implementing this and he could probably provide some checkpoints. But I can also take a look at distributing it via the hub, no problem.\r\nLet me know what you think: If we have checkpoints would the hub still be better or should we then proceed with a PR?", "If we have checkpoints it's a bit up to you! It's easier to first share on the hub and if it is popular in the community we add support for it here! ", "Ok, then letโ€™s do the hub! Thanks\r\n________________________________\r\nFrom: Arthur ***@***.***>\r\nSent: Wednesday, January 31, 2024 8:09:35 AM\r\nTo: huggingface/transformers ***@***.***>\r\nCc: Julian Bรผchel ***@***.***>; Author ***@***.***>\r\nSubject: [EXTERNAL] Re: [huggingface/transformers] Added model Sigma-MoE (PR #28765)\r\n\r\nIf we have checkpoints it's a bit up to you! It's easier to first share on the hub and if it is popular in the community we add support for it here! โ€” Reply to this email directly, view it on GitHub, or unsubscribe. You are receiving\r\nZjQcmQRYFpfptBannerStart\r\nThis Message Is From an External Sender\r\nThis message came from outside your organization.\r\n<https://us-phishalarm-ewt.proofpoint.com/EWT/v1/PjiDSg!2G-qyhSVoZJQUO-RsiKtBhwlIPf4lEj8plGSPope49wMTO_79upgdiJE8U-wHDT3HjuN-RRgBD_NiGHnuboAftpDMFAi7G9cAkto5IZ_eMvOUyGX$>\r\nReport Suspicious\r\n\r\nZjQcmQRYFpfptBannerEnd\r\n\r\nIf we have checkpoints it's a bit up to you! It's easier to first share on the hub and if it is popular in the community we add support for it here!\r\n\r\nโ€”\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/28765#issuecomment-1918513124>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AHK2FWM56KKVRC4DRHP6BZTYRHU27AVCNFSM6AAAAABCQEEG5SVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMJYGUYTGMJSGQ>.\r\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\r\n", "> First, make sure your model is fully defined in a .py file. It can rely on relative imports to some other files as long as all the files are in the same directory (we donโ€™t support submodules for this feature yet).\r\n\r\nMy model has CUDA and triton folders. Will that be a problem? 
", "Should be alright even on the hub! I would do something similar to what was done for `RWKV`which has custom kernels ๐Ÿค— ", "So I followed [this](https://huggingface.co/docs/transformers/custom_models) tutorial on how to also push the code to the hub, but when I try to load it, it can't find some of the files. When I looked at the code being pushed, it was just the modeling and the config one. This is my code structure:\r\n```\r\n.\r\n- push_to_hub.py\r\n- configuration_sigma_moe.py\r\n- modeling_sigma_moe.py\r\n- modeling_outputs.py\r\n- moe_layer.py\r\n- __init__.py\r\n- triton_src/cvmm.py\r\n- cuda_src/cvmm/cvmm.cu\r\n- cuda_src/cvmm/cvmm.py\r\n```\r\n\r\nI saw in the tutorial that pushing packages is not supported, which is why I asked my previous question and it doesn't seem to work.\r\nPutting all the code in the modeling file seems weird. I looked at the RWKV case, but I see that they are integrated into HF directly. They have the kernel in `src/transformers/kernels`, so I don't understand what you mean. Do you want me to put my kernels there?", "Ok I am creating a private package for now that I will put on github.", "Sorry didnโ€™t have time to come back to this! \r\nโ€˜I think you can push kernels but not with model.push to hub ", "Feel free to ping me for any additional help will be glad to help! " ]
1,706
1,706
1,706
NONE
null
# What does this PR do? Added model from "[Approximating Two-Layer Feedforward Networks for Efficient Transformers](https://openreview.net/pdf?id=zM3mlyflTt)" and replicated experiments on WikiText-103. The Sigma-MoE is different to the conventional Switch-like architecture when it comes to initialisation of the expert-weights, the routing function and the load balancing loss. Using the sigmoid function instead of the softmax avoids competition between the experts and leads to better training behaviour. Furthermore, Sigma-MoE employs a simple load balancing function that simply uses the entropy of the router outputs as a regulariser. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
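To make the routing difference concrete, here is an illustrative sketch — not the PR's code, and the regulariser is a best-effort reading of the description above: sigmoid scores avoid inter-expert competition, each token activates its top-k experts, and an entropy term over the averaged router scores is maximised to keep expert usage balanced.

```python
import torch

def sigma_moe_route(x, w_router, k=4):
    """Hypothetical Sigma-MoE style routing for a batch of token embeddings x."""
    scores = torch.sigmoid(x @ w_router)            # (tokens, n_experts): no softmax competition
    topk_scores, topk_idx = scores.topk(k, dim=-1)  # each token activates its k best experts

    # Load balancing: normalise the batch-averaged scores into a distribution over
    # experts and penalise low entropy (i.e. add -entropy to the training loss).
    usage = scores.mean(dim=0)
    usage = usage / usage.sum()
    entropy = -(usage * usage.clamp_min(1e-9).log()).sum()
    balance_loss = -entropy

    return topk_scores, topk_idx, balance_loss
```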
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28765/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28765", "html_url": "https://github.com/huggingface/transformers/pull/28765", "diff_url": "https://github.com/huggingface/transformers/pull/28765.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28765.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28764/comments
https://api.github.com/repos/huggingface/transformers/issues/28764/events
https://github.com/huggingface/transformers/pull/28764
2,106,228,159
PR_kwDOCUB6oc5lXY1T
28,764
Add tip on setting tokenizer attributes
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28764). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
MEMBER
null
This PR adds a quick tip to the chat template docs on setting tokenizer attributes (after some discussion on Slack)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28764/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28764", "html_url": "https://github.com/huggingface/transformers/pull/28764", "diff_url": "https://github.com/huggingface/transformers/pull/28764.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28764.patch", "merged_at": 1706798698000 }
https://api.github.com/repos/huggingface/transformers/issues/28763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28763/comments
https://api.github.com/repos/huggingface/transformers/issues/28763/events
https://github.com/huggingface/transformers/issues/28763
2,106,179,792
I_kwDOCUB6oc59icDQ
28,763
Allow setting different decoder_start_token_ids for each item in a batch in the generate function.
{ "login": "dpernes", "id": 25008929, "node_id": "MDQ6VXNlcjI1MDA4OTI5", "avatar_url": "https://avatars.githubusercontent.com/u/25008929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dpernes", "html_url": "https://github.com/dpernes", "followers_url": "https://api.github.com/users/dpernes/followers", "following_url": "https://api.github.com/users/dpernes/following{/other_user}", "gists_url": "https://api.github.com/users/dpernes/gists{/gist_id}", "starred_url": "https://api.github.com/users/dpernes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dpernes/subscriptions", "organizations_url": "https://api.github.com/users/dpernes/orgs", "repos_url": "https://api.github.com/users/dpernes/repos", "events_url": "https://api.github.com/users/dpernes/events{/privacy}", "received_events_url": "https://api.github.com/users/dpernes/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @zucchini-nlp", "@dpernes Hi, if you want to specify in different decoder_start_token_ids for each element, you can do it by passing a tensor of shape `(batch_size, seq_len)`. In your case adding this line before the `generate` is called will solve the issue:\r\n\r\n`decoder_start_token_id = decoder_start_token_id.unsqueeze(1) # shape (num_target_languages, 1)`", "Great, thank you @zucchini-nlp! This behavior is not documented, though:\r\n```\r\ndecoder_start_token_id (`int`, *optional*):\r\n If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token.\r\n```\r\n\r\nYou may want to change it to something like:\r\n```\r\ndecoder_start_token_id (`Union[int, torch.LongTensor]`, *optional*):\r\n If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token. Optionally, use a `torch.LongTensor` of shape `(batch_size, sequence_length)` to specify a prompt for the decoder.\r\n```\r\n\r\nBut why isn't this the same as passing `decoder_input_ids` to `generate`? I tried passing the same tensor as `decoder_input_ids` instead of `decoder_start_token_id` and the results do not match.", "Thanks, I added a PR extending the docs. \r\n\r\nRegarding your question, there is a subtle difference between them. The `decoder_start_token_id` is used as the very first token in generation, `BOS` token in most cases. But `decoder_input_ids` are used to start/continue the sentence from them. In most cases you do not provide `decoder_input_ids` yourself when calling `generate`, so they will be filled with `decoder_start_token_id` to start generation from `BOS`.\r\n\r\nThe general format is `[decoder_start_token_id, decoder_input_ids]` and the `generate` automatically fills in `decoder_start_token_id` from config if you do not provide them. ", "Hi,\r\nIs there any way to specify `decoder_start_token_id` during training as well?\r\nLike\r\n```\r\noutputs = model(\r\n input_ids=batch[\"input_ids\"],\r\n attention_mask=batch[\"attention_mask\"],\r\n labels=batch[\"labels\"],\r\n decoder_start_token_id=decoder_start_token_id,\r\n )\r\nloss = outputs.loss\r\n```\r\nEach batch may require a different decoder_start_token_id during training. This is because each batch has a specific input language and output language. Sometimes, the output language is <ENG> and some other times it is <FRE>.\r\nChanging `model.config.decoder_start_token_id` per each batch doesn't seem to be a good approach. Specifically, it seems it causes lots of inconsistency when using Accelerator with DeepSpeed.", "Hey @tehranixyz , you do not need to specify `decoder_start_token_ids` while training. All you need is to prepare the `decoder_input_ids` and pass it to the forward. We use the start token from model config only when we do not find `decoder_input_ids` from the user (see [code snippet](https://github.com/huggingface/transformers/blob/2f1003be86f11c8d97d7c2e6a7739dbb6fa795f2/src/transformers/models/mt5/modeling_mt5.py#L1760-L1762) for preparing decoder input ids from labels)", "Gotcha!\r\nI was a bit confused by the warning saying\r\n`The decoder_input_ids are now created based on the \"labels\", no need to pass them yourself anymore.` when using EncoderDecoderModel.\r\nSo in my case, I guess, as you said, I have to prepare `decoder_input_ids` myself by shifting labels and adding the appropriate `start_token` at the beginning. \r\nMany thanks!" ]
1,706
1,708
null
NONE
null
### Feature request @gante The `generate` function has a `decoder_start_token_id` argument that allows the specification of the decoder start token when generating from an encoder-decoder model (e.g. mT5). Currently, `decoder_start_token_id` must be an integer, which means that the same start token is used for all elements in the batch. I request that you allow the specification of different start tokens for each element of the batch. For this purpose, `decoder_start_token_id` must be a tensor with shape `(batch_size,)`. ### Motivation Some multilingual encoder-decoder models use the `decoder_start_token_id` to indicate the target language. Thus, this change would allow generation into multiple target languages in parallel, as illustrated in the code below. ### Your contribution ``` import re import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2m_crossSum_enhanced" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) get_lang_id = lambda lang: tokenizer._convert_token_to_id( model.config.task_specific_params["langid_map"][lang][1] ) target_langs = ["portuguese", "spanish"] input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] input_ids = input_ids.expand(len(target_langs), -1) # shape (num_target_languages, num_input_tokens) decoder_start_token_id = torch.tensor( [get_lang_id(t) for t in target_langs], dtype=input_ids.dtype, device=input_ids.device ) # shape (num_target_languages,) output_ids = model.generate( input_ids=input_ids, decoder_start_token_id=decoder_start_token_id, max_length=84, no_repeat_ngram_size=2, num_beams=4, ) summaries = tokenizer.batch_decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summaries) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28763/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28762/comments
https://api.github.com/repos/huggingface/transformers/issues/28762/events
https://github.com/huggingface/transformers/issues/28762
2,106,086,540
I_kwDOCUB6oc59iFSM
28,762
Update the default depth estimation model of the pipeline
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil @amyeroberts I would propose to update the default depth estimation model to https://huggingface.co/LiheYoung/depth-anything-base-hf. Any objections?\r\n\r\n", "In principle, I think this might be a good idea: people want to just get good predictions, and serving the \"best\" model by default is probably improving user experience. However, it does change things silently for the users. \r\n\r\nThere's a few questions this brings up: \r\n* Backwards compatibility - users who were calling the default pipeline will see their predictions change and possibly compatibility with the environment and hardware. If things suddenly stop working then it's a bad user experience. \r\n* What's the reason for not upgrading to different models for the other pipelines? I can see they're all using older / more basic models like vit and distilbert. Simplicity and size might be a reason for having a default model: a pipeline call anyone can run is better than one that needs e.g. a large machine. \r\n* What are the rules for updating models? Is anyone responsible? What are the criterea for a new model to become the default? ", "Afaik we don't have rules around changing default models.\r\n\r\nSo far, I think the main thing has been routing users to call pipeline on specific models rather than task themselves (so they are responsible for upgrading their model, and we are not so we don't have to worry about backward compatibility).\r\n\r\nI think the best course of action is to promote \r\n\r\n```python\r\npipeline(model=\"LiheYoung/depth-anything-base-hf\")\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\npipeline(\"depth-estimation\")\r\n```\r\n\r\nUpdating the docs (So the default model within the docs) would be non breaking and still be up to date for new users.", "cc @Rocketknight1 " ]
1,706
1,707
1,707
CONTRIBUTOR
null
### Feature request The current depth estimation pipeline leverages https://huggingface.co/Intel/dpt-large by default. However, with recent models like Depth Anything, it might make sense to update the default model. For reference, DPT-large (also called DPT 3.0) was released in 2021, we do have better models in 2024 :) ### Motivation Having better depth estimation models by default would be great. ### Your contribution I could contribute this, but let's perhaps first discuss
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28761/comments
https://api.github.com/repos/huggingface/transformers/issues/28761/events
https://github.com/huggingface/transformers/issues/28761
2,106,008,347
I_kwDOCUB6oc59hyMb
28,761
requests.exceptions.SSLError: HTTPSConnectionPool(host='api-inference.huggingface.co', port=443)
{ "login": "shoang22", "id": 54875725, "node_id": "MDQ6VXNlcjU0ODc1NzI1", "avatar_url": "https://avatars.githubusercontent.com/u/54875725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shoang22", "html_url": "https://github.com/shoang22", "followers_url": "https://api.github.com/users/shoang22/followers", "following_url": "https://api.github.com/users/shoang22/following{/other_user}", "gists_url": "https://api.github.com/users/shoang22/gists{/gist_id}", "starred_url": "https://api.github.com/users/shoang22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shoang22/subscriptions", "organizations_url": "https://api.github.com/users/shoang22/orgs", "repos_url": "https://api.github.com/users/shoang22/repos", "events_url": "https://api.github.com/users/shoang22/events{/privacy}", "received_events_url": "https://api.github.com/users/shoang22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed by removing os.environ[\"*_PROXY\"] lines and setting `verify=False` in `requests.post()`" ]
1,706
1,706
1,706
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.0-1103-aws-x86_64-with-debian-buster-sid - Python version: 3.7.16 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.4.0 but is ignored because of PyTorch version too old. - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @SunMarc ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running an example in the [documentation](https://huggingface.co/blog/getting-started-with-embeddings) produces the following error: ``` requests.exceptions.ProxyError: HTTPSConnectionPool(host='api-inference.huggingface.co', port=443): Max retries exceeded with url: /pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f7c78487cd0>: Failed to establish a new connection: [Errno 111] Connection refused'))) ``` I tried installing`requests==2.27.1` as described [here](https://github.com/huggingface/transformers/issues/17611), but I'm still getting the same error. Code to reproduce error: ``` import os import requests os.environ['CURL_CA_BUNDLE'] = '' os.environ['HTTP_PROXY'] = "http://127.0.0.1:7890" os.environ['HTTPS_PROXY'] = "http://127.0.0.1:7890" os.environ['ALL_PROXY'] = "socks5://127.0.0.1:7890" hf_token = os.environ["HUGGINGFACE_TOKEN"] model_id = "sentence-transformers/all-MiniLM-L6-v2" api_url = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}" headers = {"Authorization": f"Bearer {hf_token}"} def query(texts): response = requests.post(api_url, headers=headers, json={"inputs": texts, "options":{"wait_for_model":True}}) return response.json() texts = ["How do I get a replacement Medicare card?", "What is the monthly premium for Medicare Part B?", "How do I terminate my Medicare Part B (medical insurance)?", "How do I sign up for Medicare?", "Can I sign up for Medicare Part B if I am working and have health insurance through an employer?", "How do I sign up for Medicare Part B if I already have Part A?", "What are Medicare late enrollment penalties?", "What is Medicare and who can get it?", "How can I get help with my Medicare Part A and Part B premiums?", "What are the different parts of Medicare?", "Will my Medicare premiums be higher because of my higher income?", "What is TRICARE ?", "Should I sign up for Medicare Part B if I have Veterans' Benefits?"] output = query(texts) ``` ### Expected behavior Produce embeddings
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28761/timeline
completed
null
null