url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28660/comments | https://api.github.com/repos/huggingface/transformers/issues/28660/events | https://github.com/huggingface/transformers/pull/28660 | 2,095,691,451 | PR_kwDOCUB6oc5k0Nln | 28,660 | `tensor_size` - fix copy/paste error msg typo | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28660). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,706 | 1,706 | 1,706 | CONTRIBUTOR | null | ## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28660",
"html_url": "https://github.com/huggingface/transformers/pull/28660",
"diff_url": "https://github.com/huggingface/transformers/pull/28660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28660.patch",
"merged_at": 1706008922000
} |
https://api.github.com/repos/huggingface/transformers/issues/28659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28659/comments | https://api.github.com/repos/huggingface/transformers/issues/28659/events | https://github.com/huggingface/transformers/issues/28659 | 2,095,647,125 | I_kwDOCUB6oc586QmV | 28,659 | The newer tokenizer can not tokenize pad_token to pad_token_id | {
"login": "Magicalyz",
"id": 56778660,
"node_id": "MDQ6VXNlcjU2Nzc4NjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/56778660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Magicalyz",
"html_url": "https://github.com/Magicalyz",
"followers_url": "https://api.github.com/users/Magicalyz/followers",
"following_url": "https://api.github.com/users/Magicalyz/following{/other_user}",
"gists_url": "https://api.github.com/users/Magicalyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Magicalyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Magicalyz/subscriptions",
"organizations_url": "https://api.github.com/users/Magicalyz/orgs",
"repos_url": "https://api.github.com/users/Magicalyz/repos",
"events_url": "https://api.github.com/users/Magicalyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Magicalyz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hello! Thanks for raising this issue! This is using a custom code, I would recommend you to open this issue on the repo's discussion tab! 🤗 \r\nThere were a few changes, but I think the tokenizer's remote code is the issue here. With LlamaTokenizer there is no such issue! (The token is normalized) "
] | 1,706 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.36.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu122 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Recently I upgraded `transformers` from 4.31.0 to 4.36.0, and found that the newer tokenizer can no longer tokenize pad_token to pad_token_id, due to this code: https://github.com/huggingface/transformers/blob/039866094cb1c72f224049d4d006154ad0d6eda7/src/transformers/tokenization_utils.py#L600
```python
if tok_extended.single_word and left and left[-1] != " ":
tokens[i - 1] += token
tokens[i] = ""
elif tok_extended.single_word and right and right[0] != " ":
tokens[i + 1] = token + tokens[i + 1]
tokens[i] = ""
```
Here is my test code:
```python
tokenizer = AutoTokenizer.from_pretrained(
"baichuan-inc/Baichuan-13B-Chat", trust_remote_code=True
)
print(tokenizer("<s>This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1557, 31114, 31219, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 1170, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1]}
print(tokenizer("<s> This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
I'm wondering why there must be blank spaces around special tokens. Are there any rules to follow when adding pad_token to the text?
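In the meantime, one possible workaround is to avoid embedding the special token in the text and to build the ids explicitly instead (a minimal, untested sketch that only assumes standard tokenizer attributes such as `bos_token_id`):
```python
# Encode the plain text without special tokens, then prepend the BOS id manually,
# so the result does not depend on how in-text special tokens are split.
text = "This is a test sentence"
ids = [tokenizer.bos_token_id] + tokenizer(text, add_special_tokens=False)["input_ids"]
attention_mask = [1] * len(ids)
```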
Thanks for your help!
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
tokenizer = AutoTokenizer.from_pretrained(
"baichuan-inc/Baichuan-13B-Chat", trust_remote_code=True
)
print(tokenizer("<s>This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1557, 31114, 31219, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 1170, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1]}
print(tokenizer("<s> This is a test sentence"))
#output(transformers==4.36.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
#output(transformers==4.31.0): {'input_ids': [1, 99, 1523, 715, 650, 1801, 10583], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
### Expected behavior
tokenize pad_token to pad_token_id | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28659/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28658/comments | https://api.github.com/repos/huggingface/transformers/issues/28658/events | https://github.com/huggingface/transformers/issues/28658 | 2,095,474,798 | I_kwDOCUB6oc585mhu | 28,658 | OSError: ../../../../models/Yi-VL-34B/ does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/../../../../models/Yi-VL-34B//main' for available files. | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This is because of new transformer version",
"Hi @lucasjinreal, thanks for raising an issue! \r\n\r\nWithout knowing how or where the error is triggered, or the full error message there's not much we can do. So that we can be best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet we can run to reproduce the error \r\n* All relevant details about the error including the full error traceback",
"@amyeroberts this mode can not using AWQ to load for quantization: https://huggingface.co/01-ai/Yi-VL-34B",
"Hi @lucasjinreal, as noted in my previous comment, we cannot help you without knowing what code your running and on which versions of important libraries. We get many requests per day - in order for us to be able to respond in a timely manner it's necessary for you to help us. Please provide the information requested:\r\n\r\n* The running environment: run transformers-cli env in the terminal and copy-paste the output\r\n* A minimal code snippet we can run to reproduce the error\r\n* All relevant details about the error including the full error traceback",
"@amyeroberts Above is the model, this is the code:\r\n\r\n```python\r\nimport argparse\r\nimport logging\r\n\r\nfrom awq import AutoAWQForCausalLM\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\ndef run_quantization(args):\r\n # Load model\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n args.model, trust_remote_code=True\r\n )\r\n model = AutoAWQForCausalLM.from_pretrained(args.model)\r\n\r\n quant_config = {\r\n \"zero_point\": True,\r\n \"q_group_size\": args.group_size,\r\n \"w_bit\": args.bits,\r\n }\r\n # Quantize\r\n model.quantize(tokenizer, quant_config=quant_config)\r\n\r\n # Save quantized model\r\n model.save_quantized(args.output_dir, safetensors=True)\r\n tokenizer.save_pretrained(args.output_dir)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n logger = logging.getLogger()\r\n logging.basicConfig(\r\n format=\"%(asctime)s %(levelname)s [%(name)s] %(message)s\",\r\n level=logging.INFO,\r\n datefmt=\"%Y-%m-%d %H:%M:%S\",\r\n )\r\n\r\n parser = argparse.ArgumentParser(description=\"AutoAWQ quantize\")\r\n parser.add_argument(\r\n \"--model\",\r\n type=str,\r\n default=\"01-ai/Yi-6b\",\r\n help=\"Pretrained model path locally or name on huggingface\",\r\n )\r\n parser.add_argument(\"--output_dir\", type=str, help=\"Output base folder\")\r\n parser.add_argument(\r\n \"--trust_remote_code\", action=\"store_true\", help=\"Trust remote code\"\r\n )\r\n parser.add_argument(\"--bits\", type=int, default=4, help=\"Quantize bit(s)\")\r\n parser.add_argument(\r\n \"--group_size\", type=int, default=128, help=\"Quantize group size(s)\"\r\n )\r\n\r\n args = parser.parse_args()\r\n run_quantization(args)\r\n\r\n```\r\n\r\nthe transformers version is 4.35.1",
"@lucasjinreal Can you provide the full error traceback please?"
] | 1,705 | 1,706 | null | NONE | null | OSError: ../../../../models/Yi-VL-34B/ does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/../../../../models/Yi-VL-34B//main' for available files. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28658/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28657/comments | https://api.github.com/repos/huggingface/transformers/issues/28657/events | https://github.com/huggingface/transformers/pull/28657 | 2,095,315,464 | PR_kwDOCUB6oc5ky8YW | 28,657 | Add token_type_ids to Esm tokenizer | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @Rocketknight1 ",
"Hi @lhallee - I appreciate the work but I'm not sure we can accept this! ESM was not trained with `token_type_ids` and will not accept them by default, so even if we get the tokenizer to return them then the ESM model code will also have to be altered. In general at Hugging Face we aim to reproduce models precisely, and adding an extra input that the original paper/checkpoints/etc. do not support and do not have weights for doesn't really fit that philosophy.\r\n\r\nI completely agree that there are valid use-cases for adding `token_type_ids` to protein models, though, but I think the right place to do that is with a custom code model, rather than by modifying the built-in ESM class. There's a guide [here](https://huggingface.co/docs/transformers/main/en/custom_models) - you can use the ESM code as a base and make the changes you want to add token type IDs, and then train and share the model as a custom code checkpoint. This is the approach used by e.g. the [Nucleotide Transformer v2 models](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-50m-multi-species).",
"Could we just set return token type ids as false by default? It won't break the code, return token token ids is already an option for the ESM tokenizer because it inherits the base, but it's just currently wrong if you do use it.",
"Here's the modified version (using the same suggested code) not returning any token_type_ids when not prompted to do so.\r\n\r\n```\r\ntokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')\r\ntokens = tokenizer(seq, seq)\r\n{'input_ids': [0, 14, 10, 28, 11, 9, 12, 17, 2, 14, 10, 28, 11, 9, 12, 17, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n```\r\n\r\nwhile being correct when needed\r\n\r\n```\r\ntokens = tokenizer(seq, seq, return_token_type_ids=True)\r\n[print(len(v)) for k, v in tokens.items()]\r\n17\r\n17\r\n17\r\n{'input_ids': [0, 14, 10, 28, 11, 9, 12, 17, 2, 14, 10, 28, 11, 9, 12, 17, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n\r\n```",
"Hi @lhallee - I understand that it won't break the code, but we still prefer to stick to the principle of faithfully reproducing the original model. Unfortunately, we can't accept this one!\r\n\r\nOn a personal level, though, I'm definitely interested to see if people can add auxiliary information to DNA/protein models using inputs like `token_type_ids` - if you'd like any help or advice with making a custom code model, I'm happy to offer advice or support!",
"Hi @Rocketknight1 - I don't really understand the logic as one can call return_token_type_ids=True in the current EsmTokenizer and it will output token_type_ids, just wrong ones that ignore special tokens. So, seems like there should either be a warning \"Hey, Esm doesn't use these just so you know\" or removal of this capability if the EsmTokenizer is going to be completely \"faithful.\"\r\n\r\nHowever, I respect your decision and commitment to faithful representations. Thanks!"
] | 1,705 | 1,706 | null | NONE | null | # What does this PR do?
Enables EsmTokenizer to correctly return token_type_ids. Previously, special tokens were ignored. create_token_type_ids_from_sequences was adapted from BertTokenizer, using eos instead of sep to match Esm's special tokens.
This
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 14
```
To this
`len(tokens.input_ids), len(tokens.token_type_ids) # 17, 17 `
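For reference, a sketch of what the adapted method can look like (illustrative only, mirroring the Bert implementation with `eos` in place of `sep`; the actual diff may differ):
```python
def create_token_type_ids_from_sequences(self, token_ids_0, token_ids_1=None):
    # Single sequence: <cls> A <eos> -> all zeros.
    cls = [self.cls_token_id]
    eos = [self.eos_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + eos) * [0]
    # Pair: <cls> A <eos> B <eos> -> zeros for the first segment, ones for the second.
    return len(cls + token_ids_0 + eos) * [0] + len(token_ids_1 + eos) * [1]
```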
Fixes [#28656](https://github.com/huggingface/transformers/issues/28656)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28657/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28657",
"html_url": "https://github.com/huggingface/transformers/pull/28657",
"diff_url": "https://github.com/huggingface/transformers/pull/28657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28657.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28656/comments | https://api.github.com/repos/huggingface/transformers/issues/28656/events | https://github.com/huggingface/transformers/issues/28656 | 2,095,309,779 | I_kwDOCUB6oc584-PT | 28,656 | EsmTokenizer does not return correct length token type ids | {
"login": "lhallee",
"id": 72926928,
"node_id": "MDQ6VXNlcjcyOTI2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhallee",
"html_url": "https://github.com/lhallee",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhallee/subscriptions",
"organizations_url": "https://api.github.com/users/lhallee/orgs",
"repos_url": "https://api.github.com/users/lhallee/repos",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhallee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Suggested changes that fix this in Pull request [#28657](https://github.com/huggingface/transformers/pull/28657)",
"cc @Rocketknight1 "
] | 1,705 | 1,706 | null | NONE | null | ### System Info
transformers 4.37
python 3.10.11
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
EsmTokenizer ignores special tokens when generating token_type_ids:
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 14
```
The result is the same for return_special_tokens=True or False.
I understand Esm does not natively use token_type_ids, but some personal versions and upcoming contributions to the field do, and they could benefit from an EsmTokenizer that returns correct token_type_ids.
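Until this is addressed, a user-side workaround is to assemble the pair and its token_type_ids manually (an untested sketch that assumes the `<cls> A <eos> B <eos>` layout and the `eos_token_id` attribute):
```python
# Build the pair by hand so the token_type_ids line up with the special tokens.
first = tokenizer(seq, add_special_tokens=True)["input_ids"]   # <cls> + seq + <eos>
second = tokenizer(seq, add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]
input_ids = first + second
token_type_ids = [0] * len(first) + [1] * len(second)  # 17 and 17 for the example above
```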
### Expected behavior
```
tokenizer = EsmTokenizer.from_pretrained('facebook/esm2_t6_8M_UR50D')
seq = 'PROTEIN'
len(seq) # 7
tokens = tokenizer(seq, seq, return_token_type_ids=True)
len(tokens.input_ids), len(tokens.token_type_ids) # 17, 17
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28656/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28655/comments | https://api.github.com/repos/huggingface/transformers/issues/28655/events | https://github.com/huggingface/transformers/pull/28655 | 2,094,838,316 | PR_kwDOCUB6oc5kxVh7 | 28,655 | Bump pillow from 10.0.1 to 10.2.0 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | closed | false | null | [] | [
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 10.x.x again, unless you re-open this PR."
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.1 to 10.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/releases">pillow's releases</a>.</em></p>
<blockquote>
<h2>10.2.0</h2>
<p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html">https://pillow.readthedocs.io/en/stable/releasenotes/10.2.0.html</a></p>
<h2>Changes</h2>
<ul>
<li>Add <code>keep_rgb</code> option when saving JPEG to prevent conversion of RGB colorspace <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a> [<a href="https://github.com/bgilbert"><code>@bgilbert</code></a>]</li>
<li>Trim negative glyph offsets in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Removed unnecessary "pragma: no cover" <a href="https://redirect.github.com/python-pillow/Pillow/issues/7668">#7668</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Trim glyph size in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7669">#7669</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fix loading IPTC images and update test <a href="https://redirect.github.com/python-pillow/Pillow/issues/7667">#7667</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Allow uncompressed TIFF images to be saved in chunks <a href="https://redirect.github.com/python-pillow/Pillow/issues/7650">#7650</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Concatenate multiple JPEG EXIF markers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7496">#7496</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Changed IPTC tile tuple to match other plugins <a href="https://redirect.github.com/python-pillow/Pillow/issues/7661">#7661</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Do not assign new fp attribute when exiting context manager <a href="https://redirect.github.com/python-pillow/Pillow/issues/7566">#7566</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support arbitrary masks for uncompressed RGB DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7589">#7589</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support setting ROWSPERSTRIP tag <a href="https://redirect.github.com/python-pillow/Pillow/issues/7654">#7654</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Apply ImageFont.MAX_STRING_LENGTH to ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7662">#7662</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Optimise <code>ImageColor</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7657">#7657</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Restricted environment keys for ImageMath.eval() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7655">#7655</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Optimise <code>ImageMode.getmode</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7641">#7641</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Added trusted PyPI publishing <a href="https://redirect.github.com/python-pillow/Pillow/issues/7616">#7616</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Compile FriBiDi for Windows ARM64 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7629">#7629</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Fix incorrect color blending for overlapping glyphs <a href="https://redirect.github.com/python-pillow/Pillow/issues/7497">#7497</a> [<a href="https://github.com/ZachNagengast"><code>@ZachNagengast</code></a>]</li>
<li>Add .git-blame-ignore-revs file <a href="https://redirect.github.com/python-pillow/Pillow/issues/7528">#7528</a> [<a href="https://github.com/akx"><code>@akx</code></a>]</li>
<li>Attempt memory mapping when tile args is a string <a href="https://redirect.github.com/python-pillow/Pillow/issues/7565">#7565</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fill identical pixels with transparency in subsequent frames when saving GIF <a href="https://redirect.github.com/python-pillow/Pillow/issues/7568">#7568</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Removed unnecessary string length check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7560">#7560</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Determine mask mode in Python instead of C <a href="https://redirect.github.com/python-pillow/Pillow/issues/7548">#7548</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Corrected duration when combining multiple GIF frames into single frame <a href="https://redirect.github.com/python-pillow/Pillow/issues/7521">#7521</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Handle disposing GIF background from outside palette <a href="https://redirect.github.com/python-pillow/Pillow/issues/7515">#7515</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Seek past the data when skipping a PSD layer <a href="https://redirect.github.com/python-pillow/Pillow/issues/7483">#7483</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>ImageMath: Inline <code>isinstance</code> check <a href="https://redirect.github.com/python-pillow/Pillow/issues/7623">#7623</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Update actions/upload-artifact action to v4 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7619">#7619</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Import plugins relative to the module <a href="https://redirect.github.com/python-pillow/Pillow/issues/7576">#7576</a> [<a href="https://github.com/deliangyang"><code>@deliangyang</code></a>]</li>
<li>Translate encoder error codes to strings; deprecate <code>ImageFile.raise_oserror()</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7609">#7609</a> [<a href="https://github.com/bgilbert"><code>@bgilbert</code></a>]</li>
<li>Updated readthedocs to latest version of Python <a href="https://redirect.github.com/python-pillow/Pillow/issues/7611">#7611</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Support reading BC4U and DX10 BC1 images <a href="https://redirect.github.com/python-pillow/Pillow/issues/6486">#6486</a> [<a href="https://github.com/REDxEYE"><code>@REDxEYE</code></a>]</li>
<li>Optimize ImageStat.Stat.extrema <a href="https://redirect.github.com/python-pillow/Pillow/issues/7593">#7593</a> [<a href="https://github.com/florath"><code>@florath</code></a>]</li>
<li>Handle pathlib.Path in FreeTypeFont <a href="https://redirect.github.com/python-pillow/Pillow/issues/7578">#7578</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Use list comprehensions to create transformed lists <a href="https://redirect.github.com/python-pillow/Pillow/issues/7597">#7597</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
<li>Added support for reading DX10 BC4 DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7603">#7603</a> [<a href="https://github.com/sambvfx"><code>@sambvfx</code></a>]</li>
<li>Optimized ImageStat.Stat.count <a href="https://redirect.github.com/python-pillow/Pillow/issues/7599">#7599</a> [<a href="https://github.com/florath"><code>@florath</code></a>]</li>
<li>Moved error from truetype() to FreeTypeFont <a href="https://redirect.github.com/python-pillow/Pillow/issues/7587">#7587</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Correct PDF palette size when saving <a href="https://redirect.github.com/python-pillow/Pillow/issues/7555">#7555</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>Fixed closing file pointer with olefile 0.47 <a href="https://redirect.github.com/python-pillow/Pillow/issues/7594">#7594</a> [<a href="https://github.com/radarhere"><code>@radarhere</code></a>]</li>
<li>ruff: Minor optimizations of list comprehensions, x in set, etc. <a href="https://redirect.github.com/python-pillow/Pillow/issues/7524">#7524</a> [<a href="https://github.com/cclauss"><code>@cclauss</code></a>]</li>
<li>Build Windows wheels using cibuildwheel <a href="https://redirect.github.com/python-pillow/Pillow/issues/7580">#7580</a> [<a href="https://github.com/nulano"><code>@nulano</code></a>]</li>
<li>Raise ValueError when TrueType font size is zero or less <a href="https://redirect.github.com/python-pillow/Pillow/issues/7584">#7584</a> [<a href="https://github.com/akx"><code>@akx</code></a>]</li>
<li>Install cibuildwheel from requirements file <a href="https://redirect.github.com/python-pillow/Pillow/issues/7581">#7581</a> [<a href="https://github.com/hugovk"><code>@hugovk</code></a>]</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst">pillow's changelog</a>.</em></p>
<blockquote>
<h2>10.2.0 (2024-01-02)</h2>
<ul>
<li>
<p>Add <code>keep_rgb</code> option when saving JPEG to prevent conversion of RGB colorspace <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a>
[bgilbert, radarhere]</p>
</li>
<li>
<p>Trim glyph size in ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7669">#7669</a>, <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a>
[radarhere, nulano]</p>
</li>
<li>
<p>Deprecate IptcImagePlugin helpers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7664">#7664</a>
[nulano, hugovk, radarhere]</p>
</li>
<li>
<p>Allow uncompressed TIFF images to be saved in chunks <a href="https://redirect.github.com/python-pillow/Pillow/issues/7650">#7650</a>
[radarhere]</p>
</li>
<li>
<p>Concatenate multiple JPEG EXIF markers <a href="https://redirect.github.com/python-pillow/Pillow/issues/7496">#7496</a>
[radarhere]</p>
</li>
<li>
<p>Changed IPTC tile tuple to match other plugins <a href="https://redirect.github.com/python-pillow/Pillow/issues/7661">#7661</a>
[radarhere]</p>
</li>
<li>
<p>Do not assign new fp attribute when exiting context manager <a href="https://redirect.github.com/python-pillow/Pillow/issues/7566">#7566</a>
[radarhere]</p>
</li>
<li>
<p>Support arbitrary masks for uncompressed RGB DDS images <a href="https://redirect.github.com/python-pillow/Pillow/issues/7589">#7589</a>
[radarhere, akx]</p>
</li>
<li>
<p>Support setting ROWSPERSTRIP tag <a href="https://redirect.github.com/python-pillow/Pillow/issues/7654">#7654</a>
[radarhere]</p>
</li>
<li>
<p>Apply ImageFont.MAX_STRING_LENGTH to ImageFont.getmask() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7662">#7662</a>
[radarhere]</p>
</li>
<li>
<p>Optimise <code>ImageColor</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7657">#7657</a>
[hugovk]</p>
</li>
<li>
<p>Restricted environment keys for ImageMath.eval() <a href="https://redirect.github.com/python-pillow/Pillow/issues/7655">#7655</a>
[wiredfool, radarhere]</p>
</li>
<li>
<p>Optimise <code>ImageMode.getmode</code> using <code>functools.lru_cache</code> <a href="https://redirect.github.com/python-pillow/Pillow/issues/7641">#7641</a>
[hugovk, radarhere]</p>
</li>
<li>
<p>Fix incorrect color blending for overlapping glyphs <a href="https://redirect.github.com/python-pillow/Pillow/issues/7497">#7497</a>
[ZachNagengast, nulano, radarhere]</p>
</li>
<li>
<p>Attempt memory mapping when tile args is a string <a href="https://redirect.github.com/python-pillow/Pillow/issues/7565">#7565</a>
[radarhere]</p>
</li>
<li>
<p>Fill identical pixels with transparency in subsequent frames when saving GIF <a href="https://redirect.github.com/python-pillow/Pillow/issues/7568">#7568</a>
[radarhere]</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/python-pillow/Pillow/commit/6956d0b2853f5c7ec5f6ec4c60725c5a7ee73aeb"><code>6956d0b</code></a> 10.2.0 version bump</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/31c8dacdc727673e9099f1ac86019714cdccec67"><code>31c8dac</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7675">#7675</a> from python-pillow/pre-commit-ci-update-config</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/40a3f91af2c78870676a13629b5902bab4ab4cf0"><code>40a3f91</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7674">#7674</a> from nulano/url-example</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/cb41b0cc78eeefbd9ed2ce8c10f8d6d4c405a706"><code>cb41b0c</code></a> [pre-commit.ci] pre-commit autoupdate</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/de62b25ed318f1604aa4ccd6f942a04c6b2c8b59"><code>de62b25</code></a> fix image url in "Reading from URL" example</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/7c526a6c6bdc7cb947f0aee1d1ee17c266ff6c61"><code>7c526a6</code></a> Update CHANGES.rst [ci skip]</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/d93a5ad70bf94dbb63bdbfb19491a02976574d6d"><code>d93a5ad</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7553">#7553</a> from bgilbert/jpeg-rgb</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/aed764fe8404926472499208a39e5bf90d861b2a"><code>aed764f</code></a> Update CHANGES.rst [ci skip]</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/f8df5303fa9daf40cf8bfe232403cb40389d8f8f"><code>f8df530</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7672">#7672</a> from nulano/imagefont-negative-crop</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/24e9485e6bb733a1a816f228dc75fd0086a93e19"><code>24e9485</code></a> Merge pull request <a href="https://redirect.github.com/python-pillow/Pillow/issues/7671">#7671</a> from radarhere/imagetransform</li>
<li>Additional commits viewable in <a href="https://github.com/python-pillow/Pillow/compare/10.0.1...10.2.0">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pillow&package-manager=pip&previous-version=10.0.1&new-version=10.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28655",
"html_url": "https://github.com/huggingface/transformers/pull/28655",
"diff_url": "https://github.com/huggingface/transformers/pull/28655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28655.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28654/comments | https://api.github.com/repos/huggingface/transformers/issues/28654/events | https://github.com/huggingface/transformers/pull/28654 | 2,094,764,082 | PR_kwDOCUB6oc5kxFIV | 28,654 | Add Depth Anything | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tests_pr_documentation_tests is failing due to the checkpoint not yet being available on the TikTok organization.",
"@amyeroberts thanks for your review, addressed all comments. CI is failing due to unrelated tests"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
This PR adds an alternative design to #28643, which adds a standalone separate model.
Pros:
- [x] does not clutter the existing modeling_dpt.py
- [x] is more in line with the [philosophy](https://huggingface.co/blog/transformers-design-philosophy)
Cons:
- [x] actually, not a lot :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28654",
"html_url": "https://github.com/huggingface/transformers/pull/28654",
"diff_url": "https://github.com/huggingface/transformers/pull/28654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28654.patch",
"merged_at": 1706171690000
} |
https://api.github.com/repos/huggingface/transformers/issues/28653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28653/comments | https://api.github.com/repos/huggingface/transformers/issues/28653/events | https://github.com/huggingface/transformers/pull/28653 | 2,094,695,897 | PR_kwDOCUB6oc5kw2Si | 28,653 | integrations: fix DVCLiveCallback model logging | {
"login": "dberenbaum",
"id": 2308172,
"node_id": "MDQ6VXNlcjIzMDgxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dberenbaum",
"html_url": "https://github.com/dberenbaum",
"followers_url": "https://api.github.com/users/dberenbaum/followers",
"following_url": "https://api.github.com/users/dberenbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions",
"organizations_url": "https://api.github.com/users/dberenbaum/orgs",
"repos_url": "https://api.github.com/users/dberenbaum/repos",
"events_url": "https://api.github.com/users/dberenbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/dberenbaum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Fixes issues with the `HF_DVCLIVE_LOG_MODEL` environment variable not always being respected.
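For context, the switch is typically set before the `Trainer` starts (a minimal sketch; the exact accepted values, e.g. `"true"` vs `"all"`, depend on the integration and dvclive version):
```python
import os

# Opt in to checkpoint/model logging for the DVCLive callback.
os.environ["HF_DVCLIVE_LOG_MODEL"] = "true"

# The callback is then enabled e.g. via TrainingArguments(..., report_to="dvclive").
```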
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr Could you please take a look when you have a chance? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28653",
"html_url": "https://github.com/huggingface/transformers/pull/28653",
"diff_url": "https://github.com/huggingface/transformers/pull/28653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28653.patch",
"merged_at": 1706001070000
} |
https://api.github.com/repos/huggingface/transformers/issues/28652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28652/comments | https://api.github.com/repos/huggingface/transformers/issues/28652/events | https://github.com/huggingface/transformers/pull/28652 | 2,094,694,394 | PR_kwDOCUB6oc5kw19J | 28,652 | [WIP] VMamba implementation | {
"login": "dmus",
"id": 464378,
"node_id": "MDQ6VXNlcjQ2NDM3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/464378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmus",
"html_url": "https://github.com/dmus",
"followers_url": "https://api.github.com/users/dmus/followers",
"following_url": "https://api.github.com/users/dmus/following{/other_user}",
"gists_url": "https://api.github.com/users/dmus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmus/subscriptions",
"organizations_url": "https://api.github.com/users/dmus/orgs",
"repos_url": "https://api.github.com/users/dmus/repos",
"events_url": "https://api.github.com/users/dmus/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"@dmus Very exciting! Let us know when the PR is ready for review. \r\n\r\ncc @ArthurZucker as I believe there's an on-going mamba implementation that we might want to coordinate with here ",
"I don't have bandwidth yet so nice if you want to do ti! ",
"I have a question about this test in test_modeling_common.py\r\n```python\r\ndef test_forward_signature(self):\r\n config, _ = self.model_tester.prepare_config_and_inputs_for_common()\r\n\r\n for model_class in self.all_model_classes:\r\n model = model_class(config)\r\n signature = inspect.signature(model.forward)\r\n # signature.parameters is an OrderedDict => so arg_names order is deterministic\r\n arg_names = [*signature.parameters.keys()]\r\n\r\n if model.config.is_encoder_decoder:\r\n expected_arg_names = [\r\n \"input_ids\",\r\n \"attention_mask\",\r\n \"decoder_input_ids\",\r\n \"decoder_attention_mask\",\r\n ]\r\n expected_arg_names.extend(\r\n [\"head_mask\", \"decoder_head_mask\", \"cross_attn_head_mask\", \"encoder_outputs\"]\r\n if \"head_mask\" and \"decoder_head_mask\" and \"cross_attn_head_mask\" in arg_names\r\n else [\"encoder_outputs\"]\r\n )\r\n self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)\r\n elif model_class.__name__ in [*get_values(MODEL_FOR_BACKBONE_MAPPING_NAMES)] and self.has_attentions:\r\n expected_arg_names = [\"pixel_values\", \"output_hidden_states\", \"output_attentions\", \"return_dict\"]\r\n self.assertListEqual(arg_names, expected_arg_names)\r\n elif model_class.__name__ in [*get_values(MODEL_FOR_BACKBONE_MAPPING_NAMES)] and not self.has_attentions:\r\n expected_arg_names = [\"pixel_values\", \"output_hidden_states\", \"return_dict\"]\r\n self.assertListEqual(arg_names, expected_arg_names)\r\n else:\r\n expected_arg_names = [model.main_input_name]\r\n self.assertListEqual(arg_names[:1], expected_arg_names)\r\n```\r\n\r\nit fails because model.main_input_name is equal to 'input_ids'. I don't know why this is the case and where I can set this model.main_input_name. This is a pure vision model, no NLP",
"@dmus For certain tests like these, where the default implementation doesn't apply, we override the test in the model's testing module e.g. [like here for DETR](https://github.com/huggingface/transformers/blob/0548af54ccc81d31e6264bb394e74c89c477518d/tests/models/detr/test_modeling_detr.py#L420).",
"And for my understanding, is it expected that the forward method of the `VMambaForImageClassification` returns a ImageClassifierOutputWithNoAttention object? \r\n\r\nAnd the VMamba model should return a `BaseModelOutput`?",
"> And for my understanding, is it expected that the forward method of the `VMambaForImageClassification` returns a ImageClassifierOutputWithNoAttention object?\r\n> \r\n> And the VMamba model should return a `BaseModelOutput`?\r\n\r\nYep! ",
"Two tests that are still failing are the `VMambaModelTest::test_torch_fx` and `VMambaModelTest::test_torch_fx_output_loss` because AssertionError: Couldn't trace module: Model VMambaModel is not supported yet, supported models: AlbertForMaskedLM, AlbertForMultipleChoice, AlbertForPreTraining, AlbertForQuestionAnswering, AlbertForSequenceClassificat...\r\n\r\nI am not sure what to do here, suggestions?",
"Also the `test_initialization` is failing. This check that fails is\r\n\r\nif param.requires_grad:\r\n self.assertIn(\r\n ((param.data.mean() * 1e9).round() / 1e9).item(),\r\n [0.0, 1.0],\r\n msg=f\"Parameter {name} of model {model_class} seems not properly initialized\",\r\n )\r\n\r\nit fails with\r\n\r\nAssertionError: 0.0031509040854871273 not found in [0.0, 1.0] : Parameter patch_embed.proj.weight of model <class 'transformers.models.vmamba.modeling_vmamba.VMambaModel'> seems not properly initialized\r\n\r\nI am not sure what is exactly expected here. Also if I copy the _init_weights from modeling_vit.py the test fails.",
"> Two tests that are still failing are the `VMambaModelTest::test_torch_fx` and `VMambaModelTest::test_torch_fx_output_loss` because AssertionError: Couldn't trace module: Model VMambaModel is not supported yet, supported models: AlbertForMaskedLM, AlbertForMultipleChoice, AlbertForPreTraining, AlbertForQuestionAnswering, AlbertForSequenceClassificat...\r\n> \r\n> I am not sure what to do here, suggestions?\r\n\r\nIn the testing suite, you can force fx test not to run by setting `fx_compatible = False` e.g. [like here](https://github.com/huggingface/transformers/blob/95346e9dcd2724ba8203c61759907fb3a8b737cb/tests/models/efficientnet/test_modeling_efficientnet.py#L138) ",
"> Also the `test_initialization` is failing. This check that fails is\r\n> \r\n> if param.requires_grad: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f\"Parameter {name} of model {model_class} seems not properly initialized\", )\r\n> \r\n> it fails with\r\n> \r\n> AssertionError: 0.0031509040854871273 not found in [0.0, 1.0] : Parameter patch_embed.proj.weight of model <class 'transformers.models.vmamba.modeling_vmamba.VMambaModel'> seems not properly initialized\r\n> \r\n> I am not sure what is exactly expected here. Also if I copy the _init_weights from modeling_vit.py the test fails.\r\n\r\nThe initialization of the weights should follow what's been done for the original model implementation, not other models in the library e.g. vit. \r\n\r\nIf the weight initialization in `_init_weights` is different from the default behaviour, then you need to override the test in `test_modeling_vmamba.py` e.g. [like here for BLIP](https://github.com/huggingface/transformers/blob/95346e9dcd2724ba8203c61759907fb3a8b737cb/tests/models/blip/test_modeling_blip.py#L921)\r\n",
"All test are passing now locally. (I still use a local checkpoint because the vmamba checkpoint is not uploaded to the hub yet). So I think this PR is ready for review.\r\n\r\nI did implement the VMambaForImageClassification.\r\n\r\nNot done yet is implementing the\r\nVMambaForSegmentation and VMambaForObjectDetection because that requires a bit more work, but that could follow a similar design as the VMambaForImageClassification",
"@dmus Great - glad to hear it's in a good stage! \r\n\r\nRegarding the other classes e.g. `VMambaForObjectDetection`, they can always be added in follow-up PRs. \r\n\r\nFor the tests: \r\n* The tests relating to `natten` aren't releated to this PR, and was an issue we encountered with new releases and library compatibility. A fix has been merged into main. Rebasing to include these commits should resolve those. \r\n* Make sure to read the errors printed out in the CI runs. For some of the tests, e.g. `check_repo`, it's telling you that the class needs to be added to the public init. The other faliures e.g. ` No module named 'torch'` are happening because the vmamba classes don't have the import protections in `src/transformers/models/vmamba/__init__.py` for when packages like `torch` or `timm` aren't available. You can look at other model PRs e.g. [like this one for Swin](https://github.com/huggingface/transformers/pull/15085/files) which shows you all the places you need to modify in the codebase to fully add a model. ",
"For this error: \r\n\r\n> Checking all objects are properly documented.\r\n> Traceback (most recent call last):\r\n> File \"/home/derk/transformers/utils/check_repo.py\", line 1181, in <module>\r\n> check_repo_quality()\r\n> File \"/home/derk/transformers/utils/check_repo.py\", line 1165, in check_repo_quality\r\n> check_all_objects_are_documented()\r\n> File \"/home/derk/transformers/utils/check_repo.py\", line 1047, in check_all_objects_are_documented\r\n> raise Exception(\r\n> Exception: The following objects are in the public init so should be documented:\r\n> - VMambaForImageClassification\r\n\r\nWhat exactly should be documented? It looks like the VMambaForImageClassification in modeling_vmamba.py is documented",
"It should be listed under docs/source/en/model_doc/vmamba.md",
"Thanks. Now I encounter this error when running python utils/check_inits.py:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/derk/transformers/utils/check_inits.py\", line 370, in <module>\r\n check_all_inits()\r\n File \"/home/derk/transformers/utils/check_inits.py\", line 298, in check_all_inits\r\n raise ValueError(\"\\n\\n\".join(failures))\r\nValueError: Problem in src/transformers/models/vmamba/__init__.py, both halves do not define the same objects.\r\nDifferences for base imports:\r\n from .modeling_vmamba import in TYPE_HINT but not in _import_structure.\r\n VMAMBA_PRETRAINED_MODEL_ARCHIVE_LIST in TYPE_HINT but not in _import_structure.\r\n VMambaForImageClassification in TYPE_HINT but not in _import_structure.\r\n VMambaModel in TYPE_HINT but not in _import_structure.\r\n VMambaPreTrainedModel in TYPE_HINT but not in _import_structure.\r\n in TYPE_HINT but not in _import_structure.\r\n VMAMBA_PRETRAINED_MODEL_ARCHIVE_LIST in _import_structure but not in TYPE_HINT.\r\n VMambaForImageClassification in _import_structure but not in TYPE_HINT.\r\n VMambaModel in _import_structure but not in TYPE_HINT.\r\n VMambaPreTrainedModel in _import_structure but not in TYPE_HINT.\r\n\r\nwhat is TYPE_HINT and where should I fix this?"
] | 1,705 | 1,706 | null | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28606
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28652/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28652",
"html_url": "https://github.com/huggingface/transformers/pull/28652",
"diff_url": "https://github.com/huggingface/transformers/pull/28652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28652.patch",
"merged_at": null
} |
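The `check_inits.py` error quoted above comes from the script comparing the two "halves" of a model subpackage's `__init__.py`: the lazy `_import_structure` dict and the imports under `if TYPE_CHECKING:` (reported as TYPE_HINT). Both halves must list the same objects. The following is a hedged, minimal sketch of how such a file is usually laid out in transformers; the VMamba names are taken from the error message above and the final PR may differ.

```python
# Hypothetical sketch of src/transformers/models/vmamba/__init__.py following the usual
# transformers pattern; both "halves" must declare the same objects to satisfy check_inits.py.
from typing import TYPE_CHECKING

from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available

_import_structure = {"configuration_vmamba": ["VMambaConfig"]}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_vmamba"] = [
        "VMAMBA_PRETRAINED_MODEL_ARCHIVE_LIST",
        "VMambaForImageClassification",
        "VMambaModel",
        "VMambaPreTrainedModel",
    ]

if TYPE_CHECKING:
    from .configuration_vmamba import VMambaConfig

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_vmamba import (
            VMAMBA_PRETRAINED_MODEL_ARCHIVE_LIST,
            VMambaForImageClassification,
            VMambaModel,
            VMambaPreTrainedModel,
        )
else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
```

Listing the torch-guarded classes in both the `_import_structure` half and the `TYPE_CHECKING` half is what resolves the "both halves do not define the same objects" error quoted above.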
https://api.github.com/repos/huggingface/transformers/issues/28651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28651/comments | https://api.github.com/repos/huggingface/transformers/issues/28651/events | https://github.com/huggingface/transformers/issues/28651 | 2,094,681,853 | I_kwDOCUB6oc582k79 | 28,651 | Memory consumption for inference with Llama2-7B is weird | {
"login": "c3ianwu",
"id": 92783433,
"node_id": "U_kgDOBYfDSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92783433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c3ianwu",
"html_url": "https://github.com/c3ianwu",
"followers_url": "https://api.github.com/users/c3ianwu/followers",
"following_url": "https://api.github.com/users/c3ianwu/following{/other_user}",
"gists_url": "https://api.github.com/users/c3ianwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c3ianwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c3ianwu/subscriptions",
"organizations_url": "https://api.github.com/users/c3ianwu/orgs",
"repos_url": "https://api.github.com/users/c3ianwu/repos",
"events_url": "https://api.github.com/users/c3ianwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/c3ianwu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @c3ianwu \r\nThis is interesting, I am not 100% sure what is wrong here but I can give you some insights. \r\nWhen designing the tests for quantization, as we were running multiple tests with `generate` I used to get OOM on our CI machines that had ~16GB GPU RAM. It seems the fix was simply to empty the CUDA cache after the test. Maybe here the CUDA cache gets somehow accumulated and causes this behaviour. Can you try to call `torch.cuda.empty_cache()` after the `generate` call? You should also add `import gc; gc.collect()` before that call\r\nFor reference, check out this thread https://github.com/huggingface/accelerate/issues/614#issuecomment-1224213502 from @ydshieh ",
"Thanks @younesbelkada. \r\n\r\nModified my script:\r\n\r\n```\r\nclass LocalModel:\r\n\r\n def __init__(self, model, tokenizer):\r\n self.model = model\r\n self.tokenizer = tokenizer\r\n\r\n def generate(self, prompts, do_sample=False, temperature=0, top_k=0, top_p=0, repetition_penalty=1.0, max_new_tokens=128):\r\n self.tokenizer.pad_token = self.tokenizer.eos_token\r\n tokenized_inputs = self.tokenizer(prompts, return_tensors=\"pt\", padding=True).to(self.model.device)\r\n inputs = tokenized_inputs[\"input_ids\"]\r\n attention_mask = tokenized_inputs[\"attention_mask\"]\r\n tic = time.time()\r\n logits = self.model.generate(input_ids=inputs, \r\n attention_mask=attention_mask, \r\n do_sample=do_sample, \r\n temperature=temperature, \r\n top_k=top_k, \r\n top_p=top_p, \r\n repetition_penalty=repetition_penalty,\r\n max_new_tokens=max_new_tokens)\r\n max_alloc = torch.cuda.max_memory_allocated(0) / 1e9\r\n print(\"Peak GPU Memory Consumption: {}\".format(max_alloc))\r\n gc.collect()\r\n torch.cuda.empty_cache()\r\n torch.cuda.reset_peak_memory_stats(0)\r\n after_clearing_alloc = torch.cuda.max_memory_allocated(0) / 1e9\r\n print(\"After clearing: {}\".format(after_clearing_alloc))\r\n toc = time.time()\r\n print(\"Input tokens: {}\".format(len(inputs[0])))\r\n print(\"Output tokens: {}\".format(len(logits[0])))\r\n print(\"Time for generation: {}\".format(toc - tic))\r\n return max_alloc, after_clearing_alloc\r\n```\r\nThe plot looks like this:\r\n\r\n![Screenshot 2024-01-23 at 21 32 05](https://github.com/huggingface/transformers/assets/92783433/713a647c-d798-4a46-8cad-8515b3382678)\r\n\r\nThe gradient of the linear sloping bit is still the same (about 0.065, double what we expect). It also looks like clearing the cache is having the desired effect, but the memory consumption for generation is still off.\r\n\r\nFor the beginning bit - I assume it's allocating some memory prior to generation (I guess since we expect to generate at least some tokens)? That would explain the flat line.\r\n\r\nAm running this on a GCP container on a Jupyter notebook. Thought it might be worth mentioning given the flask issue mentioned in https://github.com/huggingface/accelerate/issues/614#issuecomment-1224213502",
"Hi dude, \r\n\r\nTL; DR: pass `eos_token_id=tokenizer.eos_token_id` in `model.generate()`\r\n\r\nI was running into the same issue as you did. It turns out that it was due to the update of `transformers` where you have to pass the `eos_token_id` in `model.generate()`, otherwise it won't stop generating unless it hits `max_new_tokens` or OOM is triggered.",
"Hi @g-h-chen .\r\n\r\nThanks for the insights , will try these. just to mention i have been facing similar issues while running Mistral 7b from local.\r\n\r\nBelow is the code snippet i am using- \r\n\r\nmodel_id = \"mistralai/Mistral-7B-Instruct-v0.2\"\r\ndevice = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\r\n# begin initializing HF items, need auth token for these\r\nhf_auth = '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'\r\nmodel_config = transformers.AutoConfig.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth,\r\n cache_dir=\"/fs/scratch/SX_ETL3_GenAI_Team/models/\"\r\n)\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n # quantization_config=bnb_config,\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n cache_dir=\"/fs/scratch/SX_ETL3_GenAI_Team/models/\"\r\n)\r\nmodel.eval()\r\nprint(f\"Model loaded on {device}\")\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\r\n model_id,\r\n use_auth_token=hf_auth,\r\n cache_dir=\"/fs/scratch/SX_ETL3_GenAI_Team/models/\")\r\n\r\ngenerate_text = transformers.pipeline(\r\n model=model, tokenizer=tokenizer,\r\n return_full_text=True, # langchain expects the full text\r\n task='text-generation',\r\n # we pass model parameters here too\r\n temperature=0.2, # 'randomness' of outputs, 0.0 is the min and 1.0 the max\r\n max_new_tokens=512,\r\n cache_dir=None,\r\n device_map='auto'\r\n # mex number of tokens to generate in the output\r\n # repetition_penalty=1.1 # without this output begins repeating\r\n)\r\n\r\n\r\n\r\ntable_list = [list of 50 html tables ]\r\n\r\nfor i, text in enumerate(table_list):\r\n print(i)\r\n result = generate_text(f\"\"\"Summarize the following table in detail, dont abbreviate or expand any abbreviations, keep the information as precise as possible from original text:\r\n {text}\"\"\")\r\n print(result[0]['generated_text'])\r\n print('='*50)\r\n\r\n\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n\r\nI have a A100 80 GB gpu but while iterating over it after 28 tables there is OOM issue . i am not sure why does it keeps filling up the memory while inferencing . Ideally it should release memory after each inference ? or am i wrong somewhere here.\r\n Any help would be appreciated",
"> eos_token_id=tokenizer.eos_token_id\r\n\r\n@g-h-chen not sure this is the fix. Have tried the same steps with eos token set and I'm getting the same memory profile as before.\r\n\r\nAlso if anything we want it to hit max_new_tokens every time (for memory profiling) so that we can be sure that it is outputting sequences of the length we expect. The theoretical calculations I provide above assume that outputs of a particular length have been produced."
] | 1,705 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker @younesbelkada @Gan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to track GPU memory consumption when doing inference with Llama2-7B. This is my set-up:
```
import json
import tqdm
import warnings
warnings.filterwarnings('ignore')
import time
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import datasets
import matplotlib.pyplot as plt
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.bfloat16)
model.to(device=0)
prompt_data = datasets.load_from_disk("/data/metamath_100k_2048/train") # this is just some supervised training text data
prompts = prompt_data["inputs"] # this is a list of strings
class LocalModel:
def __init__(self, model, tokenizer):
self.model = model
self.tokenizer = tokenizer
def generate(self, prompts, do_sample=False, temperature=0, top_k=0, top_p=0, repetition_penalty=1.0, max_new_tokens=128):
self.tokenizer.pad_token = self.tokenizer.eos_token
tokenized_inputs = self.tokenizer(prompts, return_tensors="pt", padding=True).to(self.model.device)
inputs = tokenized_inputs["input_ids"]
attention_mask = tokenized_inputs["attention_mask"]
tic = time.time()
logits = self.model.generate(input_ids=inputs,
attention_mask=attention_mask,
do_sample=do_sample,
temperature=temperature,
top_k=top_k,
top_p=top_p,
repetition_penalty=repetition_penalty,
max_new_tokens=max_new_tokens)
max_alloc = torch.cuda.max_memory_allocated(0) / 1e9
print("Peak GPU Memory Consumption: {}".format(torch.cuda.max_memory_allocated(0) / 1e9))
torch.cuda.reset_peak_memory_stats(0)
toc = time.time()
print("Time for generation: {}".format(toc - tic))
return max_alloc
```
I ran
```
local_model = LocalModel(model, tokenizer)
alloc = []
x = [0, 2, 4, 6, 8, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]
for i in x:
alloc.append(local_model.generate(prompts[:64], max_new_tokens=i))
plt.scatter(x, alloc)
plt.xlabel("Max New Tokens")
plt.ylabel("Peak Mem Usage / GB")
plt.show()
```
This is the plot:
<img width="580" alt="Screenshot 2024-01-22 at 20 00 36" src="https://github.com/huggingface/transformers/assets/92783433/22036995-a80f-46bf-9c44-e6f6329486b0">
### Expected behavior
I tried to compute theoretical numbers. I estimated the number of input tokens:
```
def calculate_prompt_tokens(tokenizer, prompts, batch_size):
tokenizer.pad_token = tokenizer.eos_token
tokens = tokenizer(prompts[:batch_size], return_tensors="pt", padding=True)
return tokens["input_ids"].shape[0] * tokens["input_ids"].shape[1]
calculate_prompt_tokens(tokenizer, prompts, batch_size=64)
```
which returns 12992. Taking the model to be 7B params ~ 14GB in bf16, and assuming that the kv cache consumes `4*num_layers*d_model = 4*32*4096 = 524,288 bytes/token`, we get an estimated `14 + (12992*524288)*1e-9 = 20.8GB` before anything is generated, which looks about right from the graph.
Using the same logic, we know that each additional generation step should cost (via the kv cache) `524,288*64 ≈ 0.034GB / step` of memory. Looking at the gradient of the linear portion of the plot, we get ~0.067GB / step instead, which is around double the amount.
1. Why is the memory consumed for generation greater than expected?
2. What's going on in the early portion of the plot? Why is there a big jump at the start?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28651/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28651/timeline | null | null | null |
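The back-of-envelope estimate in the issue above can be written out directly. The sketch below is illustrative only: the parameter count, dtype sizes and token counts are assumptions copied from the issue, and real `generate` runs also allocate temporary buffers (logits over the full vocabulary, attention workspaces), which is one plausible contributor to the measured slope exceeding the pure KV-cache estimate.

```python
# Hedged back-of-envelope estimate for batched generation with a Llama-2-7B-style model in bf16.
def estimate_generation_memory_gb(
    n_params=7e9,         # model parameters (assumption from the issue)
    bytes_per_param=2,    # bf16 weights
    num_layers=32,
    d_model=4096,
    bytes_per_value=2,    # bf16 KV cache entries
    prompt_tokens=12992,  # total prompt tokens across the batch of 64 prompts
    batch_size=64,
    max_new_tokens=128,
):
    weights_gb = n_params * bytes_per_param / 1e9
    # K and V each hold num_layers * d_model values per token
    kv_bytes_per_token = 2 * num_layers * d_model * bytes_per_value
    prefill_gb = prompt_tokens * kv_bytes_per_token / 1e9
    decode_gb = max_new_tokens * batch_size * kv_bytes_per_token / 1e9
    return weights_gb + prefill_gb + decode_gb

print(round(estimate_generation_memory_gb(), 1))  # ~25.1 GB for 128 new tokens under these assumptions
```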
https://api.github.com/repos/huggingface/transformers/issues/28650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28650/comments | https://api.github.com/repos/huggingface/transformers/issues/28650/events | https://github.com/huggingface/transformers/pull/28650 | 2,094,606,858 | PR_kwDOCUB6oc5kwiw7 | 28,650 | [DO NOT MERGE] Testing safetensors 0.4.2 | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28650",
"html_url": "https://github.com/huggingface/transformers/pull/28650",
"diff_url": "https://github.com/huggingface/transformers/pull/28650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28650.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28649/comments | https://api.github.com/repos/huggingface/transformers/issues/28649/events | https://github.com/huggingface/transformers/issues/28649 | 2,094,580,013 | I_kwDOCUB6oc582MEt | 28,649 | 4.37 ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils' | {
"login": "erew123",
"id": 35898566,
"node_id": "MDQ6VXNlcjM1ODk4NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/35898566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erew123",
"html_url": "https://github.com/erew123",
"followers_url": "https://api.github.com/users/erew123/followers",
"following_url": "https://api.github.com/users/erew123/following{/other_user}",
"gists_url": "https://api.github.com/users/erew123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erew123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erew123/subscriptions",
"organizations_url": "https://api.github.com/users/erew123/orgs",
"repos_url": "https://api.github.com/users/erew123/repos",
"events_url": "https://api.github.com/users/erew123/events{/privacy}",
"received_events_url": "https://api.github.com/users/erew123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @gante \r\n",
"Just encountered this error and found this thread by google searching it",
"Confirm this happens on 4.37.0",
"Me too, literally just happened. I'll try downgrading first.",
"Encountered this error as well. Downgrading Transformers to 4.36.2 fixes it.",
"This worked for me:\r\nUpdate your TTS requirements.txt look for transformers>=4.33.0 and replace it with:\r\ntransformers==4.33.0\r\n\r\nRun:\r\npip install -r requirements.txt",
"@gante This seems to be due to `SampleOutput` being remove from `generation.utils.py` in #28494. I missed in the PR that the mapping were completely removed. I think the best thing to do is add back, with a patch and deprecation warning. ",
"> This worked for me: Update your TTS requirements.txt look for transformers>=4.33.0 and replace it with: transformers==4.33.0\r\n> \r\n> Run: pip install -r requirements.txt\r\n\r\nFor anyone currently experiencing this issue - any version of transformers before 4.37 should still be fine i.e. setting `transformers>=4.33.0,<4.37` in requirements.txt should work until a patch is released. ",
"@amyeroberts Thanks for looking into this Amy & @gante its very appreciated!",
"transformers==4.33.0 works for me. ",
"transformers will import transformers_stream_generator, while transformers_stream_generator has the following import\r\n\r\n```\r\nfrom transformers.generation.utils import GenerateOutput, SampleOutput, logger\r\n```\r\nBut in transformers>=4.37, SampleOutput was removed and transformers will raise ImportError although transformers_stream_generator has already been installed.\r\n```\r\nImportError: This modeling file requires the following packages that were not found in your environment: transformers_stream_generator. Run pip install transformers_stream_generator\r\n```",
"Hi all, a patch has been released which should resolve the issues here: `SampleOutput` can now be imported again as a type: https://github.com/huggingface/transformers/releases/tag/v4.37.1\r\n\r\nYou should be able to revert back to the original requirements.txt file, install the latest transformers and run your code with Coqui's TTS. ",
"@amyeroberts Thanks so much Amy. I will give it a test at some point soon and feed back if there's any issues. Really appreciate your help with that.",
"@amyeroberts Actually managed to give it a quick go now and everything loads up fine again! Awesome stuff! Ill close off the ticket. Again, thanks so much!",
"@erew123 Great - thanks for confirming! "
] | 1,705 | 1,706 | 1,706 | NONE | null | ### System Info
I am a developer of AllTalk https://github.com/erew123/alltalk_tts/ which uses the Coqui TTS engine https://github.com/coqui-ai/TTS
As of the 4.37 update, I have users reporting this error:
```
Traceback (most recent call last):
File "/home/ai/alltalk_tts/tts_server.py", line 7, in
from TTS.tts.configs.xtts_config import XttsConfig
File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/configs/xtts_config.py", line 5, in
from TTS.tts.models.xtts import XttsArgs, XttsAudioConfig
File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 12, in
from TTS.tts.layers.xtts.stream_generator import init_stream_support
File "/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/TTS/tts/layers/xtts/stream_generator.py", line 24, in
from transformers.generation.utils import GenerateOutput, SampleOutput, logger
ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils' (/home/ai/alltalk_tts/alltalk_environment/env/lib/python3.11/site-packages/transformers/generation/utils.py)
```
The issue is mainly this:
`ImportError: cannot import name 'SampleOutput' from 'transformers.generation.utils'`
Downgrading to 4.36.2 of Transformers makes things work fine again.
I looked to see if this could be related to **Remove support for torch 1.10** but can find no references to **SampleOutput** being a part of that.
Would you be able to confirm whether this is something that has been dropped in 4.37 or perhaps an omission that will be resolved in a future update?
Thanks
### Who can help?
@sanchit-gandhi (I'm guessing you may be the correct person as this is Speech, apologies if not).
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Use transformers 4.37 with the Coqui TTS engine and try to import their XTTS model.
### Expected behavior
Of course, for this model to import correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28649/timeline | completed | null | null |
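The thread above was closed by the 4.37.1 patch restoring the `SampleOutput` alias, but downstream code that has to run on both sides of the change can guard the import. The sketch below is hedged: the fallback class names follow the 4.37 `generate` output refactor and should be verified against the installed version, and pinning `transformers>=4.33.0,<4.37`, as suggested in the comments, remains the simplest workaround.

```python
# Hedged sketch of a version-tolerant import for code that previously did
# `from transformers.generation.utils import SampleOutput` (broken on 4.37.0, restored in 4.37.1).
from typing import Union

try:
    from transformers.generation.utils import SampleOutput
except ImportError:
    # Assumption: on 4.37 the sample-specific output classes were folded into these generic ones.
    from transformers.generation.utils import (
        GenerateDecoderOnlyOutput,
        GenerateEncoderDecoderOutput,
    )

    SampleOutput = Union[GenerateDecoderOnlyOutput, GenerateEncoderDecoderOutput]
```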
https://api.github.com/repos/huggingface/transformers/issues/28648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28648/comments | https://api.github.com/repos/huggingface/transformers/issues/28648/events | https://github.com/huggingface/transformers/pull/28648 | 2,094,285,376 | PR_kwDOCUB6oc5kvcR4 | 28,648 | [`TokenizationUtils`] add support for `split_special_tokens` | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,705 | 1,705 | null | COLLABORATOR | null | # What does this PR do?
Adds support for `split_special_tokens` for fast tokenizers as well
- [ ] deprecate `split_special_tokens` for `encode_special_tokens` for API consistency
- [ ] make sure this is saved and used not only as kwargs but also the attribute
- [ ] add some tests
- [ ] add some docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28648/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28648",
"html_url": "https://github.com/huggingface/transformers/pull/28648",
"diff_url": "https://github.com/huggingface/transformers/pull/28648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28648.patch",
"merged_at": null
} |
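A hedged illustration of the behaviour the PR above wants to extend to fast tokenizers. The keyword already exists on slow (Python) tokenizers, which is what the snippet relies on; whether the fast-tokenizer API ends up as `split_special_tokens` or `encode_special_tokens` is exactly what the task list above covers.

```python
# Illustration with a slow tokenizer, where split_special_tokens is already supported.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", use_fast=False)

print(tok("<|endoftext|>")["input_ids"])
# [50256]  -> the special token is kept as a single id (default behaviour)

print(tok("<|endoftext|>", split_special_tokens=True)["input_ids"])
# several ids: the special token is tokenized like ordinary text
```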
https://api.github.com/repos/huggingface/transformers/issues/28647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28647/comments | https://api.github.com/repos/huggingface/transformers/issues/28647/events | https://github.com/huggingface/transformers/issues/28647 | 2,094,271,576 | I_kwDOCUB6oc581AxY | 28,647 | Why tokens / second is more on Float32 than Float16 | {
"login": "Anindyadeep",
"id": 58508471,
"node_id": "MDQ6VXNlcjU4NTA4NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/58508471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anindyadeep",
"html_url": "https://github.com/Anindyadeep",
"followers_url": "https://api.github.com/users/Anindyadeep/followers",
"following_url": "https://api.github.com/users/Anindyadeep/following{/other_user}",
"gists_url": "https://api.github.com/users/Anindyadeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anindyadeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anindyadeep/subscriptions",
"organizations_url": "https://api.github.com/users/Anindyadeep/orgs",
"repos_url": "https://api.github.com/users/Anindyadeep/repos",
"events_url": "https://api.github.com/users/Anindyadeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anindyadeep/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @SunMarc , not 100% sure here but maybe this is expected with a potential silent fp32 -> fp16 downcasting overhead",
"@Anindyadeep - thanks for raising this issue! Could you also share the standard deviation between the runs? ",
"We only perform downcasting when there is no dtype specified. GPTQ model can indeed run with fp32 weight_dtype. However, we expect it to be slower than fp16. ",
"> @Anindyadeep - thanks for raising this issue! Could you also share the standard deviation between the runs?\r\n\r\nSure, here: \r\n\r\n```\r\nResults in Mean, Std\r\n\r\nFP-16: (40.10957833368466, 1.3878278677006122)\r\nFP-32: (48.72477229543234, 0.28218654623962147)\r\n```",
"Hi, I saw some patterns in the normal hugging face transformers inference, that with fp16 > fp32 when context size increased to 512. However this is not applying for AutoGPTQ\r\n\r\nHere are the results (updated, for 512 context length)\r\n\r\n```\r\nFP-16: 30.42 ± 0.40\r\nFP-32: 42.01 ± 1.03\r\n```"
] | 1,705 | 1,706 | null | CONTRIBUTOR | null | ### System Info
```
- `transformers` version: 4.34.1
- Platform: Linux-5.4.0-169-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import time
import torch
import numpy as np
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
@torch.inference_mode()
def benchmark_gptq(dtype, prompt = "Hello, what do you know about transformers", repetitions = 10):
model_path = "./models/llama-2-7b-autogptq"
quantization_config = GPTQConfig(
bits=4,
group_size=128,
desc_act=False, use_exllama=False,
use_cuda_fp16=True if dtype == torch.float16 else False
)
model = AutoModelForCausalLM.from_pretrained(
model_path,
quantization_config=quantization_config,
torch_dtype=dtype,
device_map='cuda:0'
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenized_input = tokenizer.encode(prompt, return_tensors='pt').to('cuda:0')
results = []
print("STARTING TO BENCHMARK FOR: ", "fp16" if dtype==torch.float16 else "fp32", "\n")
for i in range(repetitions):
print("....")
start = time.time()
output = model.generate(input_ids = tokenized_input, max_new_tokens = 100).detach().cpu().numpy()
delta = time.time() - start
results.append(
len(output[0]) / delta
)
return np.mean(results)
if __name__ == '__main__':
print("FP-16: ", benchmark_gptq(dtype=torch.float16))
print("FP-32: ", benchmark_gptq(dtype=torch.float32))
```
### Expected behavior
This was the output:
```
FP-16: 39.50591397818114
FP-32: 49.23083222100881
```
Please note: The metric which is used here is: `tokens/sec`
But the expected should be the reversed right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28647/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28646/comments | https://api.github.com/repos/huggingface/transformers/issues/28646/events | https://github.com/huggingface/transformers/pull/28646 | 2,094,253,001 | PR_kwDOCUB6oc5kvVOB | 28,646 | improve efficient training on CPU documentation | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @amyeroberts @stevhliu, could you help review this PR? Thx!",
"@yao-matrix",
"@stevhliu thanks so much for the corrections! It looks much better now and I only made one more change as @yao-matrix suggested in the comment. Pls help us to merge this PR. Many thanks! "
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | ## What does this PR do?
This PR improves the CPU efficient training documentation to make it more clear, accurate and up-to-date. Concrete improvements are
- add full names of the CPU instruction sets (e.g. Intel® Advanced Vector Extensions 512 instead of AVX-512)
- add one sentence to explain "mixed precision"
- add OOB mixed precision training using BF16 and further improvement with IPEX | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28646/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28646",
"html_url": "https://github.com/huggingface/transformers/pull/28646",
"diff_url": "https://github.com/huggingface/transformers/pull/28646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28646.patch",
"merged_at": 1706116034000
} |
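The PR above only touches documentation, but the corresponding knobs look roughly like the hedged sketch below: bf16 mixed-precision CPU training with the Trainer, optionally accelerated by IPEX. `use_ipex=True` assumes `intel_extension_for_pytorch` is installed; model and dataset wiring is omitted.

```python
# Hedged sketch of the TrainingArguments flags the updated doc refers to.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    use_cpu=True,   # run training on CPU
    bf16=True,      # out-of-the-box bf16 mixed precision (needs AVX-512 BF16 / AMX-capable CPUs)
    use_ipex=True,  # extra speed-up from Intel Extension for PyTorch, if installed
)
```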
https://api.github.com/repos/huggingface/transformers/issues/28645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28645/comments | https://api.github.com/repos/huggingface/transformers/issues/28645/events | https://github.com/huggingface/transformers/pull/28645 | 2,093,983,567 | PR_kwDOCUB6oc5kua1I | 28,645 | [`GPTNeoX`] Fix GPTNeoX + Flash Attention 2 issue | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/28613
Indeed, probably due to copy-pasta the target_dtype was trying to get inferred from the wrong attribute
cc @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28645/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28645",
"html_url": "https://github.com/huggingface/transformers/pull/28645",
"diff_url": "https://github.com/huggingface/transformers/pull/28645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28645.patch",
"merged_at": 1705935001000
} |
https://api.github.com/repos/huggingface/transformers/issues/28644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28644/comments | https://api.github.com/repos/huggingface/transformers/issues/28644/events | https://github.com/huggingface/transformers/pull/28644 | 2,093,953,365 | PR_kwDOCUB6oc5kuULr | 28,644 | small doc update for CamemBERT | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28644/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28644",
"html_url": "https://github.com/huggingface/transformers/pull/28644",
"diff_url": "https://github.com/huggingface/transformers/pull/28644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28644.patch",
"merged_at": 1706539593000
} |
https://api.github.com/repos/huggingface/transformers/issues/28643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28643/comments | https://api.github.com/repos/huggingface/transformers/issues/28643/events | https://github.com/huggingface/transformers/pull/28643 | 2,093,924,305 | PR_kwDOCUB6oc5kuNzt | 28,643 | Convert Depth Anything | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have opened a PR that may be more in line with the philosophy. Let me know which one you prefer :)",
"Closing in favor of #28654 "
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
[Depth Anything](https://twitter.com/_akhaliq/status/1749284669936275463) came out, and it is compatible with our implementation of DPT. It leverages DINOv2 as backbone.
It does use a small tweak in the decoder, where it sets `size` instead of `scale_factor` when interpolating.
Demo notebook: https://colab.research.google.com/drive/1tHrdu4TY6f_oTXJbqPUn2DVoSKxbNa3O?usp=sharing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28643/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28643",
"html_url": "https://github.com/huggingface/transformers/pull/28643",
"diff_url": "https://github.com/huggingface/transformers/pull/28643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28643.patch",
"merged_at": null
} |
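A hedged illustration of the decoder tweak mentioned in the PR description, interpolating to an explicit `size` rather than by a `scale_factor`. The shapes below are made up and do not come from the Depth Anything code.

```python
import torch
import torch.nn.functional as F

feat = torch.randn(1, 256, 24, 24)

# DPT-style upsampling by a fixed factor
by_factor = F.interpolate(feat, scale_factor=2, mode="bilinear", align_corners=True)  # -> 1x256x48x48

# setting an explicit target size instead, as described above
to_size = F.interpolate(feat, size=(37, 49), mode="bilinear", align_corners=True)     # -> 1x256x37x49

print(by_factor.shape, to_size.shape)
```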
https://api.github.com/repos/huggingface/transformers/issues/28642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28642/comments | https://api.github.com/repos/huggingface/transformers/issues/28642/events | https://github.com/huggingface/transformers/pull/28642 | 2,093,923,569 | PR_kwDOCUB6oc5kuNpW | 28,642 | Set correct dtypes for ONNX quantization | {
"login": "severinsimmler",
"id": 16133277,
"node_id": "MDQ6VXNlcjE2MTMzMjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16133277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severinsimmler",
"html_url": "https://github.com/severinsimmler",
"followers_url": "https://api.github.com/users/severinsimmler/followers",
"following_url": "https://api.github.com/users/severinsimmler/following{/other_user}",
"gists_url": "https://api.github.com/users/severinsimmler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severinsimmler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severinsimmler/subscriptions",
"organizations_url": "https://api.github.com/users/severinsimmler/orgs",
"repos_url": "https://api.github.com/users/severinsimmler/repos",
"events_url": "https://api.github.com/users/severinsimmler/events{/privacy}",
"received_events_url": "https://api.github.com/users/severinsimmler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @severinsimmler, `transformers.convert_graph_to_onnx` is deprecated (https://github.com/huggingface/transformers/blob/dafd59512cf6376ee2f056e38935d83df77a213c/src/transformers/convert_graph_to_onnx.py#L362), I would recommend you to use optimum utilities.",
"@fxmarty Yes, makes sense to migrate to `optimum`. Should I close this PR then? However, would be great if quantization would work even with the deprecated module until version 5 is released.",
"Let's see what @michaelbenayoun thinks - I would just recommend to move to Optimum.",
"I agree with @fxmarty . Since ONNX related features are supported in Optimum, it does not make sense to add new features in Transformers at this point.",
"But this is not really a new feature, but rather a bugfix? It's definitely broken at the moment.",
"Let's move to Optimum anyways.",
"I understand that the module is deprecated and agree that everyone should move to Optimum, but `transformers.convert_graph_to_onnx.quantize` is broken and the fix is literally five changed lines. In case you are not going to release Transformers 5 next week and remove the whole module, it'd be worth merging this PR imho. If you don't think so, feel free to close this PR.",
"Okay, got it :D What a pity"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
It's currently not possible to quantize an ONNX model with `transformers.convert_graph_to_onnx`.
Running the following snippet on `main`:
```python
from pathlib import Path
from transformers import pipeline
from transformers.convert_graph_to_onnx import convert_pytorch, quantize
# load a ner model
nlp = pipeline(task="ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
# name of the onnx file to be exported
output = Path("model.onnx")
# first transform pytorch to onnx model
convert_pytorch(nlp, output=output, opset=11, use_external_format=False)
# onnx model can now be quantized
quantize(output)
```
will result in:
```
Traceback (most recent call last):
File "/home/severin/git/transformers/test-quantization.py", line 12, in <module>
quantized_model = quantize(output)
^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/src/transformers/convert_graph_to_onnx.py", line 472, in quantize
quantizer.quantize_model()
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 310, in quantize_model
op_quantizer.quantize()
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/operators/gather.py", line 29, in quantize
) = self.quantizer.quantize_activation(node, [0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 825, in quantize_activation
return self.__quantize_inputs(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 915, in __quantize_inputs
q_weight_name, zp_name, scale_name = self.quantize_initializer(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/onnx_quantizer.py", line 995, in quantize_initializer
_, _, zero_point, scale, q_weight_data = quantize_data(
^^^^^^^^^^^^^^
File "/home/severin/git/transformers/.venv/lib/python3.11/site-packages/onnxruntime/quantization/quant_utils.py", line 277, in quantize_data
raise ValueError(f"Unexpected value for qType={qType}.")
ValueError: Unexpected value for qType=False.
```
The values are updated in this PR to be consistent with the default values in `optimum` (see [this](https://github.com/huggingface/optimum/blob/bb7b71a2c1f9c9220845b258afd88a5c1a24c013/optimum/onnxruntime/quantization.py#L367-L368) and also [this](https://github.com/huggingface/optimum/blob/bb7b71a2c1f9c9220845b258afd88a5c1a24c013/optimum/onnxruntime/configuration.py#L275-L277)).
Running the snippet from above in my branch outputs as expected:
```
Quantized model has been written at model-quantized.onnx: ✔
```
Tested with `onnxruntime` 1.16.3 (on Python 3.11.6) and 1.12.1 (on Python 3.10.13).
## Who can review?
@SunMarc and @younesbelkada (neither `bitsandbytes` nor `autogpt`, but quantization though)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28642/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28642",
"html_url": "https://github.com/huggingface/transformers/pull/28642",
"diff_url": "https://github.com/huggingface/transformers/pull/28642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28642.patch",
"merged_at": null
} |
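Since the maintainers above recommend moving to Optimum rather than patching the deprecated `convert_graph_to_onnx` module, the hedged sketch below shows the Optimum route for dynamic int8 quantization. Class and method names follow the Optimum documentation of that period and may differ in newer releases; the model id is the one from the reproduction snippet above.

```python
# Hedged sketch: export to ONNX and apply dynamic int8 quantization via Optimum.
from optimum.onnxruntime import ORTModelForTokenClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "dbmdz/bert-large-cased-finetuned-conll03-english"
ort_model = ORTModelForTokenClassification.from_pretrained(model_id, export=True)

quantizer = ORTQuantizer.from_pretrained(ort_model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx-quantized", quantization_config=qconfig)
```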
https://api.github.com/repos/huggingface/transformers/issues/28641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28641/comments | https://api.github.com/repos/huggingface/transformers/issues/28641/events | https://github.com/huggingface/transformers/issues/28641 | 2,093,895,845 | I_kwDOCUB6oc58zlCl | 28,641 | Qwen2 weights are not there/deleted? | {
"login": "aliencaocao",
"id": 20109683,
"node_id": "MDQ6VXNlcjIwMTA5Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/20109683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliencaocao",
"html_url": "https://github.com/aliencaocao",
"followers_url": "https://api.github.com/users/aliencaocao/followers",
"following_url": "https://api.github.com/users/aliencaocao/following{/other_user}",
"gists_url": "https://api.github.com/users/aliencaocao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliencaocao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliencaocao/subscriptions",
"organizations_url": "https://api.github.com/users/aliencaocao/orgs",
"repos_url": "https://api.github.com/users/aliencaocao/repos",
"events_url": "https://api.github.com/users/aliencaocao/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliencaocao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @JustinLin610 the author of the pull request! Are the weights not publicly available yet? 🤗 "
] | 1,705 | 1,706 | null | CONTRIBUTOR | null | ### System Info
https://huggingface.co/Qwen2/Qwen2-7B-Chat-beta gives a 404, despite the tutorial in https://huggingface.co/docs/transformers/main/model_doc/qwen2 quoting it.
https://huggingface.co/Qwen only has Qwen-1 models.
@ArthurZucker @younesbelkada @stevhliu
By the way, the docs are inconsistent about the model path. Most places use https://huggingface.co/Qwen2/Qwen2-7B-beta, but the docs for the [config class](https://huggingface.co/docs/transformers/main/model_doc/qwen2#transformers.Qwen2Config) use https://huggingface.co/Qwen/Qwen2-7B-beta, which also does not exist.
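One quick way to confirm that none of the quoted repo ids currently resolve (a sketch using `huggingface_hub`):
```python
# Sketch: check the repo ids quoted in the docs against the Hub.
from huggingface_hub import repo_exists

for repo_id in ("Qwen2/Qwen2-7B-Chat-beta", "Qwen2/Qwen2-7B-beta", "Qwen/Qwen2-7B-beta"):
    print(repo_id, repo_exists(repo_id))
# at the time of writing, all three print False
```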
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Visit https://huggingface.co/Qwen2/Qwen2-7B-Chat-beta
### Expected behavior
Model is there and accessible | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28641/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28640/comments | https://api.github.com/repos/huggingface/transformers/issues/28640/events | https://github.com/huggingface/transformers/pull/28640 | 2,093,854,665 | PR_kwDOCUB6oc5kt-ed | 28,640 | Add missing key to TFLayoutLM signature | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | MEMBER | null | LayoutLM is missing the `bbox` key in its signature, which affects exporting to TFLite/TF Serving. LayoutLMv3 already has the correct signature and doesn't need to be fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28640",
"html_url": "https://github.com/huggingface/transformers/pull/28640",
"diff_url": "https://github.com/huggingface/transformers/pull/28640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28640.patch",
"merged_at": 1705929389000
} |
https://api.github.com/repos/huggingface/transformers/issues/28639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28639/comments | https://api.github.com/repos/huggingface/transformers/issues/28639/events | https://github.com/huggingface/transformers/pull/28639 | 2,093,795,566 | PR_kwDOCUB6oc5ktxXz | 28,639 | compatibility to original owlv2 model | {
"login": "talshaharabany",
"id": 50660642,
"node_id": "MDQ6VXNlcjUwNjYwNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/50660642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talshaharabany",
"html_url": "https://github.com/talshaharabany",
"followers_url": "https://api.github.com/users/talshaharabany/followers",
"following_url": "https://api.github.com/users/talshaharabany/following{/other_user}",
"gists_url": "https://api.github.com/users/talshaharabany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talshaharabany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talshaharabany/subscriptions",
"organizations_url": "https://api.github.com/users/talshaharabany/orgs",
"repos_url": "https://api.github.com/users/talshaharabany/repos",
"events_url": "https://api.github.com/users/talshaharabany/events{/privacy}",
"received_events_url": "https://api.github.com/users/talshaharabany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | NONE | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28639",
"html_url": "https://github.com/huggingface/transformers/pull/28639",
"diff_url": "https://github.com/huggingface/transformers/pull/28639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28639.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28638/comments | https://api.github.com/repos/huggingface/transformers/issues/28638/events | https://github.com/huggingface/transformers/pull/28638 | 2,093,791,766 | PR_kwDOCUB6oc5ktwiL | 28,638 | Avoid root logger's level being changed | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
A complement to #28575 -> root cause is found and fixed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28638/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28638",
"html_url": "https://github.com/huggingface/transformers/pull/28638",
"diff_url": "https://github.com/huggingface/transformers/pull/28638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28638.patch",
"merged_at": 1705931130000
} |
https://api.github.com/repos/huggingface/transformers/issues/28637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28637/comments | https://api.github.com/repos/huggingface/transformers/issues/28637/events | https://github.com/huggingface/transformers/pull/28637 | 2,093,712,354 | PR_kwDOCUB6oc5ktfCr | 28,637 | Fix windows err with checkpoint race conditions | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amyeroberts I'm not a windows expert but after looking the only alternative would to be to use the atomic `os.replace()` since it's atomic *except* it doesn't work on folders, so there's not really a good solution here for this. \r\n\r\nI don't think we have too many multi-node windows users (as this seems fairly uncommon, I'd expect more UNIX-based multinode users), so it's okay for now. If a user reports an issue hopefully more users will know a windows-based solution, or we can try a hacky-workaround using `replace()` and checking for a particular file that would exist in the folder. \r\n\r\nLet me know if you're comfortable with this, else I can include the hack in here",
"@muellerzr Sounds good to me! Thanks for looking into this 🤗 ",
"Just FYI, this is not a race condition, the error is because you're trying to call os.open() on a directory (not a file), which doesn't work in Windows. See: https://stackoverflow.com/questions/21785127/open-a-directory-on-windows-permission-denied",
"@adamkolar Yep - this PR is addressing the permissions issue with Windows. The permission issue arose from a check that was introduced to handle race conditions when renaming checkpoints #28364 ",
"@amyeroberts I see, my bad"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Windows doesn't like Python trying to open the checkpoint directory this way, so this makes the additional race condition check run only on non-Windows platforms.
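A minimal sketch of the idea (not the exact Trainer code):
```python
# Only run the fd-based rename check on platforms where os.open() on a
# directory is allowed, i.e. everything except Windows.
import os

def rename_checkpoint(staging_dir: str, output_dir: str) -> None:
    os.rename(staging_dir, output_dir)
    if os.name != "nt":  # os.open() on a directory raises PermissionError on Windows
        fd = os.open(output_dir, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
```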
Fixes # (issue)
https://github.com/huggingface/transformers/pull/28364
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28637",
"html_url": "https://github.com/huggingface/transformers/pull/28637",
"diff_url": "https://github.com/huggingface/transformers/pull/28637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28637.patch",
"merged_at": 1706016636000
} |
https://api.github.com/repos/huggingface/transformers/issues/28636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28636/comments | https://api.github.com/repos/huggingface/transformers/issues/28636/events | https://github.com/huggingface/transformers/pull/28636 | 2,093,599,501 | PR_kwDOCUB6oc5ktGNe | 28,636 | [`SigLIP`] Only import tokenizer if sentencepiece available | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @NielsRogge "
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Protects the import of siglip's tokenizer such that users can safely run `from transformers import *` if they don't have sentencepiece installed.
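An illustrative sketch of the guard (the real change goes through the lazy import machinery in the package `__init__`; this is just the idea):
```python
# Only expose the sentencepiece-backed tokenizer when the backend is installed.
from transformers.utils import is_sentencepiece_available

if is_sentencepiece_available():
    from transformers.models.siglip.tokenization_siglip import SiglipTokenizer
else:
    SiglipTokenizer = None  # `from transformers import *` no longer fails on the missing backend
```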
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28636",
"html_url": "https://github.com/huggingface/transformers/pull/28636",
"diff_url": "https://github.com/huggingface/transformers/pull/28636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28636.patch",
"merged_at": 1705936817000
} |
https://api.github.com/repos/huggingface/transformers/issues/28635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28635/comments | https://api.github.com/repos/huggingface/transformers/issues/28635/events | https://github.com/huggingface/transformers/issues/28635 | 2,093,563,935 | I_kwDOCUB6oc58yUAf | 28,635 | Tokenizer `encode/decode` methods are inconsistent, TypeError: argument 'ids': 'list' object cannot be interpreted as an integer | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"https://github.com/huggingface/transformers/blob/83f9196cc44a612ef2bd5a0f721d08cb24885c1f/src/transformers/tokenization_utils_fast.py#L596-L605\r\n\r\nWhy only remove the leading batch axis while return tensor is `None`? I mean consider the annotation of `text` parameter of `_encode_plus` method, we won't need batch axis at all, so why not remove it for all return tensor types?",
"Fully agree with you. Encode can take batched, decode only a single. `batch_decode` is the one that support a batch of inputs. It has been that way for a WHILE now. I plan to deprecate some of this and simplify the API in favor of just encode decode that can support batches and singles. \r\n\r\nWould you like to work on a fix for this? 🤗 ",
"Sure, already working on it, I will create a PR later today.",
"Hi @ArthurZucker, \r\n- Should we keep related APIs as the same as `huggingface/tokenizers` lib? Consider if we only have one `(de)encode` method which not just process single but batches, it will be impossible to distinguish the difference between `List[TextInput]` and `PreTokenizedInput` with its own power without changing the logic or adding extra parameters, consider they are just the same thing if we check their type, so I think:\r\n - We add a `batch` parameter to the `(de)encode` methods, and keeping/adding the `(de)encode_batch` methods. This way won't \"solve\" this issue, cos decode won't be able to know what's the method generated the input, consider we won't add extra info fields to `encode` method, however, it should be acceptable if we well documented it, and such scenario will only happen if users misuse the methods (e.g., they use `decode` method to decode the `encode_batch` returns).\r\n (WIP on this without thinking about the backward compatibilities).\r\n - We change the logic of the function, like if the method receives `list[str]` type input, we only treat it as batched strings, rather than pre-tokenized single string. This way will solve this issue, but we will lose the ability to process either `list[TextInput]` or `PreTokenizedInput` with `encode` method.\r\n\r\n BTW, I wonder to ask why you decided to only remove list-object's leading batch axis in `encode` method? May it can provide some convince for users while using it?\r\n- I prefer to use `Enum`s rather than use \"magic strings\" (e.g., `utils/generic/infer_framework_XXX`) everywhere in the project (for inner methods, fine for API to provide user convenience), during the refracting period, I found lots of the legacy code that used \"magic strings\", even for condition tests.\r\n- Can we try to use `abc` stdlib to define tokenizer base class? (Just a nit, for some cases we won't use it because it is not suitable for some classes, like the `Dataset` class in PyTorch project)\r\n- Consider we already have:\r\n ```python\r\n if is_flax_available():\r\n import jax.numpy as jnp\r\n ```\r\n Then why we have `import tensorflow` everywhere rather than do the same at the top of the source code via `is_tf_available`?\r\n- Consider replacing those monkey patches (e.g., logger) into subclasses/wrapper classes?\r\n- Wanna to rename the following classes (won't do now coz lots of files will be changed):\r\n - `BatchEncoding` to `EntryEncoding`: name \"Batch\" is not suitable for the `encode` method, and the result will not always be batched.\r\n - `TensorType` to `TArrayType`: added new Python `SEQUENCE` enum value.\r\n- move `as_tensor`/`is_tensor` to generic.py",
"Sorry for such a big change in the API I'd rather take care of it, I was more referring to a small PR that supports for now decoding a batch of inputs with decode! \r\n\r\nAbout all the points you mentioned, for me the direction of the library is a bit different. Appart from removing calls to TF everywhere which indeed should be protected the same way as flax! ",
"> Sorry for such a big change in the API I'd rather take care of it, I was more referring to a small PR that supports for now decoding a batch of inputs with decode!\r\n\r\n`decode` itself can't handle such tasks well, as I mentioned before:\r\n> it will be impossible to distinguish the difference between List[TextInput] and PreTokenizedInput with its own power without changing the logic or adding extra parameters\r\n\r\nSo yes, it becomes a \"big change\", since I already created a PR, you may take care of this based on it, no need to sorry 🤗\r\n\r\n> the direction of the library is a bit different\r\n\r\nCan you explain this more? I think most of the points are just about the code style, so it won't affect the direction of anything, but may improve the maintainability :)\r\n\r\n> Appart from removing calls to TF everywhere which indeed should be protected the same way as flax!\r\n\r\nCool, I'm glad that you also think so, we'd better consider this more, consider having `import` statement for frequently used functions is definitely a bad idea coz the function has to look up those libraries every time when it gets called, even Python will only do one true import for one library (heavy operation, so we may also need to consider, when we should have the necessary true import).\r\n\r\n\r\n\r\n\r\n\r\n"
] | 1,705 | 1,707 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Run the following code:
```python
from transformers import AutoTokenizer
text = "test"
tokenizer = AutoTokenizer.from_pretrained("gpt2")
encoded = tokenizer.encode(text, return_tensors='pt')
result_text = tokenizer.decode(encoded, skip_special_tokens=True)
print(result_text)
```
Will raise an exception:
```
Traceback (most recent call last):
File "main.py", line 8, in <module>
tokenizer.decode(encoded, skip_special_tokens=True)
File "/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3748, in decode
return self._decode(
^^^^^^^^^^^^^
File "/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 625, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument 'ids': 'list' object cannot be interpreted as an integer
```
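For reference, continuing from the snippet above, both of the following round-trip without the error today:
```python
# Workarounds: drop the leading batch axis, or use batch_decode for 2-D inputs.
decoded = tokenizer.decode(encoded[0], skip_special_tokens=True)
decoded_batch = tokenizer.batch_decode(encoded, skip_special_tokens=True)
print(decoded)        # test
print(decoded_batch)  # ['test']
```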
### Expected behavior
Should be able to print the original text `"test"`, rather than raising an exception (`TypeError`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28635/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28634/comments | https://api.github.com/repos/huggingface/transformers/issues/28634/events | https://github.com/huggingface/transformers/pull/28634 | 2,093,449,358 | PR_kwDOCUB6oc5kslMZ | 28,634 | Exllama kernels support for AWQ models | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I guess all points are addressed.\r\n@casper-hansen when is 0.1.9 planned ?",
"@younesbelkada The next release will be 0.2.0 🤗. For T4 support, I have not tested it. If AutoGPTQ supports T4 with ExLlama v1 and v2 kernels, AutoAWQ should too as the kernels are the same.\r\n\r\nEDIT: To answer the timeline question. There is no set-in-stone plan for the next release. PRs to be merged before release include AMD support, Marlin support, Qwen2 support, and hopefully PEFT support. I expect this could be done in <1-2 weeks.",
"Awesome! Per my understanding ex-llama + AutoGPTQ should be supported on T4 so it should be all good ! \r\nLet me know whenever you have some progress for the PEFT support so that I'll dive in to add AWQ + PEFT support directly in PEFT "
] | 1,705 | 1,706 | null | MEMBER | null | # What does this PR do?
Following https://github.com/casper-hansen/AutoAWQ/pull/313
ExllamaV2 offers up to a 2x speedup compared to GEMM, while also being compatible with AMD ROCm.
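For context, the intended user-facing call would look roughly like this (the `version="exllama"` value and the checkpoint name are assumptions until the API in this PR is finalized):
```python
# Sketch of loading an AWQ checkpoint with the ExLlama kernels (names are assumptions).
import torch
from transformers import AutoModelForCausalLM, AwqConfig

quantization_config = AwqConfig(version="exllama")  # assumption: selects the new kernels
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-AWQ",  # placeholder AWQ checkpoint
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
    device_map="auto",
)
```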
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SunMarc and @younesbelkada
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28634/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28634",
"html_url": "https://github.com/huggingface/transformers/pull/28634",
"diff_url": "https://github.com/huggingface/transformers/pull/28634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28634.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28633/comments | https://api.github.com/repos/huggingface/transformers/issues/28633/events | https://github.com/huggingface/transformers/pull/28633 | 2,093,350,117 | PR_kwDOCUB6oc5ksPt9 | 28,633 | [`Vilt`] align input and model dtype in the ViltPatchEmbeddings forward pass | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@yao-matrix",
"Pls have a review, thx!",
"Hi @ArthurZucker , could you take a look at this PR? Thx! ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | ## What does this PR do?
Just like in [BlipVisionEmbeddings](https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip/modeling_blip.py#L249), there should also be a dtype alignment in ViltPatchEmbeddings. Otherwise, I get `RuntimeError: Input type (float) and bias type (c10::Half) should be the same` when the "dandelin/vilt-b32-finetuned-vqa" model is loaded in a half-precision data type.
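A minimal sketch of the change (mirroring what BlipVisionEmbeddings does; not the literal diff):
```python
# Cast pixel_values to the projection weight dtype inside the patch embedding,
# so fp16/bf16 checkpoints accept fp32 inputs.
import torch
from torch import nn

class PatchEmbeddingsSketch(nn.Module):
    def __init__(self, num_channels: int = 3, hidden_size: int = 768, patch_size: int = 32):
        super().__init__()
        self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=patch_size)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        target_dtype = self.projection.weight.dtype  # e.g. torch.float16 after half-precision loading
        return self.projection(pixel_values.to(dtype=target_dtype))
```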
## Reproduction
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
import torch
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", torch_dtype=torch.float16).to("cuda")
# prepare inputs
encoding = processor(image, text, return_tensors="pt").to("cuda")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
@ArthurZucker and @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28633/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28633",
"html_url": "https://github.com/huggingface/transformers/pull/28633",
"diff_url": "https://github.com/huggingface/transformers/pull/28633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28633.patch",
"merged_at": 1706195001000
} |
https://api.github.com/repos/huggingface/transformers/issues/28632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28632/comments | https://api.github.com/repos/huggingface/transformers/issues/28632/events | https://github.com/huggingface/transformers/issues/28632 | 2,093,162,870 | I_kwDOCUB6oc58wyF2 | 28,632 | Can't quantize gptq model on CPU runtime? | {
"login": "gesanqiu",
"id": 37237570,
"node_id": "MDQ6VXNlcjM3MjM3NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/37237570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gesanqiu",
"html_url": "https://github.com/gesanqiu",
"followers_url": "https://api.github.com/users/gesanqiu/followers",
"following_url": "https://api.github.com/users/gesanqiu/following{/other_user}",
"gists_url": "https://api.github.com/users/gesanqiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gesanqiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gesanqiu/subscriptions",
"organizations_url": "https://api.github.com/users/gesanqiu/orgs",
"repos_url": "https://api.github.com/users/gesanqiu/repos",
"events_url": "https://api.github.com/users/gesanqiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gesanqiu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @SunMarc I think that the fix should go on optimum side but I am not sure, wdyt? ",
"Hi @gesanqiu, there is indeed an issue. In the meantime, you can do `AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, use_safetensors=True, quantization_config=gptq_config)`. I will fix the issue on optimum @younesbelkada ! \r\n",
"@SunMarc Thx. I also set `cache_block_outputs=False` in GPTQConfig to avoid OOM when quantizing model.layers blocks.",
"Yes, this can also help with oom since we don't cache the output !"
] | 1,705 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GPTQConfig
import torch
model_path = r'/data1/ls/hf_models/multi_lan-mango-dev/'
save_path = r'/data1/ls/hf_models/multi_lan-mango-dev-gptq'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
gptq_config = GPTQConfig(bits=4, dataset="wikitext2", tokenizer=tokenizer, group_size=32, use_exllama=False)
quantized_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map='cpu', use_safetensors=True, quantization_config=gptq_config)
# quantized_model.to("cpu")
quantized_model.save_pretrained(save_path)
```
I have 4*A40 (48G) on my machine, and I tried to quantize a 30B model with `device_map='auto'`, but the GPU memory utilization isn't balanced across the GPUs while quantizing the model.layers blocks and an OOM occurred. So I want to quantize the model on the CPU runtime instead. The logs are shown below:
```shell
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:07<00:00, 2.10it/s]
Traceback (most recent call last):
File "/home/dell/workSpace/test/gptq_hf.py", line 9, in <module>
quantized_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map='cpu', use_safetensors=True, quantization_config=gptq_config)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3780, in from_pretrained
quantizer.quantize_model(model, quantization_config.tokenizer)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/optimum/gptq/quantizer.py", line 431, in quantize_model
model(**data)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward
outputs = self.model(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1025, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward
return F.embedding(
File "/home/dell/anaconda3/envs/vllm-kv_quant/lib/python3.10/site-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
I think the issue is that the model is on CPU but the `input_ids` from the calibration tokenizer end up on `cuda:0`?
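A sketch of the same call without `device_map` (continuing from the snippet above), which keeps the model and the calibration inputs from being pinned to different devices:
```python
# Sketch: omit device_map and let the GPTQ quantizer manage placement itself.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    use_safetensors=True,
    quantization_config=gptq_config,
)
quantized_model.save_pretrained(save_path)
```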
### Expected behavior
Quantizing the model succeeds. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28631/comments | https://api.github.com/repos/huggingface/transformers/issues/28631/events | https://github.com/huggingface/transformers/pull/28631 | 2,093,098,420 | PR_kwDOCUB6oc5krZGk | 28,631 | rm input dtype change in CPU | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @amyeroberts . I have tested it on difference CPUs and didn't find any crashes. Could you please help me review it? Thx!",
"Hi @jiqing-feng, I think the correct solution here is to enable the necessary types in the ASR pipeline, rather than removing this cast.",
"> Hi @jiqing-feng, I think the correct solution here is to enable the necessary types in the ASR pipeline, rather than removing this cast.\r\n\r\nSorry for that I didn't make it clear. We can see that the function name is `_ensure_tensor_on_device`, so it should only ensure the tensor is on the correct device, instead of changing tensor types. Of course, we can enable the necessary types in the ASR pipeline, but not in this function.",
"@jiqing-feng Can you share a code snippet showing usage which this PR fixes and a full traceback of the error without this fix? ",
"> @jiqing-feng Can you share a code snippet showing usage which this PR fixes and a full traceback of the error without this fix?\r\n\r\nHi @amyeroberts . I run the following script in CPU:\r\n```python\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\nfrom datasets import Audio\r\nimport torch\r\n\r\nminds = load_dataset(\"PolyAI/minds14\", name=\"de-DE\", split=\"train\")\r\nminds = minds.cast_column(\"audio\", Audio(sampling_rate=16_000))\r\nexample = minds[0]\r\n\r\nasr = pipeline(\"automatic-speech-recognition\", model=\"maxidl/wav2vec2-large-xlsr-german\", torch_dtype=torch.bfloat16)\r\noutput = asr(example[\"audio\"][\"array\"])\r\nprint(output)\r\n```\r\nwill get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jiqingfe/transformers/test.py\", line 11, in <module>\r\n output = asr(example[\"audio\"][\"array\"]) File \"/home/jiqingfe/transformers/src/transformers/pipelines/automatic_speech_recognition.py\", line 292, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/base.py\", line 1154, in __call__ return next(\r\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py\", line 124, in __next__\r\n item = next(self.iterator)\r\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py\", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params)\r\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/base.py\", line 1068, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/automatic_speech_recognition.py\", line 524, in _forward outputs = self.model(**inputs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/jiqingfe/transformers/src/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 1967, in forward\r\n outputs = self.wav2vec2(\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/jiqingfe/transformers/src/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 1552, in forward\r\n extract_features = self.feature_extractor(input_values)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/jiqingfe/transformers/src/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 460, in forward\r\n hidden_states = conv_layer(hidden_states)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File 
\"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/jiqingfe/transformers/src/transformers/models/wav2vec2/modeling_wav2vec2.py\", line 335, in forward\r\n hidden_states = self.conv(hidden_states)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/conv.py\", line 310, in forward\r\n return self._conv_forward(input, self.weight, self.bias)\r\n File \"/home/jiqingfe/miniconda3/envs/ipex/lib/python3.10/site-packages/torch/nn/modules/conv.py\", line 306, in _conv_forward\r\n return F.conv1d(input, weight, bias, self.stride,\r\nRuntimeError: Input type (torch.FloatTensor) and weight type (CPUBFloat16Type) should be the same or input should be a MKLDNN tensor and weight is a dense tensor\r\n```\r\n\r\nBut it will output the correct answer if applied my changes:\r\n```\r\n{'text': 'ich möchte gerne geld auf mein konto einzallen'}\r\n```",
"Hi @jiqing-feng, thanks for providing more information. This isn't something I think we should merge in. Not all CPUs support bf16 and so removing this will cause issues for users who are running on GPU for prediction. See #17637 for the reasons for adding this in. \r\n\r\nAs mentioned in #28199, autocasting your model before passing to the pipeline is an alternative to adding autocast to the pipeline. \r\n\r\nThe pipelines are not intended to cover all cases, but rather give a simple one-line entry point to predictions. I'd suggest using the [modeling code directly](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.forward.example), where you'll have more control ",
"> Hi @jiqing-feng, thanks for providing more information. This isn't something I think we should merge in. Not all CPUs support bf16 and so removing this will cause issues for users who are running on GPU for prediction. See #17637 for the reasons for adding this in.\r\n> \r\n> As mentioned in #28199, autocasting your model before passing to the pipeline is an alternative to adding autocast to the pipeline.\r\n> \r\n> The pipelines are not intended to cover all cases, but rather give a simple one-line entry point to predictions. I'd suggest using the [modeling code directly](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC.forward.example), where you'll have more control\r\n\r\nHi @amyeroberts . Thanks for your reply. \r\n\r\nFor the 1st issue, we know that not all CPUs support bf16 and fp16, but I think we should let it crash while forward so that the users will know that they shouldn't use low-precision data types, and they can cast model and inputs to fp32.\r\n\r\nFor the 2nd issue, I can't understand why it affects GPU. I ran the code in #17637 , and the device is `cuda:0`, so removing this code won't have any impact.\r\n\r\n\r\ncc @Narsil \r\n",
"@jiqing-feng Apologies - there was a typo on my part. I meant cause issues for users running on CPU for prediction. \r\n\r\nI understand the idea of allowing it to crash, but the reality is that this logic is a breaking change, as it will stop working for users when it previously did work. ",
"> @jiqing-feng Apologies - there was a typo on my part. I meant cause issues for users running on CPU for prediction.\r\n> \r\n> I understand the idea of allowing it to crash, but the reality is that this logic is a breaking change, as it will stop working for users when it previously did work.\r\n\r\nSorry, I don't understand why it stops working for users. On my test, it won't work for bf16 models if we keep this code, and works if we remove this code. Can you give me an example to show how my changes impact users? Thx!\r\n\r\nBTW, I added a warning there to remind users to use fp32 if the device does not support low-precision."
] | 1,705 | 1,708 | null | CONTRIBUTOR | null | Hi @amyeroberts . Refer to [28199](https://github.com/huggingface/transformers/pull/28199). Since `Autocast` cannot integrate into the pipeline, I propose that keep the inputs dtype in the pipeline. Otherwise, it will block the low-precision usage in both ASR and text-to-audio.
BTW, we will be ready for review once we confirm that it works on different CPUs, please keep this PR open. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28631/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28631",
"html_url": "https://github.com/huggingface/transformers/pull/28631",
"diff_url": "https://github.com/huggingface/transformers/pull/28631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28631.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28630/comments | https://api.github.com/repos/huggingface/transformers/issues/28630/events | https://github.com/huggingface/transformers/issues/28630 | 2,093,093,853 | I_kwDOCUB6oc58whPd | 28,630 | Disable removing shared tensors by default | {
"login": "imoneoi",
"id": 26354659,
"node_id": "MDQ6VXNlcjI2MzU0NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/26354659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imoneoi",
"html_url": "https://github.com/imoneoi",
"followers_url": "https://api.github.com/users/imoneoi/followers",
"following_url": "https://api.github.com/users/imoneoi/following{/other_user}",
"gists_url": "https://api.github.com/users/imoneoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imoneoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imoneoi/subscriptions",
"organizations_url": "https://api.github.com/users/imoneoi/orgs",
"repos_url": "https://api.github.com/users/imoneoi/repos",
"events_url": "https://api.github.com/users/imoneoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/imoneoi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @imoneoi \r\nThanks for the issue! \r\nI don't think we can disable sharding by default as it might break many things such as the ability to load models on a free-tier google colab instance. Among many possible options, few fixes that I see for your case and to fix #27293 are:\r\n\r\n1- Warn users if they are using DS to not save their model with `safe_serialization`\r\n2- Make that block optional through an argument `shard_weights=True` and either set it to `False` for DeepSpeed or warn users about it in case they are using DeepSpeed\r\n\r\n--> in general we encourage users to use safetensors, so I would say option 2 might be the best solution here\r\n\r\nWould you be happy to open a PR with one of these solutions ? cc @amyeroberts @pacman100 @muellerzr what do you think ",
"Hmmm I think what @imoneoi is reporting is a different issue than what you're describing @younesbelkada, namely that `safetensors` refuses shared (and not sharded) tensor serialization and therefore removes the copies of the same tensors in the state dict.\r\n\r\nWe're definitely aiming for this to be frictionless, so the more insights we have in the code that fails, the better we'll be able to help.\r\n\r\nThanks @muellerzr for the minimal reproducer on the other thread, I'm pasting it below:\r\n\r\n> ```py\r\n> import torch\r\n> from accelerate import Accelerator\r\n> from accelerate.utils import DeepSpeedPlugin, HfDeepSpeedConfig\r\n> from transformers import AutoModelForCausalLM\r\n> from transformers.modeling_utils import unwrap_model\r\n> \r\n> transformers_config = HfDeepSpeedConfig({\r\n> \"train_micro_batch_size_per_gpu\": 2,\r\n> \"gradient_accumulation_steps\": 2,\r\n> \"gradient_clipping\": 1.0,\r\n> \"offload_optimizer_device\": None,\r\n> \"offload_param_device\": None,\r\n> \"zero3_init_flag\": False,\r\n> \"zero_optimization\": {\r\n> \"stage\": 2,\r\n> },\r\n> })\r\n> \r\n> plugin = DeepSpeedPlugin(transformers_config)\r\n> \r\n> accelerator = Accelerator(deepspeed_plugin=plugin)\r\n> \r\n> model_name = \"bert-base-cased\"\r\n> model = AutoModelForCausalLM.from_pretrained(model_name)\r\n> \r\n> opt = torch.optim.Adam(model.parameters(), lr=1e-5)\r\n> \r\n> model, opt = accelerator._prepare_deepspeed(model, opt)\r\n> \r\n> state_dict = accelerator.get_state_dict(model)\r\n> \r\n> model = unwrap_model(model)\r\n> model.save_pretrained(\r\n> \"testing_fuyu_8b\",\r\n> state_dict=state_dict,\r\n> safe_serialization=True\r\n> )\r\n> ```\r\n\r\ncc @Narsil if you have the bandwidth to take a look, this looks like it's impacting quite a few deepspeed users. Thanks a lot :raised_hands: ",
"Temporary solution: set `safe_serialization=False` will work",
"Did look up, and this snippet works for me with all latest revisions. (accelerate, deepspeed, transformers)\r\n",
"Hello, \r\n\r\n1. Versions:\r\n```\r\n- `transformers` version: 4.37.0\r\n- `Accelerate` version: 0.26.1\r\n- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Numpy version: 1.26.0\r\n- PyTorch version (GPU?): 2.1.2+cu121 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- System RAM: 503.54 GB\r\n- GPU type: NVIDIA A100-SXM4-80GB\r\n- `Accelerate` default config:\r\n\tNot found\r\n- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.2\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.26.1\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.2+cu121 (True)\r\n- Tensorflow version (GPU?): 2.15.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n--------------------------------------------------\r\nDeepSpeed C++/CUDA extension op report\r\n--------------------------------------------------\r\nNOTE: Ops not installed will be just-in-time (JIT) compiled at\r\n runtime if needed. Op compatibility means that your system\r\n meet the required dependencies to JIT install the op.\r\n--------------------------------------------------\r\nJIT compiled ops requires ninja\r\nninja .................. [OKAY]\r\n--------------------------------------------------\r\nop name ................ installed .. compatible\r\n--------------------------------------------------\r\n [WARNING] async_io requires the dev libaio .so object and headers but these were not found.\r\n [WARNING] async_io: please install the libaio-dev package with apt\r\n [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.\r\nasync_io ............... [NO] ....... [NO]\r\nfused_adam ............. [NO] ....... [OKAY]\r\ncpu_adam ............... [NO] ....... [OKAY]\r\ncpu_adagrad ............ [NO] ....... [OKAY]\r\ncpu_lion ............... [NO] ....... [OKAY]\r\n [WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH\r\nevoformer_attn ......... [NO] ....... [NO]\r\nfused_lamb ............. [NO] ....... [OKAY]\r\nfused_lion ............. [NO] ....... [OKAY]\r\ninference_core_ops ..... [NO] ....... [OKAY]\r\ncutlass_ops ............ [NO] ....... [OKAY]\r\nquantizer .............. [NO] ....... [OKAY]\r\nragged_device_ops ...... [NO] ....... [OKAY]\r\nragged_ops ............. [NO] ....... [OKAY]\r\nrandom_ltd ............. [NO] ....... [OKAY]\r\n [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1\r\n [WARNING] using untested triton version (2.1.0), only 1.0.0 is known to be compatible\r\nsparse_attn ............ [NO] ....... [NO]\r\nspatial_inference ...... [NO] ....... [OKAY]\r\ntransformer ............ [NO] ....... [OKAY]\r\nstochastic_transformer . [NO] ....... [OKAY]\r\ntransformer_inference .. [NO] ....... [OKAY]\r\n--------------------------------------------------\r\nDeepSpeed general environment info:\r\ntorch install path ............... ['/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch']\r\ntorch version .................... 2.1.2+cu121\r\ndeepspeed install path ........... ['/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed']\r\ndeepspeed info ................... 
0.12.6, unknown, unknown\r\ntorch cuda version ............... 12.1\r\ntorch hip version ................ None\r\nnvcc version ..................... 12.1\r\ndeepspeed wheel compiled w. ...... torch 2.1, cuda 12.1\r\nshared memory (/dev/shm) size .... 251.77 GB\r\n```\r\n2. Code:\r\n```\r\nimport torch\r\nfrom accelerate import Accelerator\r\nfrom accelerate.utils import DeepSpeedPlugin, HfDeepSpeedConfig\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers.modeling_utils import unwrap_model\r\n\r\ntransformers_config = HfDeepSpeedConfig({\r\n \"train_micro_batch_size_per_gpu\": 2,\r\n \"gradient_accumulation_steps\": 2,\r\n \"gradient_clipping\": 1.0,\r\n \"offload_optimizer_device\": None,\r\n \"offload_param_device\": None,\r\n \"zero3_init_flag\": False,\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"stage3_gather_16bit_weights_on_model_save\": True\r\n },\r\n})\r\n\r\nplugin = DeepSpeedPlugin(transformers_config)\r\n\r\naccelerator = Accelerator(deepspeed_plugin=plugin)\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\nopt = torch.optim.Adam(model.parameters(), lr=1e-5)\r\n\r\nmodel, opt = accelerator._prepare_deepspeed(model, opt)\r\n\r\nstate_dict = accelerator.get_state_dict(model)\r\n\r\nmodel = unwrap_model(model)\r\nmodel.save_pretrained(\r\n \"remove\",\r\n state_dict=state_dict,\r\n safe_serialization=True\r\n)\r\n```\r\n\r\n3. Command:\r\n```\r\ntorchrun --nproc-per-node 2 issue_28630.py\r\n```\r\n\r\n4. Output:\r\n```\r\n[2024-01-24 13:01:29,798] [INFO] [config.py:974:print_user_config] json = {\r\n \"train_micro_batch_size_per_gpu\": 2, \r\n \"gradient_accumulation_steps\": 2, \r\n \"gradient_clipping\": 1.0, \r\n \"offload_optimizer_device\": null, \r\n \"offload_param_device\": null, \r\n \"zero3_init_flag\": false, \r\n \"zero_optimization\": {\r\n \"stage\": 3, \r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n }, \r\n \"steps_per_print\": inf, \r\n \"fp16\": {\r\n \"enabled\": false\r\n }, \r\n \"bf16\": {\r\n \"enabled\": false\r\n }, \r\n \"zero_allow_untested_optimizer\": true\r\n}\r\nRemoved shared tensor {'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.weight', 
'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.weight', 'cls.predictions.transform.dense.weight', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.embeddings.position_embeddings.weight', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading\r\n```\r\n\r\nObservations:\r\n1. Happens when using DeepSpeed Stage 3 when weights from many layers are concatenated, flattened and sharded across device(s). Basically when using flat tensors from which views are taken for individual layers as mentioned by @imoneoi \r\n2. Also, this is not limited to just DeepSpeed. For example, when using Torch compile also as shown by https://github.com/huggingface/transformers/issues/27293#issuecomment-1870466945 and I can reproduce it.\r\n3. Also, it again happens for FSDP too. 
Able to reproduce it for https://github.com/huggingface/accelerate/issues/2155#issuecomment-1874303370 with below command:\r\n```\r\naccelerate launch --config_file fsdp_config.yaml run_mlm_no_trainer.py \\\r\n --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 \\\r\n --model_name_or_path bert-base-cased \\\r\n --output_dir /tmp/test-mlm\r\n```\r\nwith config:\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndistributed_type: FSDP\r\ndowncast_bf16: 'no'\r\nfsdp_config:\r\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\r\n fsdp_backward_prefetch: BACKWARD_PRE\r\n fsdp_cpu_ram_efficient_loading: true\r\n fsdp_forward_prefetch: false\r\n fsdp_offload_params: false\r\n fsdp_sharding_strategy: FULL_SHARD\r\n fsdp_state_dict_type: FULL_STATE_DICT\r\n fsdp_sync_module_states: true\r\n fsdp_transformer_layer_cls_to_wrap: BertLayer\r\n fsdp_use_orig_params: true\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: fp16\r\nnum_machines: 1\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```\r\noutput:\r\n```\r\nRemoved shared tensor {'cls.predictions.transform.dense.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'bert.embeddings.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading\r\n```\r\n\r\nPossible Solutions:\r\nDisable `safetensors` for DeepSpeed/FSDP when there are shared tensors other then the ones specified via `model.config.tie_encoder_decoder` and `model.config.tie_word_embeddings`\r\n\r\n",
"I think the reproducer from Zach needs fixes. With below change to only call `save_pretrained` on main process, the checkpoint is saved properly when using DeepSpeed.\r\n\r\n```diff\r\nimport torch\r\nfrom accelerate import Accelerator\r\nfrom accelerate.utils import DeepSpeedPlugin, HfDeepSpeedConfig\r\nfrom transformers import AutoModelForCausalLM\r\nfrom transformers.modeling_utils import unwrap_model\r\n\r\ntransformers_config = HfDeepSpeedConfig({\r\n \"train_micro_batch_size_per_gpu\": 2,\r\n \"gradient_accumulation_steps\": 2,\r\n \"gradient_clipping\": 1.0,\r\n \"offload_optimizer_device\": None,\r\n \"offload_param_device\": None,\r\n \"zero3_init_flag\": False,\r\n \"zero_optimization\": {\r\n \"stage\": 3,\r\n \"stage3_gather_16bit_weights_on_model_save\": True\r\n },\r\n})\r\n\r\nplugin = DeepSpeedPlugin(transformers_config)\r\n\r\naccelerator = Accelerator(deepspeed_plugin=plugin)\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\nopt = torch.optim.Adam(model.parameters(), lr=1e-5)\r\n\r\nmodel, opt = accelerator._prepare_deepspeed(model, opt)\r\n\r\nstate_dict = accelerator.get_state_dict(model)\r\n\r\n+ if accelerator.is_main_process:\r\n model = unwrap_model(model)\r\n model.save_pretrained(\r\n \"remove\",\r\n state_dict=state_dict,\r\n safe_serialization=True\r\n )\r\n```",
"@pacman100 @younesbelkada Thanks for your observations! Should we consider disabling `safetensors` and warn the user about safetensors is disabled when shared tensors are found as a quick fix to mitigate issues in deepspeed, FSDP and torch.compile?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,706 | null | NONE | null | ### System Info
```
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, torchrun
```
### Who can help?
@younesbelkada @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Minimal reproduction on DeepSpeed can be found at https://github.com/huggingface/transformers/issues/27293 where disabling safe_serialization solves this issue.
Related (DeepSpeed): https://github.com/huggingface/transformers/issues/27293
### Expected behavior
Consider disabling removing shared tensors by default in https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L2409-L2452. This piece of code determines shared tensors through storage locations, but there are many cases where tensors are views of a larger tensor and thus share the same location.
One example is when `q_proj`, `k_proj`, and `v_proj` are views of `qkv_proj`; another is DeepSpeed ZeRO, where all parameters are views of a large flat tensor. We've observed failures in both cases.
Besides, not removing shared tensors usually does not cause a large storage overhead, as common shared tensors (such as tied embeddings) make up only a small fraction of the total parameters.
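To make the failure mode concrete, here is a small self-contained sketch (toy shapes and made-up names, not the actual `modeling_utils` code) of how grouping tensors by storage location lumps together weights that merely share one flat buffer:

```python
import torch
from collections import defaultdict

# Toy illustration: q/k/v weights kept as views of a single flat buffer,
# the way DeepSpeed ZeRO (or a fused qkv_proj) lays parameters out.
flat = torch.randn(3 * 16)
state_dict = {
    "q_proj.weight": flat[0:16].view(4, 4),
    "k_proj.weight": flat[16:32].view(4, 4),
    "v_proj.weight": flat[32:48].view(4, 4),
}

# Grouping by storage pointer -- roughly what the shared-tensor check does --
# puts all three distinct weights in one group, so two of them get dropped on save.
groups = defaultdict(list)
for name, tensor in state_dict.items():
    groups[tensor.untyped_storage().data_ptr()].append(name)

print(groups)  # a single group containing all three names, although nothing is actually tied
```

For reference, the warning produced while saving looks like this: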
```
Removed shared tensor {'model.layers.27.self_attn.k_proj.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.layers.23.self_attn.v_proj.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.14.self_attn.v_proj.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.18.self_attn.k_proj.weight', 'model.layers.24.self_attn.k_proj.weight', 'model.layers.21.self_attn.k_proj.weight', 'model.layers.16.self_attn.v_proj.weight', 'model.layers.28.self_attn.k_proj.weight', 'model.layers.29.self_attn.k_proj.weight', 'model.layers.30.self_attn.k_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.6.self_attn.v_proj.weight', 'model.layers.28.self_attn.v_proj.weight', 'model.layers.30.self_attn.v_proj.weight', 'model.layers.18.self_attn.v_proj.weight', 'model.layers.17.self_attn.k_proj.weight', 'model.layers.29.self_attn.v_proj.weight', 'model.layers.19.self_attn.k_proj.weight', 'model.layers.12.self_attn.k_proj.weight', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.15.self_attn.k_proj.weight', 'model.layers.21.self_attn.v_proj.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.13.self_attn.k_proj.weight', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.31.self_attn.k_proj.weight', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.25.self_attn.v_proj.weight', 'model.layers.22.self_attn.k_proj.weight', 'model.layers.9.self_attn.v_proj.weight', 'model.layers.23.self_attn.k_proj.weight', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.15.self_attn.v_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.3.self_attn.v_proj.weight', 'model.layers.24.self_attn.v_proj.weight', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.20.self_attn.v_proj.weight', 'model.layers.31.self_attn.v_proj.weight', 'model.layers.20.self_attn.k_proj.weight', 'model.layers.26.self_attn.v_proj.weight', 'model.layers.14.self_attn.k_proj.weight', 'model.layers.22.self_attn.v_proj.weight', 'model.layers.9.self_attn.k_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.17.self_attn.v_proj.weight', 'model.layers.27.self_attn.v_proj.weight', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.16.self_attn.k_proj.weight', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.13.self_attn.v_proj.weight', 'model.layers.26.self_attn.k_proj.weight', 'model.layers.12.self_attn.v_proj.weight', 'model.layers.19.self_attn.v_proj.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.25.self_attn.k_proj.weight', 'model.layers.11.self_attn.k_proj.weight'} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28630/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28629/comments | https://api.github.com/repos/huggingface/transformers/issues/28629/events | https://github.com/huggingface/transformers/issues/28629 | 2,093,006,116 | I_kwDOCUB6oc58wL0k | 28,629 | Fast tokenizer's time complexity is not linear | {
"login": "getao",
"id": 12735658,
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/getao",
"html_url": "https://github.com/getao",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"organizations_url": "https://api.github.com/users/getao/orgs",
"repos_url": "https://api.github.com/users/getao/repos",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"received_events_url": "https://api.github.com/users/getao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Feel free to check this issue (duplicate) #25873",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,706 | null | NONE | null | ### System Info
torch 2.1.2
transformers 4.36.2
### Who can help?
@ArthurZucker
I'm using the streaming dataloader to train on a large dataset. However, I find that some processes often get stuck, which eventually leads to an NCCL timeout. After checking carefully, I found the problem may come from the tokenizer.
When a batch (I think it is 1000 by default) contains too many tokens (e.g., the batch has a document that is a whole book and is very long), the tokenization process becomes extremely slow. I tested the tokenization efficiency for sequences of different lengths and found that the time cost is not linear in the sequence length but looks quadratic.
Tokenizing a 50k-word sequence costs 0.5s but tokenizing a 500k-word sequence costs 70s (about 140x slower).
I don't know if it is a bug. If it is by design, how can I prevent the tokenizer from getting stuck on batches with too many tokens? I think one way is to reduce the batch size (default 1000). Is there any other way?
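In the meantime I'm working around it by pre-splitting very long documents before they reach the tokenizer. A rough sketch (the helper below is mine, not from the library, and the fixed character chunking can split a word at the boundaries):

```python
def tokenize_long_text(tokenizer, text, chunk_chars=50_000):
    """Tokenize `text` in fixed-size character chunks so each tokenizer call
    stays short; the token ids are concatenated afterwards."""
    ids = []
    for start in range(0, len(text), chunk_chars):
        chunk = text[start:start + chunk_chars]
        ids.extend(tokenizer(chunk, add_special_tokens=False)["input_ids"])
    return ids
```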
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
import time
def test_time(test_str):
a = time.time()
tokens = tokenizer(test_str)
b = time.time()
return b-a
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
test_str = "I love NLP. " * 100000 # we can change the number to try different lengths
time_cost = test_time(test_str)
print(time_cost)
```
### Expected behavior
Time cost increasing linearly with the sequence length | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28629/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28628/comments | https://api.github.com/repos/huggingface/transformers/issues/28628/events | https://github.com/huggingface/transformers/pull/28628 | 2,092,857,414 | PR_kwDOCUB6oc5kqlf4 | 28,628 | Support single token decode for `CodeGenTokenizer` | {
"login": "cmathw",
"id": 108584265,
"node_id": "U_kgDOBnjdSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108584265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmathw",
"html_url": "https://github.com/cmathw",
"followers_url": "https://api.github.com/users/cmathw/followers",
"following_url": "https://api.github.com/users/cmathw/following{/other_user}",
"gists_url": "https://api.github.com/users/cmathw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmathw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmathw/subscriptions",
"organizations_url": "https://api.github.com/users/cmathw/orgs",
"repos_url": "https://api.github.com/users/cmathw/repos",
"events_url": "https://api.github.com/users/cmathw/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmathw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
This PR should fix #28627 by first converting `token_ids` to a list in the `decode` method of the `CodeGenTokenizer` class. No new tests were added, but I'm happy to write some if needed.
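In sketch form, the idea is to normalise the incoming ids before the parent `decode` logic iterates over them (a hypothetical helper for illustration, not the literal diff):

```python
import numpy as np
import torch

def _as_python_ids(token_ids):
    # Convert framework tensors (including 0-d tensors) into plain Python
    # ints/lists so that downstream iteration in `decode` never sees a 0-d tensor.
    if isinstance(token_ids, (torch.Tensor, np.ndarray)):
        return token_ids.tolist()
    return token_ids
```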
## Example
```python3
from transformers.models.auto.tokenization_auto import AutoTokenizer
phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
a = "The cat sat on the mat"
gpt2_tokens = gpt2_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
gpt2_str_tokens = gpt2_tokenizer.batch_decode(gpt2_tokens) # Essentially: [gpt2_tokenizer.decode(seq) for seq in gpt2_tokens]
print(gpt2_str_tokens) # <-- This is fine and will output: ['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens[0])]
print(gpt2_single_decode) # <-- Decoding a 0-D tensor, this is fine and will output: ['<|endoftext|>']
phi_tokens = phi_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens) # Essentially: [phi_tokenizer.decode(seq) for seq in phi_tokens]
print(phi_str_tokens) # <-- Cannot do this due to below...
phi_single_decode = [phi_tokenizer.decode(phi_tokens[0])]
print(phi_single_decode) # <-- Cannot decode a 0-D Tensor, hence cannot do above either
single_tok = phi_tokens[0].detach().cpu().tolist()
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens)]
phi_single_decode = [phi_tokenizer.decode(phi_tokens)]
```
## Output before fix:
```bash
TypeError: iteration over a 0-d tensor
```
## Output after fix:
```bash
['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
['<|endoftext|>']
['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
['<|endoftext|>']
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker @rooa | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28628/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28628",
"html_url": "https://github.com/huggingface/transformers/pull/28628",
"diff_url": "https://github.com/huggingface/transformers/pull/28628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28628.patch",
"merged_at": 1706023645000
} |
https://api.github.com/repos/huggingface/transformers/issues/28627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28627/comments | https://api.github.com/repos/huggingface/transformers/issues/28627/events | https://github.com/huggingface/transformers/issues/28627 | 2,092,841,862 | I_kwDOCUB6oc58vjuG | 28,627 | Support decoding single tokens with `CodeGenTokenizer` | {
"login": "cmathw",
"id": 108584265,
"node_id": "U_kgDOBnjdSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/108584265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmathw",
"html_url": "https://github.com/cmathw",
"followers_url": "https://api.github.com/users/cmathw/followers",
"following_url": "https://api.github.com/users/cmathw/following{/other_user}",
"gists_url": "https://api.github.com/users/cmathw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmathw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmathw/subscriptions",
"organizations_url": "https://api.github.com/users/cmathw/orgs",
"repos_url": "https://api.github.com/users/cmathw/repos",
"events_url": "https://api.github.com/users/cmathw/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmathw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"thanks for the detailed issue! "
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help?
@ArthurZucker @rooa
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
``` python3
from transformers.models.auto.tokenization_auto import AutoTokenizer
phi_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2",
add_bos_token=True,
use_fast=False,
trust_remote_code=True)
a = "The cat sat on the mat"
gpt2_tokens = gpt2_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
gpt2_str_tokens = gpt2_tokenizer.batch_decode(gpt2_tokens) # Essentially: [gpt2_tokenizer.decode(seq) for seq in gpt2_tokens]
print(gpt2_str_tokens) # <-- This is fine and will output: ['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']
gpt2_single_decode = [gpt2_tokenizer.decode(gpt2_tokens[0])]
print(gpt2_single_decode) # <-- Decoding a 0-D tensor, this is fine and will output: ['<|endoftext|>']
phi_tokens = phi_tokenizer(a, return_tensors="pt")["input_ids"][0] # torch.Size([7])
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens) # Essentially: [phi_tokenizer.decode(seq) for seq in phi_tokens]
print(phi_str_tokens) # <-- Cannot do this due to below...
phi_single_decode = [phi_tokenizer.decode(phi_tokens[0])]
print(phi_single_decode) # <-- Cannot decode a 0-D Tensor, hence cannot do above either
```
Returns:
TypeError: iteration over a 0-d tensor
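A caller-side workaround that seems to work for me in the meantime is to turn the 0-d tensor into a plain Python int before decoding (using the variables from the reproduction above):

```python
single_id = phi_tokens[0].item()            # 0-d tensor -> python int
print(phi_tokenizer.decode(single_id))      # '<|endoftext|>'

# decode every position one by one
phi_str_tokens = [phi_tokenizer.decode(t.item()) for t in phi_tokens]
```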
### Expected behavior
In the above example,
```python3
phi_str_tokens = phi_tokenizer.batch_decode(phi_tokens)
```
Should return: ['<|endoftext|>', 'The', ' cat', ' sat', ' on', ' the', ' mat']. This is what the gpt2 tokenizer returns for example.
```python3
phi_single_decode = [phi_tokenizer.decode(phi_tokens)]
```
Should return: ['<|endoftext|>']. This is what the gpt2 tokenizer returns for example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28627/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28626/comments | https://api.github.com/repos/huggingface/transformers/issues/28626/events | https://github.com/huggingface/transformers/issues/28626 | 2,092,734,450 | I_kwDOCUB6oc58vJfy | 28,626 | No, you cannot set a `token` to an `id`. It is the same as `tokenzier.pad_token_id = 0` if `tokenizer.eos_token_id` is `0` | {
"login": "rafa852",
"id": 59406764,
"node_id": "MDQ6VXNlcjU5NDA2NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/59406764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafa852",
"html_url": "https://github.com/rafa852",
"followers_url": "https://api.github.com/users/rafa852/followers",
"following_url": "https://api.github.com/users/rafa852/following{/other_user}",
"gists_url": "https://api.github.com/users/rafa852/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafa852/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafa852/subscriptions",
"organizations_url": "https://api.github.com/users/rafa852/orgs",
"repos_url": "https://api.github.com/users/rafa852/repos",
"events_url": "https://api.github.com/users/rafa852/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafa852/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @rafa852 - what is the issue being raised here?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,705 | null | NONE | null | No, you cannot set a `token` to an `id`. It is the same as `tokenzier.pad_token_id = 0` if `tokenizer.eos_token_id` is `0`
_Originally posted by @ArthurZucker in https://github.com/huggingface/transformers/issues/26072#issuecomment-1859852130_
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28626/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28626/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28625/comments | https://api.github.com/repos/huggingface/transformers/issues/28625/events | https://github.com/huggingface/transformers/issues/28625 | 2,092,731,446 | I_kwDOCUB6oc58vIw2 | 28,625 | ESM Rotary Embedding implementation is not TorchScript safe | {
"login": "ChenchaoZhao",
"id": 35147961,
"node_id": "MDQ6VXNlcjM1MTQ3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenchaoZhao",
"html_url": "https://github.com/ChenchaoZhao",
"followers_url": "https://api.github.com/users/ChenchaoZhao/followers",
"following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions",
"organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs",
"repos_url": "https://api.github.com/users/ChenchaoZhao/repos",
"events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @Rocketknight1 ",
"Hi @ChenchaoZhao, I think you're correct - would you be willing to open a PR to fix the issue?",
"Hi @Rocketknight1, sure. Anything else other than the contribution guide I should be aware before making the PR?https://github.com/huggingface/transformers/blob/03cc17775b961d16cc4d0d7ab0c8487120d0b708/CONTRIBUTING.md",
"Hi @ChenchaoZhao, that guide should cover everything. Just open the PR to transformers and tag me in it + link this issue. And thank you - it's an important fix so it's great that you're willing to handle it!"
] | 1,705 | 1,706 | null | NONE | null | ### System Info
This issue is independent of the environment. It's purely about the PyTorch implementation of the ESM position embedding.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load an ESM model with `from_pretrained`
2. Move it to CUDA
3. Trace it with `torch.jit.trace`
### Expected behavior
It may crash when you trace the model.
Even if it doesn't crash at trace time, when you save the model, move it to a different device, and perform inference, it will crash with a device error.
The reason is that the cached sin and cos are NOT registered as buffers, which means PyTorch will not know to move these plain Python attributes when the `to` method is called. I would suggest copying the Llama RoPE implementation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28625/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28624/comments | https://api.github.com/repos/huggingface/transformers/issues/28624/events | https://github.com/huggingface/transformers/issues/28624 | 2,092,675,763 | I_kwDOCUB6oc58u7Kz | 28,624 | WhisperForAudioClassification throws errors while using use_weighted_layer_sum | {
"login": "chercheurkg",
"id": 128296694,
"node_id": "U_kgDOB6Wm9g",
"avatar_url": "https://avatars.githubusercontent.com/u/128296694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chercheurkg",
"html_url": "https://github.com/chercheurkg",
"followers_url": "https://api.github.com/users/chercheurkg/followers",
"following_url": "https://api.github.com/users/chercheurkg/following{/other_user}",
"gists_url": "https://api.github.com/users/chercheurkg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chercheurkg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chercheurkg/subscriptions",
"organizations_url": "https://api.github.com/users/chercheurkg/orgs",
"repos_url": "https://api.github.com/users/chercheurkg/repos",
"events_url": "https://api.github.com/users/chercheurkg/events{/privacy}",
"received_events_url": "https://api.github.com/users/chercheurkg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @chercheurkg, thanks for raising an issue!\r\n\r\nThis was fixed in #28563 and is part of the most recent release. Could you try installing v4.37 to verify this resolves your issue? \r\n\r\n`pip install -U transformers`",
"@amyeroberts ,\r\nThanks for your reply! After upgrading to v4.37, the issue seems to be resolved. ",
"Great to hear that @chercheurkg - closing as complete!"
] | 1,705 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.9
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For a classification task, I tried to fine-tune the **whisper-small** pre-trained model with WhisperForAudioClassification, setting use_weighted_layer_sum to true. It threw the following error.
```
File "some_path\site-packages\torch\amp\autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "some_path\site-packages\transformers\models\whisper\modeling_whisper.py", line 2418, in forward
hidden_states = torch.stack(encoder_outputs, dim=1)
TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not BaseModelOutput
0%| | 0/2085 [00:52<?, ?it/s]
```
1. Use the **whisper-small** pretrained model and set `use_weighted_layer_sum` to true
```
config = AutoConfig.from_pretrained(
'openai/whisper-small',
..........
)
config.use_weighted_layer_sum = True
```
2. Start training it on a labeled dataset
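From the traceback, `torch.stack` is being applied to the `BaseModelOutput` object itself rather than to the tuple of per-layer hidden states. The weighted-layer-sum pooling presumably needs to stack the hidden-states tuple instead — a toy, self-contained sketch with invented shapes (not the actual model code):

```python
import torch

# stand-ins: 6 layer outputs (embeddings + 5 layers), batch of 2, 10 frames, hidden size 8
num_layers, batch, frames, dim = 6, 2, 10, 8
all_hidden_states = tuple(torch.randn(batch, frames, dim) for _ in range(num_layers))
layer_weights = torch.nn.Parameter(torch.ones(num_layers) / num_layers)

stacked = torch.stack(all_hidden_states, dim=1)                    # (batch, num_layers, frames, dim)
norm_weights = torch.nn.functional.softmax(layer_weights, dim=-1)
pooled_input = (stacked * norm_weights.view(-1, 1, 1)).sum(dim=1)  # weighted sum over layers
print(pooled_input.shape)  # torch.Size([2, 10, 8])
```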
### Expected behavior
It should not throw the above error as it should work for both `use_weighted_layer_sum = True` and `use_weighted_layer_sum = False` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28623/comments | https://api.github.com/repos/huggingface/transformers/issues/28623/events | https://github.com/huggingface/transformers/issues/28623 | 2,092,648,273 | I_kwDOCUB6oc58u0dR | 28,623 | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn while model parameters requires_grad is True | {
"login": "zhongshsh",
"id": 62104945,
"node_id": "MDQ6VXNlcjYyMTA0OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62104945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongshsh",
"html_url": "https://github.com/zhongshsh",
"followers_url": "https://api.github.com/users/zhongshsh/followers",
"following_url": "https://api.github.com/users/zhongshsh/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongshsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongshsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongshsh/subscriptions",
"organizations_url": "https://api.github.com/users/zhongshsh/orgs",
"repos_url": "https://api.github.com/users/zhongshsh/repos",
"events_url": "https://api.github.com/users/zhongshsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongshsh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I solved the problem by setting gradient_checkpointing_kwargs={\"use_reentrant\":False}, but I don’t know why.",
"cc @younesbelkada ",
"@ArthurZucker hi, I also have another problem. When I loaded Mixtral using `device_map` as `auto` in a node of 8 GPUs (A100 80G), it would not use the 8-th GPU although I changed the `device_map` to `balance`. \r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')\r\n```\r\n\r\nHow can I do to load the model in all GPUs balancedly? \r\n\r\nI tried to use `max_memory` to solve my problem, but it would cause other problems when using Trainer. For example, I couldn't save the model when using RAM (although I set the `max_memory` to be large (14G per GPU), it still used RAM).",
"Hi @zhongshsh \r\nIndeed to fix that issue you need either to\r\n1- call `model.enable_input_require_grads()`\r\nor\r\n2- pass `gradient_checkpointing_kwargs={\"use_reentrant\":False}` \r\nRegarding your second point have you tried with `\"balanced\"` (instead of `\"balance\"`)? Although I think `\"auto\"` should evenly split the model across all available GPUs cc @SunMarc ",
"Hi @younesbelkada thx for your reply. \r\n\r\nYes, I try `balanced`. You can reproduce this problem easily by using code \r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto') # or balanced\r\n```"
] | 1,705 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.27.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I load `Mixtral` with `device_map` set to `auto` (I have to use `auto`, otherwise OOM), then I set `requires_grad=True` for some of the model parameters (e.g., `model.model.layers[0]`), and use `Trainer` to fine-tune it.
I launch the code with `python xx.py`, as explained in https://github.com/huggingface/accelerate/issues/1840#issuecomment-1683105994.
The error message I got is as follows:
```
Traceback (most recent call last):
File "xx.py", line 461, in <module>
trainer.train()
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 1864, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3/lib/python3.11/site-packages/transformers/trainer.py", line 2763, in training_step
self.accelerator.backward(loss)
File "miniconda3/lib/python3.11/site-packages/accelerate/accelerator.py", line 1964, in backward
loss.backward(**kwargs)
File "miniconda3/lib/python3.11/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "miniconda3/lib/python3.11/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
If I instead set the grad of other modules such as `lm_head` to True, the code runs successfully. I wonder whether the above error is caused by setting `device_map` to `auto`?
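For reference, a minimal sketch of the partial fine-tuning setup (layer choice and hyper-parameters are arbitrary; it uses the non-reentrant checkpointing workaround mentioned in the comments):

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto", torch_dtype=torch.bfloat16
)

# freeze everything, then unfreeze the single block I want to tune
model.requires_grad_(False)
model.model.layers[0].requires_grad_(True)

# make the checkpointed inputs carry grad even though the embeddings are frozen
model.enable_input_require_grads()

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
# `model` and `args` are then passed to Trainer together with the dataset as usual
```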
### Expected behavior
Make some of `Mixtral`'s parameters require grad (`requires_grad=True`) and fine-tune it with `Trainer`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28623/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28623/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28622/comments | https://api.github.com/repos/huggingface/transformers/issues/28622/events | https://github.com/huggingface/transformers/issues/28622 | 2,092,639,091 | I_kwDOCUB6oc58uyNz | 28,622 | Can `LlamaTokenizerFast` support the argument `add_prefix_space = False` | {
"login": "hnyls2002",
"id": 95566987,
"node_id": "U_kgDOBbI8iw",
"avatar_url": "https://avatars.githubusercontent.com/u/95566987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hnyls2002",
"html_url": "https://github.com/hnyls2002",
"followers_url": "https://api.github.com/users/hnyls2002/followers",
"following_url": "https://api.github.com/users/hnyls2002/following{/other_user}",
"gists_url": "https://api.github.com/users/hnyls2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hnyls2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hnyls2002/subscriptions",
"organizations_url": "https://api.github.com/users/hnyls2002/orgs",
"repos_url": "https://api.github.com/users/hnyls2002/repos",
"events_url": "https://api.github.com/users/hnyls2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/hnyls2002/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, #28010 will add support for this! 🤗 ",
"Thanks, I am very excited about this."
] | 1,705 | 1,708 | 1,708 | NONE | null | ### System Info
With `transformers==4.36.2`
It seems the argument `add_prefix_space` has no effect here.
### Who can help?
@ArthurZucker
### Reproduction
```
>>> from transformers import LlamaTokenizerFast
>>> tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer", add_prefix_space = False)
>>> tokenizer.tokenize("hello")
['▁hello']
>>> tokenizer.decode(tokenizer.encode("hello"))
'<s> hello'
```
### Expected behavior
Is this a bug, or am I using it incorrectly? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28622/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28621/comments | https://api.github.com/repos/huggingface/transformers/issues/28621/events | https://github.com/huggingface/transformers/pull/28621 | 2,092,497,369 | PR_kwDOCUB6oc5kpdul | 28,621 | ⚠️ Raise `Exception` when trying to generate 0 tokens ⚠️ | {
"login": "danielkorat",
"id": 32893314,
"node_id": "MDQ6VXNlcjMyODkzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielkorat",
"html_url": "https://github.com/danielkorat",
"followers_url": "https://api.github.com/users/danielkorat/followers",
"following_url": "https://api.github.com/users/danielkorat/following{/other_user}",
"gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions",
"organizations_url": "https://api.github.com/users/danielkorat/orgs",
"repos_url": "https://api.github.com/users/danielkorat/repos",
"events_url": "https://api.github.com/users/danielkorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielkorat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @gante \r\nCan you please have a look?\r\nNot sure why the tests are failing.",
"@danielkorat the test is failing because it is poorly parameterized -- it raises the newly added exception. I'd like to ask you to try to fix the test parameterization (ping me if you get stuck :) )",
"> @danielkorat the test is failing because it is poorly parameterized -- it raises the newly added exception. I'd like to ask you to try to fix the test parameterization (ping me if you get stuck :) )\r\n\r\nHi @gante @amyeroberts, \r\nI made the requested changes and fixed the test above as well. \r\nThis test is now failing and it seems to be unrelated to my changes:\r\n```bash\r\n_ FlaxWav2Vec2ModelTest.test_equivalence_pt_to_flax [FlaxWav2Vec2ForPreTraining] _\r\n....\r\nE AssertionError: 1.1444092e-05 not less than or equal to 1e-05 : outputs.codevector_perplexity: Difference between PyTorch and Flax is 1.1444091796875e-05 (>= 1e-05).\r\n```",
"It was probably a flaky test 😉 ",
"Thanks @ArthurZucker, I changed the title",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@ArthurZucker can you merge please?",
"Thanks for the PR 😉 ",
"@danielkorat thank you for the fix 💛 "
] | 1,705 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
Currently, setting `max_new_tokens=0` produces 1 token instead of 0, and only a warning is produced.
To prevent unexpected patterns of generation, this warning should be changed to an `Exception`.
### Example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/tiny_starcoder_py"
tokenizer = AutoTokenizer.from_pretrained("bigcode/tiny_starcoder_py")
model = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py")
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=0)
print(f"Input length: {len(inputs['input_ids'][0])}")
print(f"Output length: {len(outputs[0])}")
```
### Output before fix:
```bash
/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py:1136: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Input length: 7
Output length: 8
```
### Output after fix:
```bash
Traceback (most recent call last):
File "/home/sdp/fix-zero-max-new-tokens/test.py", line 8, in <module>
outputs = model.generate(**inputs,
File "/storage/sdp/anaconda3/envs/fix-zero-max-new-tokens/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py", line 1396, in generate
self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
File "/home/sdp/fix-zero-max-new-tokens/transformers/src/transformers/generation/utils.py", line 1136, in _validate_generated_length
raise ValueError(
ValueError: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
```
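For reviewers, a standalone sketch of the behaviour change (hedged: the helper name and signature below are illustrative; the real check lives in `GenerationMixin._validate_generated_length` and keeps the full error message shown above):
```python
def validate_generated_length(input_ids_length: int, max_length: int) -> None:
    """Sketch of the new behaviour: raise instead of only warning."""
    if input_ids_length >= max_length:
        raise ValueError(
            f"Input length of input_ids is {input_ids_length}, but `max_length` is set to "
            f"{max_length}. This can lead to unexpected behavior. You should consider "
            "increasing `max_new_tokens`."
        )

validate_generated_length(input_ids_length=7, max_length=7)  # raises ValueError, matching the traceback above
```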
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28621/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28621",
"html_url": "https://github.com/huggingface/transformers/pull/28621",
"diff_url": "https://github.com/huggingface/transformers/pull/28621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28621.patch",
"merged_at": 1707309721000
} |
https://api.github.com/repos/huggingface/transformers/issues/28620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28620/comments | https://api.github.com/repos/huggingface/transformers/issues/28620/events | https://github.com/huggingface/transformers/pull/28620 | 2,092,471,767 | PR_kwDOCUB6oc5kpZAG | 28,620 | Unused "embedding_size" in bert attention | {
"login": "amar-jay",
"id": 64834413,
"node_id": "MDQ6VXNlcjY0ODM0NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/64834413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amar-jay",
"html_url": "https://github.com/amar-jay",
"followers_url": "https://api.github.com/users/amar-jay/followers",
"following_url": "https://api.github.com/users/amar-jay/following{/other_user}",
"gists_url": "https://api.github.com/users/amar-jay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amar-jay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amar-jay/subscriptions",
"organizations_url": "https://api.github.com/users/amar-jay/orgs",
"repos_url": "https://api.github.com/users/amar-jay/repos",
"events_url": "https://api.github.com/users/amar-jay/events{/privacy}",
"received_events_url": "https://api.github.com/users/amar-jay/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @amar-jay, thanks for opening this PR! \r\n\r\nTo resolve the quality checks here, you'll need to run `make fix-copies`. The difficulty here is that, even though embedding size isn't part of the bert config - this attention class is copied in many cases throughout the library and might be protecting some hidden behaviour. \r\n\r\nThe check was added in #3257 but I'm not sure why. In the spirit of chesterton's fence let's ask @LysandreJik who added it. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | The embedding_size is not used.
# What does this PR do?
Fixes unused code in bert attention
Fixes a minor typo in code
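For context, a runnable sketch of the guard in question (hedged: reproduced from memory of `BertSelfAttention.__init__`, so treat the exact wording as approximate):
```python
from transformers import BertConfig

config = BertConfig()
# `embedding_size` is not a BertConfig attribute, so for plain BERT configs the
# second condition is always True and only the divisibility check matters.
if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
    raise ValueError(
        f"The hidden size ({config.hidden_size}) is not a multiple of the number of "
        f"attention heads ({config.num_attention_heads})"
    )
```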
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28620/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28620",
"html_url": "https://github.com/huggingface/transformers/pull/28620",
"diff_url": "https://github.com/huggingface/transformers/pull/28620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28620.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28619/comments | https://api.github.com/repos/huggingface/transformers/issues/28619/events | https://github.com/huggingface/transformers/issues/28619 | 2,092,457,798 | I_kwDOCUB6oc58uF9G | 28,619 | KOSMOS-2, finding probability distribution of the text sequence | {
"login": "snpushpi",
"id": 55248448,
"node_id": "MDQ6VXNlcjU1MjQ4NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/55248448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snpushpi",
"html_url": "https://github.com/snpushpi",
"followers_url": "https://api.github.com/users/snpushpi/followers",
"following_url": "https://api.github.com/users/snpushpi/following{/other_user}",
"gists_url": "https://api.github.com/users/snpushpi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snpushpi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snpushpi/subscriptions",
"organizations_url": "https://api.github.com/users/snpushpi/orgs",
"repos_url": "https://api.github.com/users/snpushpi/repos",
"events_url": "https://api.github.com/users/snpushpi/events{/privacy}",
"received_events_url": "https://api.github.com/users/snpushpi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh can you look at it? Thank you so much!",
"Hi, \r\n\r\nI think you are only interested in looking the logits for the parts of `prompt1 = \"An old man sitting on a bench in a public park alone, reading a book.\"`.\r\n\r\nThis part is after the token (id) of `end of image` (eoi, `64004`). So you have to find where the token id `64004` is in `inputs1[\"input_ids\"]`, and only take the parts after it (along the sequence dimension).\r\n\r\n- using `image_embeds_position_mask` works, but you don't need to having the leading 2 tokens\r\n - using `image_embeds_position_mask` includes `64004`, and the logits from that place is the logit for the next token (`An`) given the token of `eoi (64004)` and the previous image info etc.\r\n",
"Thank you, that makes sense! "
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
Google Colab with gpu environment enabled and related libraries installed
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
I am trying to extract the probability distribution of the tokens in a sequence from the model. Here is what I did:
```python
from PIL import Image
import requests
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"  # `device` was not defined in the original snippet
model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224").to(device)
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
prompt1 = "An old man sitting on a bench in a public park alone, reading a book."
url1 = "http://images.cocodataset.org/val2017/000000264535.jpg"
image1 = Image.open(requests.get(url1, stream=True).raw)
inputs1 = processor(text=prompt1, images=image1, return_tensors="pt").to(device)
model_output = model(
pixel_values=inputs1["pixel_values"],
input_ids=inputs1["input_ids"],
image_embeds=None,
image_embeds_position_mask=inputs1["image_embeds_position_mask"],
use_cache=True,
)
input_ids = processor.tokenizer(prompt1, return_tensors='pt').input_ids.to(device)  # `prompt` was undefined; use prompt1
input_ids.shape
# torch.Size([1, 14])
model_output.logits.shape
# torch.Size([1, 79, 65037])
token_logits = torch.cat([model_output.logits[:,:2,:],model_output.logits[:,67:,:]],dim=1) #Is this correct?
```
So I am trying to extract the logits of the text tokens from the model output. The goal is to eventually calculate the probability of a certain word given the previous words and the image. But I am not sure of the last line where I extracted the text token logits from the model output. I did what I did since that's what the inputs1["image_embeds_position_mask"] mask looked like, but I am not sure if that is the right thing to do. Can someone confirm that?
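For illustration, a hedged sketch of the alternative slicing discussed in the comments (it assumes the `end of image` (eoi) token id is `64004` and reuses `inputs1` / `model_output` from the snippet above — please double-check the id against the processor):
```python
# Keep only the logits after the end-of-image (eoi) token; the logit at the eoi
# position is the prediction for the first text token ("An").
eoi_id = 64004
eoi_pos = (inputs1["input_ids"][0] == eoi_id).nonzero()[-1].item()
text_token_logits = model_output.logits[:, eoi_pos:, :]
```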
### Expected behavior
explained above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28618/comments | https://api.github.com/repos/huggingface/transformers/issues/28618/events | https://github.com/huggingface/transformers/pull/28618 | 2,092,301,494 | PR_kwDOCUB6oc5koz_I | 28,618 | Fix utf-8 yaml load for marian conversion to pytorch in Windows | {
"login": "SystemPanic",
"id": 25750030,
"node_id": "MDQ6VXNlcjI1NzUwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/25750030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SystemPanic",
"html_url": "https://github.com/SystemPanic",
"followers_url": "https://api.github.com/users/SystemPanic/followers",
"following_url": "https://api.github.com/users/SystemPanic/following{/other_user}",
"gists_url": "https://api.github.com/users/SystemPanic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SystemPanic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SystemPanic/subscriptions",
"organizations_url": "https://api.github.com/users/SystemPanic/orgs",
"repos_url": "https://api.github.com/users/SystemPanic/repos",
"events_url": "https://api.github.com/users/SystemPanic/events{/privacy}",
"received_events_url": "https://api.github.com/users/SystemPanic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker \r\n\r\nYes, you can reproduce it with following Marian yaml file (Github doesn't allow .yml upload, so **change the file from .txt to .yml after download**):\r\n\r\n```\r\nwith open('opusTCv20210807.spm32k-spm32k.vocab.yml') as f:\r\n print(yaml.load(f, Loader=yaml.BaseLoader))\r\n```\r\n\r\nand with the simple pull applied:\r\n\r\n```\r\nwith open('opusTCv20210807.spm32k-spm32k.vocab.yml', encoding='utf-8') as f:\r\n print(yaml.load(f, Loader=yaml.BaseLoader))\r\n```\r\n\r\n[opusTCv20210807.spm32k-spm32k.vocab.txt](https://github.com/huggingface/transformers/files/14026656/opusTCv20210807.spm32k-spm32k.vocab.txt)\r\n",
"But I am not sure I follow why you are changing the file type? ",
"@ArthurZucker \r\n\r\nI already said it, Github has not included the .yml extension in their supported file types for the \"Attach files\" button.\r\n\r\nYou can check if you want, try to attach a .yml file in a Github comment, you can't.\r\n\r\nJust renaming to .txt it's the simpler way to attach it. I don't get the point of confusion with this...",
"Sorry I got confused! 😓 alright thanks for providing a file, I can't reproduce your issue as it works out of the box for me",
"@ArthurZucker \r\n\r\nThat's not possible. I forgot to mention that i'm using Windows 10.\r\n\r\ntransformers-cli env:\r\n\r\n```\r\n- `transformers` version: 4.37.1\r\n- Platform: Windows-10-10.0.19045-SP0\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.19.4\r\n- Safetensors version: 0.4.2\r\n- Accelerate version: 0.22.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.0+cu118 (True)\r\n- Tensorflow version (GPU?): 2.10.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n```\r\n\r\nOS default encoding is: Windows-1252 (CP-1252)",
"Thanks you!"
] | 1,705 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
Fix yaml load for yaml files with UTF-8 encoding in convert_marian_to_pytorch.py
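For clarity, a minimal sketch of the change (hedged: assuming the loader in `convert_marian_to_pytorch.py` looks roughly like this — only the explicit `encoding` argument is new):
```python
import yaml

def load_yaml(path):
    # Open with an explicit UTF-8 encoding so that Windows' default
    # cp1252 codec is not used for the Marian vocab .yml files.
    with open(path, encoding="utf-8") as f:
        return yaml.load(f, Loader=yaml.BaseLoader)
```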
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28618/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28618",
"html_url": "https://github.com/huggingface/transformers/pull/28618",
"diff_url": "https://github.com/huggingface/transformers/pull/28618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28618.patch",
"merged_at": 1707376995000
} |
https://api.github.com/repos/huggingface/transformers/issues/28617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28617/comments | https://api.github.com/repos/huggingface/transformers/issues/28617/events | https://github.com/huggingface/transformers/pull/28617 | 2,092,276,191 | PR_kwDOCUB6oc5kovFV | 28,617 | [`Llava`] Update convert_llava_weights_to_hf.py script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | Based on discussion in the issue https://github.com/huggingface/transformers/issues/28597
Fixes: https://github.com/huggingface/transformers/issues/28597
* Remove config update of adding padding to `vocab_size` and `text_config.vocab_size` which causes `ValueError` exception.
* Remove keys that ends with `inv_freq` from the state dict.
* Add examples and instructions for creating `model_state_dict.bin` that can be used by the script.
```console
$ python src/transformers/models/llava/convert_llava_weights_to_hf.py -h
usage: convert_llava_weights_to_hf.py [-h] [--text_model_id TEXT_MODEL_ID] [--vision_model_id VISION_MODEL_ID] [--output_hub_path OUTPUT_HUB_PATH]
[--old_state_dict_id OLD_STATE_DICT_ID]
optional arguments:
-h, --help show this help message and exit
--text_model_id TEXT_MODEL_ID
Hub location of the text model
--vision_model_id VISION_MODEL_ID
Hub location of the vision model
--output_hub_path OUTPUT_HUB_PATH
Location on the hub of the converted model
--old_state_dict_id OLD_STATE_DICT_ID
Location on the hub of the raw state dict of the original model. The filename needs to be `model_state_dict.bin`
Example:
python transformers/src/transformers/models/llava/convert_llava_weights_to_hf.py --text_model_id lmsys/vicuna-7b-v1.5 --vision_model_id openai/clip-vit-large-patch14-336 --output_hub_path org/llava-v1.5-7b-conv --old_state_dict_id liuhaotian/llava-v1.5-7b
Example for creating the old state dict file with Python:
import torch
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
# load model
kwargs = {"device_map": "auto", "torch_dtype": torch.float16}
model = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", low_cpu_mem_usage=True, **kwargs)
# load vision tower
model.get_vision_tower().load_model()
# Save state dict
torch.save(model.state_dict(), "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada if you can please review
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28617/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28617",
"html_url": "https://github.com/huggingface/transformers/pull/28617",
"diff_url": "https://github.com/huggingface/transformers/pull/28617.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28617.patch",
"merged_at": 1705933698000
} |
https://api.github.com/repos/huggingface/transformers/issues/28616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28616/comments | https://api.github.com/repos/huggingface/transformers/issues/28616/events | https://github.com/huggingface/transformers/pull/28616 | 2,092,273,218 | PR_kwDOCUB6oc5kougi | 28,616 | Token healing | {
"login": "Ayenem",
"id": 50707385,
"node_id": "MDQ6VXNlcjUwNzA3Mzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/50707385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ayenem",
"html_url": "https://github.com/Ayenem",
"followers_url": "https://api.github.com/users/Ayenem/followers",
"following_url": "https://api.github.com/users/Ayenem/following{/other_user}",
"gists_url": "https://api.github.com/users/Ayenem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ayenem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ayenem/subscriptions",
"organizations_url": "https://api.github.com/users/Ayenem/orgs",
"repos_url": "https://api.github.com/users/Ayenem/repos",
"events_url": "https://api.github.com/users/Ayenem/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ayenem/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Circleci tests seem to fail, `tests_flax` for example outputs:\r\n\r\n> ImportError while importing test module '/home/circleci/transformers/tests/generation/test_token_healing.py'.\r\n> Hint: make sure your test modules/packages have valid Python names.\r\n> Traceback:\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/importlib/__init__.py:127: in import_module\r\n> return _bootstrap._gcd_import(name[level:], package, level)\r\n> tests/generation/test_token_healing.py:23: in <module>\r\n> class TokenHealingTestCase(unittest.TestCase):\r\n> tests/generation/test_token_healing.py:26: in TokenHealingTestCase\r\n> completion_model = AutoModelForCausalLM.from_pretrained(\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/utils/import_utils.py:1304: in __getattribute__\r\n> requires_backends(cls, cls._backends)\r\n> ../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/utils/import_utils.py:1292: in requires_backends\r\n> raise ImportError(\"\".join(failed))\r\n> E ImportError: \r\n> E AutoModelForCausalLM requires the PyTorch library but it was not found in your environment.\r\n\r\nI don't know how to specify to the Circleci environment that it should install pytorch. This is my first PR, so I would appreciate some guidance 🙏 ",
"> Thank you for opening this cool PR! I have written a few comments, but they should be easy to solve 🤗\r\n\r\nThank you for your detailed reviews and for taking the time to explain their reasons 🙏 Hopefully, they're all fixed now.\r\n\r\nA new headscratcher shows up:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/83383/workflows/2d8b4ca9-abb5-4be1-a5fc-163e3a0d78d7/jobs/1075117?invite=true#step-115-22\r\n\r\nI have `pygtrie` specified here:\r\nhttps://github.com/huggingface/transformers/blob/ce38ed5b9a4c5c9e456b7910caf6a7ebbec235e2/setup.py#L148\r\nand here:\r\nhttps://github.com/huggingface/transformers/blob/ce38ed5b9a4c5c9e456b7910caf6a7ebbec235e2/src/transformers/dependency_versions_table.py#L54\r\nI tried adding `pygtrie >= 2.5.0` to `_test_requirements.txt` under `examples/{flax, pytorch, tensorflow}` but that didn't fix it. Maybe there is something I should do at `transformers/src/transformers/utils/import_utils.py`? @gante ",
"> Thank you for iterating 🤗\r\n> \r\n> To fix the dependency issue, I believe you need to add `pygtrie` to `extras[\"torch\"]` in `setup.py` (L262). Otherwise, `pygtrie` is not installed -- being in the list where you added the dependency is not enough to install the dependency :)\r\n\r\nThank you for the follow-up! I applied your suggestions. CI is still breaking on the pygtrie import unfortunately. I'll try to think about it some more this weekend.",
"@Ayenem hope you don't mind, moving a few minor things around to make our CI happy 🤗 ",
"> @Ayenem hope you don't mind, moving a few minor things around to make our CI happy 🤗\r\n\r\nThank you for taking the time!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"`examples_flax` is failing for reasons unrelated to this PR, I'm going to see if I can open a separate PR to fix it :)",
"Ok, actually, it has to be fixed in this PR: `pygtrie` has to be added to `/examples/flax/_tests_requirements.txt` :( \r\n\r\nAs for the other failures, I recommend the following:\r\n1. Rebase with `main`\r\n2. run `make fixup` in the `transformers` folder\r\n3. commit the resulting changes\r\n\r\nIf issues persist, I'll dive into the failures :)",
"uhmmm something has gone wrong, the diff shouldn't be 18k lines long 😅 ",
"> uhmmm something has gone wrong, the diff shouldn't be 18k lines long 😅\r\n\r\nIndeed...I did a `git fetch upstream` followed by `git rebase upstream/main` then solved conflicts and pushed. I suppose I misinterpreted what you meant by rebasing with main 😓 \r\nI'll `git revert` and push to undo the last two commits, then just `git rebase main` from there.",
"@Ayenem That should be it: \r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n(handle merge conflicts)\r\ngit push origin your_branch -f\r\n```\r\n\r\npossibly something went wrong in the merge conflicts stage?",
"@Ayenem you're getting the best of CI problems, hehe",
"> @Ayenem you're getting the best of CI problems, hehe\r\n\r\nLucky me 😄 Rebase should've worked as expected now 🤞 ",
"@Ayenem looking at the test errors, other than sorting the merge conflicts (which should fix most of the problems), the new test needs `@require_auto_gptq`. Alternatively, a smaller model like `gpt2` can be used.",
"> @Ayenem looking at the test errors, other than sorting the merge conflicts (which should fix most of the problems), the new test needs `@require_auto_gptq`. Alternatively, a smaller model like `gpt2` can be used.\r\n\r\nI don't seem to have access to those conflicts locally and I don't have write access for resolving them from github 😕 I don't even know why there are conflicts on files I haven't touched tbh\r\n![image](https://github.com/huggingface/transformers/assets/50707385/6a210a82-48d8-4d61-b263-4c8268bd4b1c)",
"@Ayenem I have no more suggestions as well. When this happens, it often means something unexpected happened at rebase time. Most (or even all) CI errors we're seeing here are also already solved on `main` 😞 \r\n\r\nMay I suggest to:\r\n1. Close this PR\r\n2. Update `main` on your fork\r\n3. Create a new branch on your fork, past these changes, and open a new PR?\r\n\r\nAgain, apologies for all this additional work 🤗 "
] | 1,705 | 1,708 | 1,708 | NONE | null | # What does this PR do?
Token healing rectifies the token boundary bias in greedy tokenization. It does this by trimming and regrowing the prompt to better align with the model's tokenizer, thus enhancing generation quality. The improvement is clearest with completion models.
Token boundary bias is a silent performance killer that doesn't seem to be very well known. It has a clear impact on completion quality, though I'm not sure where it would fit as a transformers feature.
A more thorough explanation of the problem: [The Art of Prompt Design: Prompt Boundaries and Token Healing | by Scott Lundberg](https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38).
### Motivation
Given a completion prompt with a partial url ending with `:`, the model might have seen the expected completion `://` as a _single_ token in training. However, the prompt's tail token `:` tells it that the next token is not `//`, and so it generates a wrong completion. Such errors compound in auto-regressive language models.
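To make the boundary bias concrete, here is an illustrative (hedged) snippet — the tokenizer choice and the exact splits are examples only and are not part of this PR:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
# A completion prompt that ends mid-"token":
print(tok.tokenize("The link is <a href=http:"))    # likely ends with a lone ":" token
print(tok.tokenize("The link is <a href=http://"))  # "://" is often a single BPE token
# Token healing trims the trailing ":" from the prompt and constrains the first
# generated token to start with ":", so the model is free to produce "://".
```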
Fixes [#28346](https://github.com/huggingface/transformers/issues/28346)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
- @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28616",
"html_url": "https://github.com/huggingface/transformers/pull/28616",
"diff_url": "https://github.com/huggingface/transformers/pull/28616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28616.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28615/comments | https://api.github.com/repos/huggingface/transformers/issues/28615/events | https://github.com/huggingface/transformers/pull/28615 | 2,092,050,819 | PR_kwDOCUB6oc5koCAa | 28,615 | enable graident checkpointing in DetaObjectDetection and add tests in Swin/Donut_Swin | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also found missing part that transfomers team did not put.\r\n\r\nhttps://github.com/jozhang97/DETA/blob/f47e50efeb194357fb93a119f196a2e485fffdb5/models/deformable_transformer.py#L225\r\n\r\nAlso modified donut_swin and swin model for passing CI.",
"```\r\nroot@1c4a6f3ab7da:/mnt/nas2/users/sbchoi/transformers# pytest tests/models/deta/\r\n==================================================================== test session starts ====================================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, anyio-4.2.0, hydra-core-1.3.2\r\ncollected 155 items\r\n\r\ntests/models/deta/test_image_processing_deta.py ....ss........ [ 9%]\r\ntests/models/deta/test_modeling_deta.py .........s...............ssssss.ssssssssss......s...............s......s.............sssssssssss.ssssssssssss [ 79%]\r\nsss..s..s.......s.s.ss.sss....ss [100%]\r\n\r\n===================================================================== warnings summary ======================================================================\r\n../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:462: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. 
You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:465: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_gradient_checkpointing\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_training_gradient_checkpointing\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\r\n warnings.warn(\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1968: UserWarning: Use of index_put_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[indices] = tensor (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/TensorAdvancedIndexing.cpp:708.)\r\n t[t != t] = 0\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1968: UserWarning: Use of masked_fill_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[mask] = scalar (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/cuda/Indexing.cu:1564.)\r\n t[t != t] = 0\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n======================================================= 96 passed, 59 skipped, 12 warnings in 56.64s ========================================================\r\n\r\n```",
"```\r\nroot@1c4a6f3ab7da:/mnt/nas2/users/sbchoi/transformers# pytest tests/models/swin/\r\n==================================================================== test session starts ====================================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, anyio-4.2.0, hydra-core-1.3.2\r\ncollected 189 items\r\n\r\ntests/models/swin/test_modeling_swin.py .........ssssss.ssssssssss...............s.......s.......ssssssss.sssssssssssssssssss.s........s.....sss....s [ 57%]\r\n........ [ 61%]\r\ntests/models/swin/test_modeling_tf_swin.py ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [100%]\r\n\r\n===================================================================== warnings summary ======================================================================\r\n../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:462: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:465: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. 
You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_fast_init_context_manager\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_for_masked_image_modeling\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_gradient_checkpointing\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/swin/modeling_swin.py:173: FutureWarning: logits attribute is deprecated and will be removed in version 5 of Transformers. Please use the reconstruction attribute to retrieve the final output instead.\r\n warnings.warn(\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_gradient_checkpointing\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_training_gradient_checkpointing\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\r\n warnings.warn(\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_pipeline_image_classification\r\n /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:105: UserWarning: Repo card metadata block was not found. Setting CardData to empty.\r\n warnings.warn(\"Repo card metadata block was not found. Setting CardData to empty.\")\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_torch_fx\r\n /opt/conda/lib/python3.10/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'\r\n torch.has_cuda,\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_torch_fx\r\n /opt/conda/lib/python3.10/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'\r\n torch.has_cudnn,\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_torch_fx\r\n /opt/conda/lib/python3.10/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'\r\n torch.has_mps,\r\n\r\ntests/models/swin/test_modeling_swin.py::SwinModelTest::test_torch_fx\r\n /opt/conda/lib/python3.10/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'\r\n torch.has_mkldnn,\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n======================================================= 66 passed, 123 skipped, 17 warnings in 29.65s =======================================================\r\n```",
"```\r\nroot@1c4a6f3ab7da:/mnt/nas2/users/sbchoi/transformers# pytest tests/models/donut/\r\n==================================================================== test session starts ====================================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, anyio-4.2.0, hydra-core-1.3.2\r\ncollected 117 items\r\n\r\ntests/models/donut/test_image_processing_donut.py ........... [ 9%]\r\ntests/models/donut/test_modeling_donut_swin.py ........ssssss..sssssssss.....................s........sssssssssssssssssssssssssss.s........s.....sss. [ 96%]\r\n...\r\n```",
"@amyeroberts Added test function for each corresponding models!",
"@amyeroberts I have fixed as you suggested. Overall it has been same as https://github.com/huggingface/transformers/pull/28686 with additional modification. Only modifying as the https://github.com/huggingface/transformers/pull/28686 will pass the CI but when we try in real code it will failure because there are gradient_checkpointing function in backbone.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts I think it seems good after merging most recent PR?",
"@amyeroberts all fixed! but could you rerun the CI? It seems just CI got failed to load extension in torchvision",
"@SangbumChoi Looks great! Yes, we had quite a few issues yesterday because of a new release of pytorch which broke everything 😢 A fix has been merged into main, which should resolve the version issues. Could you rebase? This should update the CI images and trigger another run. ",
"@amyeroberts all done!",
"@SangbumChoi Thanks for iterating and another great contribution! "
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28615/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28615",
"html_url": "https://github.com/huggingface/transformers/pull/28615",
"diff_url": "https://github.com/huggingface/transformers/pull/28615.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28615.patch",
"merged_at": 1706800065000
} |
https://api.github.com/repos/huggingface/transformers/issues/28614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28614/comments | https://api.github.com/repos/huggingface/transformers/issues/28614/events | https://github.com/huggingface/transformers/issues/28614 | 2,091,954,715 | I_kwDOCUB6oc58sLIb | 28,614 | Training with FSDP slows down the convergence speed. | {
"login": "yuangpeng",
"id": 57125678,
"node_id": "MDQ6VXNlcjU3MTI1Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57125678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuangpeng",
"html_url": "https://github.com/yuangpeng",
"followers_url": "https://api.github.com/users/yuangpeng/followers",
"following_url": "https://api.github.com/users/yuangpeng/following{/other_user}",
"gists_url": "https://api.github.com/users/yuangpeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuangpeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuangpeng/subscriptions",
"organizations_url": "https://api.github.com/users/yuangpeng/orgs",
"repos_url": "https://api.github.com/users/yuangpeng/repos",
"events_url": "https://api.github.com/users/yuangpeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuangpeng/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @pacman100 ",
"Although I use my own task and use my own dataset, I only added a few modules and training with full unfreeze. If you need a detailed script, I will delete the private content before uploading it in these days. @pacman100 ",
"Hope it can solved soon!",
"Can you provide a minimal reproducer for this. Using the latest releases, everything is as expected. Below is the fine-tuning of Mistral on ultrachat SFT dataset. \r\n\r\n![Screenshot 2023-12-27 at 8 48 12 PM (1)](https://github.com/huggingface/transformers/assets/13534540/1865c960-a78a-46bd-9ee5-2a12e0ba68d2)\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.143-2-velinux1-amd64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.23.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'NO_PREFETCH', 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 2, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wrap': '', 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: A800
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pac
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
One group of identical experiments is shown in purple and green, and another group of identical experiments in yellow and blue. Within each group the other parameters are the same; one run uses FSDP and the other uses DeepSpeed.
FSDP and DeepSpeed have similar losses in the early steps, with differences appearing after about 20 steps. It's not that FSDP fails to converge; it does converge, just more slowly.
fsdp config:
```
TrainingArguments.fsdp="shard_grad_op auto_wrap"
TrainingArguments.fsdp_config=dict(fsdp_transformer_layer_cls_to_wrap=["LlamaDecoderLayer"])
```
deepspeed config:
```
{
"bf16": {
"enabled": true
},
"train_micro_batch_size_per_gpu": "auto",
"zero_optimization": {
"stage": 2,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto"
}
}
```
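For reference, a minimal sketch of how these two configurations are wired into the `Trainer` for the comparison (the output dirs and the `ds_config.json` file name are placeholders; all other training arguments are identical between the two runs):
```python
from transformers import TrainingArguments

# FSDP run, using the settings listed above
fsdp_args = TrainingArguments(
    output_dir="out_fsdp",
    bf16=True,
    fsdp="shard_grad_op auto_wrap",
    fsdp_config=dict(fsdp_transformer_layer_cls_to_wrap=["LlamaDecoderLayer"]),
)

# DeepSpeed run, pointing at the JSON config above saved as ds_config.json
ds_args = TrainingArguments(
    output_dir="out_deepspeed",
    bf16=True,
    deepspeed="ds_config.json",
)
```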
<img width="1442" alt="image" src="https://github.com/huggingface/transformers/assets/57125678/1894247d-7e5f-423f-83e8-72aa14cfb4e6">
### Expected behavior
Expect similar loss curves using fsdp and deepspeed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28614/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/28614/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28613/comments | https://api.github.com/repos/huggingface/transformers/issues/28613/events | https://github.com/huggingface/transformers/issues/28613 | 2,091,937,176 | I_kwDOCUB6oc58sG2Y | 28,613 | Bug in GPT NeoX Implementation | {
"login": "andersonbcdefg",
"id": 17210823,
"node_id": "MDQ6VXNlcjE3MjEwODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/17210823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersonbcdefg",
"html_url": "https://github.com/andersonbcdefg",
"followers_url": "https://api.github.com/users/andersonbcdefg/followers",
"following_url": "https://api.github.com/users/andersonbcdefg/following{/other_user}",
"gists_url": "https://api.github.com/users/andersonbcdefg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersonbcdefg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersonbcdefg/subscriptions",
"organizations_url": "https://api.github.com/users/andersonbcdefg/orgs",
"repos_url": "https://api.github.com/users/andersonbcdefg/repos",
"events_url": "https://api.github.com/users/andersonbcdefg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersonbcdefg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice catch ! #28645 should fix it"
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
GPT-Neo-X does not have a "q_proj" module, so the following lines that check for the dtype raise an error.
```
input_dtype = query.dtype
if input_dtype == torch.float32:
    if torch.is_autocast_enabled():
        target_dtype = torch.get_autocast_gpu_dtype()
    # Handle the case where the model is quantized
    elif hasattr(self.config, "_pre_quantization_dtype"):
        target_dtype = self.config._pre_quantization_dtype
    else:
        target_dtype = self.q_proj.weight.dtype
```
This is in `transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py`
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Initialize a GPT Neo X model with flash attention
model = AutoModel.from_pretrained(model_name_or_path, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16)
2. Try to run a forward pass. This causes the following error:
AttributeError: 'GPTNeoXFlashAttention2' object has no attribute 'q_proj'
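A minimal sketch of the reproduction above (the checkpoint name is only an illustrative GPT-NeoX model; flash-attn and a CUDA GPU are assumed to be available):
```python
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "EleutherAI/pythia-70m"  # illustrative GPT-NeoX checkpoint
model = AutoModel.from_pretrained(
    checkpoint,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

inputs = tokenizer("Hello world", return_tensors="pt").to("cuda")
outputs = model(**inputs)  # reported to raise AttributeError: 'GPTNeoXFlashAttention2' object has no attribute 'q_proj'
```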
### Expected behavior
The forward pass should work, or at least fail for a reason other than a reference to a module that the given model does not actually have. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28613/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28612/comments | https://api.github.com/repos/huggingface/transformers/issues/28612/events | https://github.com/huggingface/transformers/pull/28612 | 2,091,894,574 | PR_kwDOCUB6oc5knigy | 28,612 | Update README_es.md | {
"login": "vladydev3",
"id": 82735444,
"node_id": "MDQ6VXNlcjgyNzM1NDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/82735444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vladydev3",
"html_url": "https://github.com/vladydev3",
"followers_url": "https://api.github.com/users/vladydev3/followers",
"following_url": "https://api.github.com/users/vladydev3/following{/other_user}",
"gists_url": "https://api.github.com/users/vladydev3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vladydev3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vladydev3/subscriptions",
"organizations_url": "https://api.github.com/users/vladydev3/orgs",
"repos_url": "https://api.github.com/users/vladydev3/repos",
"events_url": "https://api.github.com/users/vladydev3/events{/privacy}",
"received_events_url": "https://api.github.com/users/vladydev3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @osanseviero for verifying the Spanish? ",
"Thanks for updating and your contribution @vladydev3! "
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | Fixing grammatical errors in the text | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28612/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28612",
"html_url": "https://github.com/huggingface/transformers/pull/28612",
"diff_url": "https://github.com/huggingface/transformers/pull/28612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28612.patch",
"merged_at": 1706044141000
} |
https://api.github.com/repos/huggingface/transformers/issues/28611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28611/comments | https://api.github.com/repos/huggingface/transformers/issues/28611/events | https://github.com/huggingface/transformers/issues/28611 | 2,091,330,691 | I_kwDOCUB6oc58pyyD | 28,611 | PatchTST and PatchTSMixer categorical features and exogenous variables | {
"login": "chrisconst2",
"id": 101289285,
"node_id": "U_kgDOBgmNRQ",
"avatar_url": "https://avatars.githubusercontent.com/u/101289285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisconst2",
"html_url": "https://github.com/chrisconst2",
"followers_url": "https://api.github.com/users/chrisconst2/followers",
"following_url": "https://api.github.com/users/chrisconst2/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisconst2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisconst2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisconst2/subscriptions",
"organizations_url": "https://api.github.com/users/chrisconst2/orgs",
"repos_url": "https://api.github.com/users/chrisconst2/repos",
"events_url": "https://api.github.com/users/chrisconst2/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisconst2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 6462336551,
"node_id": "LA_kwDOCUB6oc8AAAABgS9uJw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Time%20Series",
"name": "Time Series",
"color": "C1E56A",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"@kashif ",
"@chrisconst2 so due to the nature of patching the only potential covariates that can be included with PatchTST and I believe PatchTSMixer are going to be static features and static real-valued covariates... I believe the method is not too flexible in being able to accommodate these extra bells and whistle... what kind of exogenous variables were you thinking of using?"
] | 1,705 | 1,705 | null | NONE | null | ### Feature request
Include categorical features and exogenous variables as input for the PatchTST and PatchTSMixer timeseries foundation models
### Motivation
Categorical features and exogenous variables are key components in timeseries modelling
### Your contribution
- | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28611/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28611/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28610/comments | https://api.github.com/repos/huggingface/transformers/issues/28610/events | https://github.com/huggingface/transformers/issues/28610 | 2,091,211,821 | I_kwDOCUB6oc58pVwt | 28,610 | ONNX export failure for models invoking SDPA attention | {
"login": "BowenBao",
"id": 9376104,
"node_id": "MDQ6VXNlcjkzNzYxMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9376104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BowenBao",
"html_url": "https://github.com/BowenBao",
"followers_url": "https://api.github.com/users/BowenBao/followers",
"following_url": "https://api.github.com/users/BowenBao/following{/other_user}",
"gists_url": "https://api.github.com/users/BowenBao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BowenBao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BowenBao/subscriptions",
"organizations_url": "https://api.github.com/users/BowenBao/orgs",
"repos_url": "https://api.github.com/users/BowenBao/repos",
"events_url": "https://api.github.com/users/BowenBao/events{/privacy}",
"received_events_url": "https://api.github.com/users/BowenBao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @fxmarty ",
"Thank you for the ping, thank you @BowenBao. cc @drisspg and linking relevant issues as well: https://github.com/pytorch/pytorch/issues/110681 & https://github.com/pytorch/pytorch/issues/108108\r\n\r\nSolution 3. SDPA tracing without attention_mask is I think not possible due to the data-dependent controlflow here: https://github.com/huggingface/transformers/blob/e201864bcb147ce3c2374605fac9f0e5043f7e43/src/transformers/models/llama/modeling_llama.py#L735 `q_len > 1`. The reason for this controlflow is that SDPA attention_mask from `is_causal` is top-left aligned.\r\n\r\nThe same issue exists when tracing SDPA with symbolic_trace or with dynamo + fullgraph=True (https://pytorch.slack.com/archives/C033H6DJSJU/p1702029349053049?thread_ts=1694790001.945579&cid=C033H6DJSJU).\r\n\r\nSolution 1. is what the error suggests. I don't think it would be easy to implement (would need to have `torch.jit.is_tracing()` controlflow that does magic on the model).\r\n\r\nSolution 2. is probably the most doable (we would need to look at pad tokens). Currently we try as much as possible to pass a `attn_mask=None` since SDPA is able to dispatch to some mem-efficient attention & flash attention path only in the case. We already avoid setting the `attention_mask` to None in case we are tracing: https://github.com/huggingface/transformers/blob/e201864bcb147ce3c2374605fac9f0e5043f7e43/src/transformers/modeling_attn_mask_utils.py#L371-L381\r\n\r\n",
"> Thank you for the ping, thank you @BowenBao. cc @drisspg and linking relevant issues as well: [pytorch/pytorch#110681](https://github.com/pytorch/pytorch/issues/110681) & [pytorch/pytorch#108108](https://github.com/pytorch/pytorch/issues/108108)\r\n> \r\n> Solution 3. SDPA tracing without attention_mask is I think not possible due to the data-dependent controlflow here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/e201864bcb147ce3c2374605fac9f0e5043f7e43/src/transformers/models/llama/modeling_llama.py#L735\r\n> \r\n> `q_len > 1`. The reason for this controlflow is that SDPA attention_mask from `is_causal` is top-left aligned.\r\n> The same issue exists when tracing SDPA with symbolic_trace or with dynamo + fullgraph=True (https://pytorch.slack.com/archives/C033H6DJSJU/p1702029349053049?thread_ts=1694790001.945579&cid=C033H6DJSJU).\r\n> \r\n> Solution 1. is what the error suggests. I don't think it would be easy to implement (would need to have `torch.jit.is_tracing()` controlflow that does magic on the model).\r\n> \r\n> Solution 2. is probably the most doable (we would need to look at pad tokens). Currently we try as much as possible to pass a `attn_mask=None` since SDPA is able to dispatch to some mem-efficient attention & flash attention path only in the case. We already avoid setting the `attention_mask` to None in case we are tracing:\r\n> \r\n> https://github.com/huggingface/transformers/blob/e201864bcb147ce3c2374605fac9f0e5043f7e43/src/transformers/modeling_attn_mask_utils.py#L371-L381\r\n\r\nHi @fxmarty why Solution 1 is not easy to implement? I was thining something like\r\n\r\n```python\r\nif torch.jit.is_tracing():\r\n attn_mask = old_attention()\r\nelse:\r\n attn_mask = new_sdpa_attention()\r\n```",
"Thanks for your reply and context @fxmarty \r\n\r\nI have a local fix using solution 1, will put up a PR to unblock exporter in the short term, while waiting on https://github.com/pytorch/pytorch/issues/108108 .",
"@fxmarty I might be wrong but, installing from the latest source, I still have the same issue for BART based model export without attention_mask. Is is something planned to be supported?"
] | 1,705 | 1,708 | 1,707 | CONTRIBUTOR | null | > ValueError: Attention using SDPA can not be traced with torch.jit.trace when no attention_mask is provided. To solve this issue, please either load your model with the argument `attn_implementation="eager"` or pass an attention_mask input when tracing the model.
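For context, a minimal sketch of the kind of trace call that hits this error (the tiny random Llama checkpoint is used purely for illustration; any model whose default attention implementation is SDPA behaves the same way):
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-LlamaForCausalLM")
model.eval()
input_ids = torch.randint(0, model.config.vocab_size, (1, 8))

# Tracing with input_ids only (no attention_mask) raises the ValueError quoted above.
traced = torch.jit.trace(model, (input_ids,), strict=False)
```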
There has been some discussion about its possible resolutions in the ONNX exporter team. I'd like to post an issue here as well to seek advice and preferences.
1. Check `torch.jit.is_tracing()` and fallback to eager attn implementation if needed.
2. Create `attention_mask` before passing to SDPA if it is None.
3. Support SDPA tracing w/o attention_mask (not sure how feasible this is). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28610/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28609/comments | https://api.github.com/repos/huggingface/transformers/issues/28609/events | https://github.com/huggingface/transformers/issues/28609 | 2,090,845,175 | I_kwDOCUB6oc58n8P3 | 28,609 | Code crashes without errors when importing Trainer in TPU context | {
"login": "samuele-bortolato",
"id": 81489249,
"node_id": "MDQ6VXNlcjgxNDg5MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/81489249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuele-bortolato",
"html_url": "https://github.com/samuele-bortolato",
"followers_url": "https://api.github.com/users/samuele-bortolato/followers",
"following_url": "https://api.github.com/users/samuele-bortolato/following{/other_user}",
"gists_url": "https://api.github.com/users/samuele-bortolato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuele-bortolato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuele-bortolato/subscriptions",
"organizations_url": "https://api.github.com/users/samuele-bortolato/orgs",
"repos_url": "https://api.github.com/users/samuele-bortolato/repos",
"events_url": "https://api.github.com/users/samuele-bortolato/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuele-bortolato/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I would like to work on this\n",
"I have the same problem.",
"having same kaggle issue",
"Gentle ping @muellerzr "
] | 1,705 | 1,708 | null | NONE | null | ### System Info
I'm working on Kaggle with TPU enabled (TPU VM v3-8), running !transformers-cli env returns
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:642] File already exists in database: tsl/profiler/protobuf/trace_events.proto
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1986] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
https://symbolize.stripped_domain/r/?trace=7a80dd07fd3c,7a80dd030fcf,5ab82e3a7b8f&map=
*** SIGABRT received by PID 367 (TID 367) on cpu 95 from PID 367; stack trace: ***
PC: @ 0x7a80dd07fd3c (unknown) (unknown)
@ 0x7a7f654bba19 928 (unknown)
@ 0x7a80dd030fd0 (unknown) (unknown)
@ 0x5ab82e3a7b90 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7a80dd07fd3c,7a7f654bba18,7a80dd030fcf,5ab82e3a7b8f&map=310b7ae7682f84c5c576a0b0030121f2:7a7f56a00000-7a7f656d11c0
E0119 15:49:22.169993 367 coredump_hook.cc:447] RAW: Remote crash data gathering hook invoked.
E0119 15:49:22.170011 367 client.cc:272] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0119 15:49:22.170016 367 coredump_hook.cc:542] RAW: Sending fingerprint to remote end.
E0119 15:49:22.170041 367 coredump_hook.cc:551] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] stat failed on crash reporting socket /var/google/services/logmanagerd/remote_coredump.socket (Is the listener running?): No such file or directory
E0119 15:49:22.170050 367 coredump_hook.cc:603] RAW: Dumping core locally.
E0119 15:50:17.482782 367 process_state.cc:808] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
Importing and printing manually
```
import torch_xla
print(torch_xla.__version__)
```
2.1.0+libtpu
```
import torch
print(torch.__version__)
```
2.1.0+cu121
```
import transformers
print(transformers.__version__)
```
4.36.2
### Who can help?
@muellerzr @stevhliu
I have been trying to port my code to TPU, but cannot manage to import the libraries.
In my code (written in pytorch) I use the transformer library to load some pretrained LLMs and I subclassed the Trainer class to train some custom models with RL.
The code works perfectly fine on GPU, but I can't manage to make it work on TPU: it keeps crashing without returning any error. The documentation on how to use TPUs in the transformers library with a torch backend is still not present (two years after the page was created in the documentation https://huggingface.co/docs/transformers/v4.21.3/en/perf_train_tpu), so I have no idea if I skipped any necessary step.
While the transformers library itself imports without problems, the whole session crashes when I try to import the Trainer class.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch_xla
print(torch_xla.__version__)
import torch
print(torch.__version__)
import transformers
print(transformers.__version__)
from transformers import Trainer
```
output:
->2.1.0+libtpu
->2.1.0+cu121
->4.36.2
->(crash session without outputs)
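As a debugging aid (not a fix), Python's built-in `faulthandler` can at least dump the interpreter stacks when the process dies on a fatal signal instead of exiting silently; a sketch:
```python
import faulthandler
import sys

# Dump Python tracebacks on fatal signals (SIGSEGV, SIGABRT, ...) before the session dies.
faulthandler.enable(file=sys.stderr, all_threads=True)

from transformers import Trainer  # the import that crashes the session above
```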
### Expected behavior
It should either import the library or throw an error, not crash the whole session without a hint. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28609/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28609/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28608/comments | https://api.github.com/repos/huggingface/transformers/issues/28608/events | https://github.com/huggingface/transformers/pull/28608 | 2,090,793,303 | PR_kwDOCUB6oc5kj3bE | 28,608 | [`Test tokenizers`] DO NOT MERGE | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
tests `tokenizers==0.15.1rc1` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28608/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28608",
"html_url": "https://github.com/huggingface/transformers/pull/28608",
"diff_url": "https://github.com/huggingface/transformers/pull/28608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28608.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28607/comments | https://api.github.com/repos/huggingface/transformers/issues/28607/events | https://github.com/huggingface/transformers/pull/28607 | 2,090,724,894 | PR_kwDOCUB6oc5kjogb | 28,607 | Generate: deprecate old src imports | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | MEMBER | null | # What does this PR do?
We have 3 thin wrappers for `generate`, one for each framework, whose sole purpose is to import the mixin from `src/transformers/generation(_flax/_tf)_utils.py`. In other words, to import from `src` according to the codebase before [this PR](https://github.com/huggingface/transformers/pull/20096).
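For illustration, the kind of `src`-path import these thin wrappers keep working (module names as they were before and after the generation refactor):
```python
# Old-style import, only kept alive by the thin wrapper module:
from transformers.generation_utils import GenerationMixin  # noqa: F401

# Location after the refactor:
from transformers.generation import GenerationMixin  # noqa: F401
```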
Since this is a `src` import (and not a `from transformers import X`), I believe this can be safely removed before v5. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28607",
"html_url": "https://github.com/huggingface/transformers/pull/28607",
"diff_url": "https://github.com/huggingface/transformers/pull/28607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28607.patch",
"merged_at": 1706370859000
} |
https://api.github.com/repos/huggingface/transformers/issues/28606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28606/comments | https://api.github.com/repos/huggingface/transformers/issues/28606/events | https://github.com/huggingface/transformers/issues/28606 | 2,090,708,329 | I_kwDOCUB6oc58na1p | 28,606 | Add [VMamba] model | {
"login": "dmus",
"id": 464378,
"node_id": "MDQ6VXNlcjQ2NDM3OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/464378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmus",
"html_url": "https://github.com/dmus",
"followers_url": "https://api.github.com/users/dmus/followers",
"following_url": "https://api.github.com/users/dmus/following{/other_user}",
"gists_url": "https://api.github.com/users/dmus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmus/subscriptions",
"organizations_url": "https://api.github.com/users/dmus/orgs",
"repos_url": "https://api.github.com/users/dmus/repos",
"events_url": "https://api.github.com/users/dmus/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Thank you for your attention. I am one of the authors of `VMamba`. We have just renewed the repo with code easier to transplanting. I hope this would helps you in your splendid work!"
] | 1,705 | 1,706 | null | NONE | null | ### Model description
VMamba is a visual foundation model proposed in https://arxiv.org/pdf/2401.10166.pdf.
It is inspired by the recent advances in state stace models and in particular Mamba. The proposed architecture is computationally more efficient than vision transformer architectures because it scales linearly with growing resolution. It introduces a Cross-Scan Module (CSM) to have context from all directions (4 directions, starting in each corner and traversing in a horizontal or vertical direction). Evaluation on vision perception tasks shows promising capabilities.
Model weights will become available in a few days according to the repo of the authors.
1. [x] (Optional) Understood theoretical aspects
2. [x] Prepared transformers dev environment
3. [x] Set up debugging environment of the original repository
4. [x] Created script that successfully runs forward pass using
original repository and checkpoint
5. [x] Successfully opened a PR and added the model skeleton to Transformers
6. [x] Successfully converted original checkpoint to Transformers
checkpoint
7. [x] Successfully ran forward pass in Transformers that gives
identical output to original checkpoint
8. [x] Finished model tests in Transformers
9. [ ] ~~Successfully added Tokenizer in Transformers~~
10. [x] Run end-to-end integration tests
11. [x] Finished docs
12. [ ] Uploaded model weights to the hub
13. [x] Submitted the pull request for review
14. [ ] (Optional) Added a demo notebook
I am opening the issue to avoid duplicate work. My main motivation for porting this model is to learn a bit more about it (and about the internals of 🤗 Transformers). Some of you probably know this library much better than me, so feel free to write your own implementation if you can do it better or quicker. Otherwise, don’t hesitate to build on top of my fork.
### Open source status
- [X] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
- Original repo: https://github.com/MzeroMiko/VMamba
- Paper: https://arxiv.org/pdf/2401.10166.pdf
- implementation in progress:
- youtube vmamba vs vision mamba: https://www.youtube.com/watch?v=RtHDu6kFPb8
- vision mamba paper (similar idea): https://arxiv.org/pdf/2401.09417.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28606/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28606/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28605/comments | https://api.github.com/repos/huggingface/transformers/issues/28605/events | https://github.com/huggingface/transformers/pull/28605 | 2,090,707,493 | PR_kwDOCUB6oc5kjkuO | 28,605 | Falcon: removed unused function | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | MEMBER | null | # What does this PR do?
(see title) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28605/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28605",
"html_url": "https://github.com/huggingface/transformers/pull/28605",
"diff_url": "https://github.com/huggingface/transformers/pull/28605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28605.patch",
"merged_at": 1706370779000
} |
https://api.github.com/repos/huggingface/transformers/issues/28604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28604/comments | https://api.github.com/repos/huggingface/transformers/issues/28604/events | https://github.com/huggingface/transformers/pull/28604 | 2,090,532,106 | PR_kwDOCUB6oc5ki93N | 28,604 | fix a hidden bug of `GenerationConfig`, now the `generation_config.json` can be loaded successfully | {
"login": "ParadoxZW",
"id": 32508168,
"node_id": "MDQ6VXNlcjMyNTA4MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/32508168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParadoxZW",
"html_url": "https://github.com/ParadoxZW",
"followers_url": "https://api.github.com/users/ParadoxZW/followers",
"following_url": "https://api.github.com/users/ParadoxZW/following{/other_user}",
"gists_url": "https://api.github.com/users/ParadoxZW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParadoxZW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParadoxZW/subscriptions",
"organizations_url": "https://api.github.com/users/ParadoxZW/orgs",
"repos_url": "https://api.github.com/users/ParadoxZW/repos",
"events_url": "https://api.github.com/users/ParadoxZW/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParadoxZW/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ParadoxZW 👋 \r\n\r\nThe exception message you got, `not supported between instances of 'str' and 'int'`, seems to point at a failed attempt to sort `str` and `int`. In turn, this means `int` was used as a key of the json file, which is not intended for a `GenerationConfig`. May I ask for a short reproducer of the problem?\r\n\r\nSorting the keys is desirable to visually inspect the files, so I want to be sure what problem we are exactly fixing.",
"@gante Thanks for your reply.\r\n\r\nI'm so sorry that the code cannot be public for now. But it is something like that\r\n\r\n```Python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntorch.set_default_device(\"cuda\")\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"../Test_model\", \r\n torch_dtype=torch.float16, \r\n device_map=\"auto\",\r\n trust_remote_code=True)\r\n```\r\n\r\nAnd I can provide the `config_dict` (a variable in `to_json_string(self, use_diff: bool = True)` function) that causes this error:\r\n\r\n```\r\n{\r\n 'max_length': 4096, \r\n 'pad_token_id': 50256, \r\n 'bos_token_id': 50256, \r\n 'eos_token_id': 50295, \r\n 'transformers_version': '4.31.0', \r\n 'no_split_module_classes': ['ParallelBlock'], \r\n 'special_dtypes': {}, \r\n 'max_memory': {\r\n 0: 1742864712, \r\n 1: 1742864712, \r\n 2: 1742864712, \r\n 3: 11071258624, \r\n 'cpu': 321020162048\r\n }\r\n}\r\n```",
"@ParadoxZW I see. The root issue is that the GPU indexes are stored as integers. Since `max_memory` is not a generation parameter, I will not accept this PR: it supports a custom use case at the cost of readability for all Hub users.\r\n\r\nMy suggestion would be to store `max_memory` as \r\n```\r\n'max_memory': {\r\n '0': 1742864712, \r\n '1': 1742864712, \r\n '2': 1742864712, \r\n '3': 11071258624, \r\n 'cpu': 321020162048\r\n}\r\n```\r\nand to handle it on your side accordingly 🤗 ",
"I've made another commit and maintained `sort_keys=True`, while solved this problem in a more nice way.",
"And it seems that `max_memory` is not set by me or any other custom configuration files. It somehow is automatically generated by the program.",
"Thanks for your reply @amyeroberts !\r\n\r\nI've committed your suggestion and committed another change to deal with lists in the `config_dict`.\r\n\r\np.s.\r\n\r\n> It seems to be flagging a more general issue about these values being stored in the config: why is this happening in the generation config and not the model's config?\r\n\r\nWhen I was debugging, I found there are some inheritance relationships between the generation config and the model's config. It seems that model's config initialization process also calls the `to_json_string` method defined in `GenerationConfig` class (for some logging procedure I guess). Maybe I'm wrong. 😄 ",
"😱 why there are failed tests?",
"@ParadoxZW Probably unrelated to this PR :) Seems the run just timed out. I've set it to re-run. ",
"OK! So will this PR be merged? 😄 @amyeroberts ",
"@ParadoxZW - yes, now that everything has passed :) ",
"For future reference of anyone visiting the PR - merged as fix was OK. Will wait for @gante to come back from vacation to confirm the reason for the extra params in the generation config. ",
"@amyeroberts some models, like Whisper, have a large set of generation options exclusive to the model. Mostly audio models.\r\n\r\nThe alternative to having custom options would be to create a `GenerationConfig` class for each model, like we do with the model config. However, since the exceptions are so infrequent, imposing a new additional file per model would be wasteful. It also enable our users to do all sort of weird things, at the expense of issues like this once in a while :D\r\n"
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
I was developing an open-source LLM project, and I found that the `generation_config.json` file cannot be successfully loaded to control the model's generation process, even though I've written some attributes in this file, such as `eos_token_id` (a newly initialized model object from the `from_pretrained` API did not get the correct `eos_token_id`).
I am aware that there are many workarounds to control the generation process instead of using `generation_config.json`. But I still want to use `generation_config.json` with no extra code, as it should be the standard way. So I dived into the source code of the `GenerationConfig` class and spent hours debugging.
The initialization process is called several times during the initialization of a pretrained model, but I found the last call to be very strange. Using the following code:
```Python
print('before')
logger.info(f"Configuration saved in {output_config_file}") # original L594 of `transformers/generation/configuration_utils.py`
print('after')
```
`before` was printed but `after` was not, as if the function suddenly returned or broke at this point. This gave me a clue that there was some problem in the `__repr__` method of `GenerationConfig`. Continuing to dive in, I finally located the bug:
```Python
try:
    return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" # original L991 of `transformers/generation/configuration_utils.py`
except Exception as e:
    print(e)
    raise
```
It gave me the error message `not supported between instances of 'str' and 'int'`. So it seems that there is some dirty code like `try: ... except: pass` outside of the `GenerationConfig` class that swallows the exception. Nevertheless, I could finally solve the problem with
```Python
return json.dumps(config_dict, indent=2, sort_keys=False) + "\n"
```
Although only one line of code needs to be changed, I believe it's a very hidden bug that one may spend an entire afternoon finding. Now we can successfully load `generation_config.json` and correctly configure the model's generation behavior.
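For reference, an alternative that keeps the sorted, human-readable output is to normalize the keys before dumping; this is only a sketch of the idea, not the change made in this PR:
```python
import json

def _stringify_keys(obj):
    """Recursively convert dict keys to strings so sort_keys=True cannot fail on mixed int/str keys."""
    if isinstance(obj, dict):
        return {str(k): _stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [_stringify_keys(v) for v in obj]
    return obj

# The max_memory dict from the report above mixes int GPU indices with the "cpu" key.
config_dict = {"max_length": 4096, "max_memory": {0: 1742864712, "cpu": 321020162048}}
print(json.dumps(_stringify_keys(config_dict), indent=2, sort_keys=True))
```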
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- generate: @gante
- Big Model Inference: @SunMarc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28604",
"html_url": "https://github.com/huggingface/transformers/pull/28604",
"diff_url": "https://github.com/huggingface/transformers/pull/28604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28604.patch",
"merged_at": 1706032118000
} |
https://api.github.com/repos/huggingface/transformers/issues/28603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28603/comments | https://api.github.com/repos/huggingface/transformers/issues/28603/events | https://github.com/huggingface/transformers/issues/28603 | 2,090,397,789 | I_kwDOCUB6oc58mPBd | 28,603 | Error Using Ray Tune because of the repo id | {
"login": "matiasfreitas",
"id": 36213075,
"node_id": "MDQ6VXNlcjM2MjEzMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36213075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matiasfreitas",
"html_url": "https://github.com/matiasfreitas",
"followers_url": "https://api.github.com/users/matiasfreitas/followers",
"following_url": "https://api.github.com/users/matiasfreitas/following{/other_user}",
"gists_url": "https://api.github.com/users/matiasfreitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matiasfreitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matiasfreitas/subscriptions",
"organizations_url": "https://api.github.com/users/matiasfreitas/orgs",
"repos_url": "https://api.github.com/users/matiasfreitas/repos",
"events_url": "https://api.github.com/users/matiasfreitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matiasfreitas/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @matiasfreitas, thanks for raising this issue! \r\n\r\nThird-party integrations such as ray are maintained by their contributors, rather than the transformers team. In this case, it seems the repo id being passed is a logged step in the training. If you or anyone else in the community would like to open a PR to update, we'd be happy to review. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.30.0
- Platform: Linux-6.6.6-76060606-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
2024-01-19 11:54:21,786 ERROR tune_controller.py:911 -- Trial task failed for trial _objective_750f9_00002
Traceback (most recent call last):
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
result = ray.get(future)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 24, in auto_init_wrapper
return fn(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/_private/worker.py", line 2524, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(HFValidationError): ray::ImplicitFunc.train() (pid=198543, ip=192.168.1.83, actor_id=d99794869b3e484929cd90d501000000, repr=_objective)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 375, in train
raise skipped from exception_cause(skipped)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 349, in entrypoint
return self._trainable_func(
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 666, in _trainable_func
output = fn()
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 350, in dynamic_modules_import_trainable
return trainable(*args, **kwargs)
File "/home/matiasfg/anaconda3/envs/Dataset_Objects/lib/python3.10/site-packages/ray/tune/trainable/util.py", line 325, in inner
return trainable(config, **fn_kwargs)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/integrations/integration_utils.py", line 251, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1514, in train
self.model = self.call_model_init(trial)
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1260, in call_model_init
model = self.model_init(trial)
File "/tmp/ipykernel_197623/1278964700.py", line 8, in getModel
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2600, in from_pretrained
resolved_config_file = cached_file(
File "/home/matiasfg/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 431, in cached_file
resolved_file = hf_hub_download(
File "/home/matiasfg/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/home/matiasfg/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '{'learning_rate': 8.288916866885153e-06, 'num_train_epochs': 5, 'seed': 24.443485457985144, 'per_device_train_batch_size': 16}'.
```
### Who can help?
@muellerzr @pacman100 @richardliaw @amogkam
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the code below with any trainer should be enough to reproduce this, but I'm not sure.
trainer.hyperparameter_search(direction="maximize", backend="ray", n_trials=10)
trainer.train()
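For reference, a minimal sketch of the kind of `model_init` involved (illustrative names and checkpoint, not my exact code); the repo id itself is a fixed string and should never be replaced by the trial's hyperparameter dict:
```python
from transformers import AutoModelForSequenceClassification

def getModel(trial=None):
    # The checkpoint name stays fixed; only hyperparameters such as the
    # learning rate or number of epochs should vary between trials.
    return AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",  # placeholder for the checkpoint actually used
        num_labels=2,
    )
```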
### Expected behavior
A repo id that conforms to the validator's requirements (i.e. the actual model name, not the trial's hyperparameter dict) should be used. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28603/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28602/comments | https://api.github.com/repos/huggingface/transformers/issues/28602/events | https://github.com/huggingface/transformers/pull/28602 | 2,090,371,299 | PR_kwDOCUB6oc5kiag2 | 28,602 | [`GPTNeoX`] Fix BC issue with 4.36 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
We broke some of the GPTNeoX model with the dtype in #25830. Sorry for the inconvenience.
This was breaking the logits; this PR will probably make the model slower than it was with the cast to a smaller dtype.
Fixes #28360, fixes #28316.
For a sample generation I am seeing some slowdown. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28602",
"html_url": "https://github.com/huggingface/transformers/pull/28602",
"diff_url": "https://github.com/huggingface/transformers/pull/28602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28602.patch",
"merged_at": 1705856480000
} |
https://api.github.com/repos/huggingface/transformers/issues/28601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28601/comments | https://api.github.com/repos/huggingface/transformers/issues/28601/events | https://github.com/huggingface/transformers/pull/28601 | 2,090,366,803 | PR_kwDOCUB6oc5kiZec | 28,601 | Add config tip to custom model docs | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | MEMBER | null | Mentioned this to @LysandreJik earlier - this PR adds a tip to the docs on uploading custom code models to encourage users to use a monolithic config that gets passed to sub-layers, like we use in core `transformers` code. Some models that didn't do this were very painful to port and required rewrites to all layers, so encouraging users to do this earlier might help a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28601/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28601",
"html_url": "https://github.com/huggingface/transformers/pull/28601",
"diff_url": "https://github.com/huggingface/transformers/pull/28601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28601.patch",
"merged_at": 1705931165000
} |
https://api.github.com/repos/huggingface/transformers/issues/28600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28600/comments | https://api.github.com/repos/huggingface/transformers/issues/28600/events | https://github.com/huggingface/transformers/pull/28600 | 2,090,197,596 | PR_kwDOCUB6oc5khzVS | 28,600 | RWKV: raise informative exception when attempting to manipulate `past_key_values` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28600). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | MEMBER | null | # What does this PR do?
Some generation methods (like the new ngram speculation thingy) need to manipulate `past_key_values`. RWKV, a recurrent neural network, doesn't have this attribute -- a standard `AttributeError` is raised when such methods are called with RWKV. (related comment: https://github.com/huggingface/transformers/pull/27775#issuecomment-1897404295)
This PR improves the error message, explaining what's happening and what to do.
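To give an idea of the shape of the new message (illustrative sketch only, not the literal string in the diff):
```python
# Sketch: raise a descriptive error instead of a bare AttributeError when a
# decoding strategy tries to manipulate the (non-existent) cache.
raise AttributeError(
    "You tried to manipulate `past_key_values`, but RWKV is a recurrent model "
    "that does not use this cache format; this generation strategy is not "
    "supported for RWKV."
)
```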
NOTE: some newer RWKV variants use custom modeling code, so this PR won't affect them. I'll point the users to this PR if the issue pops up. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28600/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28600",
"html_url": "https://github.com/huggingface/transformers/pull/28600",
"diff_url": "https://github.com/huggingface/transformers/pull/28600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28600.patch",
"merged_at": 1705673376000
} |
https://api.github.com/repos/huggingface/transformers/issues/28599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28599/comments | https://api.github.com/repos/huggingface/transformers/issues/28599/events | https://github.com/huggingface/transformers/issues/28599 | 2,090,169,443 | I_kwDOCUB6oc58lXRj | 28,599 | [Kosmos-2] | {
"login": "basteran",
"id": 27162097,
"node_id": "MDQ6VXNlcjI3MTYyMDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/27162097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/basteran",
"html_url": "https://github.com/basteran",
"followers_url": "https://api.github.com/users/basteran/followers",
"following_url": "https://api.github.com/users/basteran/following{/other_user}",
"gists_url": "https://api.github.com/users/basteran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/basteran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/basteran/subscriptions",
"organizations_url": "https://api.github.com/users/basteran/orgs",
"repos_url": "https://api.github.com/users/basteran/repos",
"events_url": "https://api.github.com/users/basteran/events{/privacy}",
"received_events_url": "https://api.github.com/users/basteran/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi, see my comment\r\n\r\nhttps://github.com/microsoft/unilm/issues/1429#issuecomment-1900139771\r\n\r\n(I just saw you also opened an issue here before I replied there)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi I am still working on this issue with @ydshieh , we will update it whenever we have news!"
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.10.0
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.26.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
I think the person in charge of Kosmos-2 is @ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
*This issue refers to another issue reported on [the official Kosmos repository](https://github.com/microsoft/unilm/issues/1429)!*
Hello everyone, thank you very much for your contribution. I appreciate the effort and consistency in uploading the code for such many models and maintaining this repository.
I saw Kosmos-2 and quickly thought I could fine-tune it on my downstream task, but I couldn't find any example of how to do it. I see there is a short "guide" for training the model [on the official Kosmos repository](https://github.com/microsoft/unilm/tree/master/kosmos-2#training), but I don't know whether it refers to pre-training or to further fine-tuning; I'm interested in the latter.
So I tried to implement it myself using the `transformers` library, but I'm getting errors during the Fine-Tuning procedure.
```python
model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224", device_map="auto")
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224", device_map="auto")
# load dummy dataset from json file
train_data = load_dataset("json", data_files=tmp_train_file_name)
val_data = load_dataset("json", data_files=tmp_val_file_name)
# process the inputs, i.e. images and texts
def kosmos2_collate_fn(examples):
    images, texts = [], []
    for example in examples:
        image = Image.open(example['image_path'])
        images.append(image)
        texts.append(example['input_text'])
    inputs = processor(text=texts, images=images, return_tensors="pt").to(model.device)
    return Dataset.from_dict(inputs)
new_train_data = kosmos2_collate_fn(train_data)
new_val_data = kosmos2_collate_fn(val_data)
training_arguments = TrainingArguments(
remove_unused_columns=False,
per_device_train_batch_size=MICRO_BATCH_SIZE,
gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
warmup_ratio=0,
num_train_epochs=EPOCHS,
learning_rate=LEARNING_RATE,
logging_strategy="steps",
logging_steps=1,
optim="adamw_torch",
evaluation_strategy="epoch",
save_strategy="epoch",
output_dir=OUTPUT_DIR,
save_total_limit=1,
load_best_model_at_end=True,
label_names=["labels"]
)
trainer = Trainer(
model=model,
train_dataset=new_train_data,
eval_dataset=new_val_data,
args=training_arguments,
)
trainer.train()
```
and the resulting errors:
```console
Generating train split: 40 examples [00:00, 8627.15 examples/s]
Generating train split: 6 examples [00:00, 2428.20 examples/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
0%| | 0/10 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/user/kosmos2/train.py", line 193, in <module>
trainer.train()
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train
return inner_training_loop(
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2735, in training_step
loss = self.compute_loss(model, inputs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2776, in compute_loss
raise ValueError(
ValueError: The model did not return a loss from the inputs, only the following keys: logits,past_key_values,image_embeds,projection_attentions,vision_model_output. For reference, the inputs it received are pixel_values,input_ids,attention_mask,image_embeds_position_mask.
0%| | 0/10 [00:03<?, ?it/s]
```
I can't figure out the issue. It says that the model did not return a loss, which means it didn't compute it. It looks like the `processor` did not return any `labels` and the `Trainer` could not compute the loss...
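For reference, the only workaround I could think of was to add `labels` myself in the collate function, roughly like this (a sketch; I'm assuming the model accepts a `labels` tensor shaped like `input_ids`, which may not be the intended usage):
```python
def kosmos2_collate_fn_with_labels(examples):
    images = [Image.open(example['image_path']) for example in examples]
    texts = [example['input_text'] for example in examples]
    inputs = processor(text=texts, images=images, padding=True, return_tensors="pt")
    # Guess: supervise the text tokens by copying input_ids into labels so the
    # model can compute a loss; padding handling is probably still needed.
    inputs["labels"] = inputs["input_ids"].clone()
    return inputs
```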
### Expected behavior
I would expect to train the model on my data, i.e. to compute the loss, perform gradient updates, etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28599/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28598/comments | https://api.github.com/repos/huggingface/transformers/issues/28598/events | https://github.com/huggingface/transformers/issues/28598 | 2,089,686,976 | I_kwDOCUB6oc58jhfA | 28,598 | what is the correct format of input when fine-tuning GPT2 for text generation with batch input? | {
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"sure. thanks!"
] | 1,705 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.33.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to fine-tune GPT-2 for text generation with batch input, and I use the following code to format the batch input:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained(r'E:\pythonWork\models\gpt2')
max_length = 8
datas = [
"The dog.",
"The cute dog.",
]
model_input = tokenizer(datas)
print('original input:\n', model_input)
# prepare for batch input
# I add the bos token at the start and the eos token at the end, and add pad tokens on the right to pad the sentences to the
# same length. bos_token_id = eos_token_id = 50256, and there is no pad token, so I also use 50256 as the pad token.
labels_list = []
for i in range(len(datas)):
    input_ids = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id]  # add bos and eos tokens
    input_ids = input_ids + max(0, max_length-len(input_ids))*[tokenizer.eos_token_id]  # add padding tokens
    attention_mask = [1] + model_input['attention_mask'][i] + [1]  # attend to bos and eos tokens
    attention_mask = attention_mask + max(0, max_length - len(attention_mask)) * [0]  # don't attend to padding tokens
    labels = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id]  # take loss for bos and eos
    labels = labels + max(0, max_length - len(labels)) * [-100]  # padding doesn't take loss
    model_input['input_ids'][i] = input_ids
    model_input['attention_mask'][i] = attention_mask
    labels_list.append(labels)
model_input['labels'] = labels_list
print('batch input:\n', model_input)
```
The printed output:
```
original input:
{'input_ids': [[464, 3290, 13], [464, 13779, 3290, 13]],
'attention_mask': [[1, 1, 1], [1, 1, 1, 1]]}
batch input:
{'input_ids': [[50256, 464, 3290, 13, 50256, 50256, 50256, 50256], [50256, 464, 13779, 3290, 13, 50256, 50256, 50256]],
'attention_mask': [[1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1, 0, 0]],
'labels': [[50256, 464, 3290, 13, 50256, -100, -100, -100], [50256, 464, 13779, 3290, 13, 50256, -100, -100]]}
```
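For comparison, here is a shorter version of the same formatting that lets the tokenizer do the padding (untested sketch; whether this is equivalent is part of my question):
```python
# Reuse the eos token as the pad token, since GPT-2 has none by default.
tokenizer.pad_token = tokenizer.eos_token

batch = tokenizer(
    [tokenizer.bos_token + text + tokenizer.eos_token for text in datas],
    padding="max_length",
    max_length=max_length,
    return_tensors="pt",
)
# Copy input_ids into labels and mask the padding so it takes no loss.
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100
batch["labels"] = labels
```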
### Expected behavior
My questions:
1. Is the method I use to format the batch input correct?
2. Why can't the GPT-2 tokenizer automatically format batch input the way the BERT tokenizer does?
3. In this pre-training [demo](https://huggingface.co/learn/nlp-course/en/chapter7/6?fw=pt#preparing-the-dataset),
I found that it doesn't add bos and eos tokens, and adds a pad token only at the end of the sequence.
So I think that at pre-training time you only need to add pad tokens to keep the sequence lengths consistent.
But when it comes to fine-tuning, additional eos tokens need to be added, and the eos token needs to take loss because the model has to learn when to stop generating.
Am I right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28598/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28597/comments | https://api.github.com/repos/huggingface/transformers/issues/28597/events | https://github.com/huggingface/transformers/issues/28597 | 2,089,437,004 | I_kwDOCUB6oc58ikdM | 28,597 | How to find or create the `model_state_dict.bin` file for the `convert_llava_weights_to_hf.py` script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @isaac-vidas \r\nThanks very much for your message, I think that these [two lines](https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L96-L97) seems wrong, does replacing them with `model.get_input_embeddings().weight.shape[0]` fixes the issue? ",
"You could also just remove these two lines as `resize_token_embeddings` method applies the changes to the config already\r\n\r\nP.S. Also had to remove `self_attn.rotary_emb.inv_freq` keys to perform the conversion from the checkpoint uploaded to the hub",
"Added a PR related to the discussion above https://github.com/huggingface/transformers/pull/28617"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | Hi @younesbelkada,
Following up on the [fix to the LLaVA convert script](https://github.com/huggingface/transformers/pull/28570) and thanks for all the help with the PR!
I encountered some issues with the convert script and wanted to ask about the recommended way to create the `model_state_dict.bin` file specified here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L74
In order to create the `model_state_dict.bin` I tried something like the following with the original https://github.com/haotian-liu/LLaVA code:
```python
import torch
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM
# load model
kwargs = {"device_map": "auto", "torch_dtype": torch.float16}
model = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", low_cpu_mem_usage=True, **kwargs)
# load vision tower
model.get_vision_tower().load_model()
# Save state dict
torch.save(model.state_dict(), "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```
It works but when I used the convert script I had to make the following changes:
* Remove keys that ended with `.inv_freq` (e.g. `language_model.model.layers.0.self_attn.rotary_emb.inv_freq`)
* Comment out the update to the `model.config.vocab_size` and `model.config.text_config.vocab_size` with the `pad_shape` here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L96-L97; otherwise, when I tried to load the converted model, it errored with the following:
```python
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "Shopify/llava-1.5-7b"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
```
```console
ValueError: Trying to set a tensor of shape torch.Size([32064, 5120]) in "weight" (which has shape torch.Size([32128, 5120])), this look incorrect.
```
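For completeness, the key filtering mentioned in the first bullet above looked roughly like this (illustrative, not my exact code):
```python
import torch

state_dict = torch.load("tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
# Drop the rotary-embedding buffers that the convert script does not expect.
state_dict = {k: v for k, v in state_dict.items() if not k.endswith(".inv_freq")}
torch.save(state_dict, "tmp/hf_models/llava-v1.5-7b/model_state_dict.bin")
```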
Am I doing something wrong when I create the `model_state_dict.bin` file or am I missing something else?
Thanks again in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28596/comments | https://api.github.com/repos/huggingface/transformers/issues/28596/events | https://github.com/huggingface/transformers/issues/28596 | 2,089,426,891 | I_kwDOCUB6oc58ih_L | 28,596 | HfDeepSpeedConfig + ZeRO3 init accuracy bug! | {
"login": "hijkzzz",
"id": 19810594,
"node_id": "MDQ6VXNlcjE5ODEwNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19810594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hijkzzz",
"html_url": "https://github.com/hijkzzz",
"followers_url": "https://api.github.com/users/hijkzzz/followers",
"following_url": "https://api.github.com/users/hijkzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/hijkzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hijkzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hijkzzz/subscriptions",
"organizations_url": "https://api.github.com/users/hijkzzz/orgs",
"repos_url": "https://api.github.com/users/hijkzzz/repos",
"events_url": "https://api.github.com/users/hijkzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hijkzzz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"https://github.com/microsoft/DeepSpeed/issues/4932",
"Hi @hijkzzz, thanks for raising an issue! \r\n\r\ncc @pacman100 @ArthurZucker cf [this comment](https://github.com/microsoft/DeepSpeed/issues/4932#issuecomment-1902177942)",
"Also cc @gante as it seems related to ROPE! ",
"To prevent duplication, let's focus all discussion on https://github.com/huggingface/transformers/issues/28685 🤗 "
] | 1,705 | 1,706 | null | NONE | null | ### System Info
see https://github.com/microsoft/DeepSpeed/issues/4932
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://github.com/microsoft/DeepSpeed/issues/4932
### Expected behavior
https://github.com/microsoft/DeepSpeed/issues/4932 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28596/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28595/comments | https://api.github.com/repos/huggingface/transformers/issues/28595/events | https://github.com/huggingface/transformers/issues/28595 | 2,089,424,034 | I_kwDOCUB6oc58ihSi | 28,595 | Trainer is DP? support DDP? | {
"login": "ciaoyizhen",
"id": 83450192,
"node_id": "MDQ6VXNlcjgzNDUwMTky",
"avatar_url": "https://avatars.githubusercontent.com/u/83450192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciaoyizhen",
"html_url": "https://github.com/ciaoyizhen",
"followers_url": "https://api.github.com/users/ciaoyizhen/followers",
"following_url": "https://api.github.com/users/ciaoyizhen/following{/other_user}",
"gists_url": "https://api.github.com/users/ciaoyizhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciaoyizhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciaoyizhen/subscriptions",
"organizations_url": "https://api.github.com/users/ciaoyizhen/orgs",
"repos_url": "https://api.github.com/users/ciaoyizhen/repos",
"events_url": "https://api.github.com/users/ciaoyizhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciaoyizhen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Can it be DDP by adding fsdp=\"shard_grap_op\" of TrainingArguments and fsdp=\"\"full_shard\" be DP?",
"Hi @ciaoyizhen, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nWe suggest searching in the forums first to see if any related questions have been asked e.g. https://discuss.huggingface.co/t/which-data-parallel-does-trainer-use-dp-or-ddp/16021/2. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### Feature request
Is the Trainer DP or DDP? If it is DDP, why, when I train with multiple GPUs, is the memory consumed on cuda:0 much larger than on the other GPUs? Or is it that when I increase per_device_train_batch_size, cuda:0 runs out of memory and the model parameters are then split across the other cards automatically? Or do I need to set any parameters? Please give an example. I asked ChatGPT, and it answered that the Trainer uses DP.
### Motivation
DDP is more useful than DP.
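To make the request concrete, this is the kind of check I have in mind (assuming `TrainingArguments.parallel_mode` actually reflects DP vs. DDP; I may be wrong about that):
```python
from transformers import TrainingArguments

# When launched with torchrun, I would expect DISTRIBUTED (i.e. DDP); when
# launched with plain python on a multi-GPU machine, NOT_DISTRIBUTED (i.e. DP).
args = TrainingArguments(output_dir="out")
print(args.parallel_mode, args.n_gpu)
```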
### Your contribution
If supported, could you tell me how to use DDP? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28595/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28594/comments | https://api.github.com/repos/huggingface/transformers/issues/28594/events | https://github.com/huggingface/transformers/pull/28594 | 2,089,335,501 | PR_kwDOCUB6oc5kezMw | 28,594 | Test | {
"login": "ibarrionuevo",
"id": 27731841,
"node_id": "MDQ6VXNlcjI3NzMxODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/27731841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibarrionuevo",
"html_url": "https://github.com/ibarrionuevo",
"followers_url": "https://api.github.com/users/ibarrionuevo/followers",
"following_url": "https://api.github.com/users/ibarrionuevo/following{/other_user}",
"gists_url": "https://api.github.com/users/ibarrionuevo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibarrionuevo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibarrionuevo/subscriptions",
"organizations_url": "https://api.github.com/users/ibarrionuevo/orgs",
"repos_url": "https://api.github.com/users/ibarrionuevo/repos",
"events_url": "https://api.github.com/users/ibarrionuevo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibarrionuevo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | NONE | null | This is a test pull request created for CI/CD vulnerability testing purposes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28594/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28594",
"html_url": "https://github.com/huggingface/transformers/pull/28594",
"diff_url": "https://github.com/huggingface/transformers/pull/28594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28594.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28593/comments | https://api.github.com/repos/huggingface/transformers/issues/28593/events | https://github.com/huggingface/transformers/issues/28593 | 2,088,952,370 | I_kwDOCUB6oc58guIy | 28,593 | ViltForTokenClassification not working for personalize multiclass classification. | {
"login": "matiasfreitas",
"id": 36213075,
"node_id": "MDQ6VXNlcjM2MjEzMDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/36213075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matiasfreitas",
"html_url": "https://github.com/matiasfreitas",
"followers_url": "https://api.github.com/users/matiasfreitas/followers",
"following_url": "https://api.github.com/users/matiasfreitas/following{/other_user}",
"gists_url": "https://api.github.com/users/matiasfreitas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matiasfreitas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matiasfreitas/subscriptions",
"organizations_url": "https://api.github.com/users/matiasfreitas/orgs",
"repos_url": "https://api.github.com/users/matiasfreitas/repos",
"events_url": "https://api.github.com/users/matiasfreitas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matiasfreitas/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @matiasfreitas \r\nThanks for the issue! Can you past the full traceback of the issue here? 🙏 ",
"```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n/tmp/ipykernel_133949/2760725053.py in <cell line: 84>()\r\n 82 \r\n 83 #The warning is going to be solved when fine tunning below\r\n---> 84 trainer.train()\r\n 85 trainer.save_model(final_step)\r\n\r\n~/.local/lib/python3.10/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1553 hf_hub_utils.enable_progress_bars()\r\n 1554 else:\r\n-> 1555 return inner_training_loop(\r\n 1556 args=args,\r\n 1557 resume_from_checkpoint=resume_from_checkpoint,\r\n\r\n~/.local/lib/python3.10/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)\r\n 1858 \r\n 1859 with self.accelerator.accumulate(model):\r\n-> 1860 tr_loss_step = self.training_step(model, inputs)\r\n 1861 \r\n 1862 if (\r\n\r\n~/.local/lib/python3.10/site-packages/transformers/trainer.py in training_step(self, model, inputs)\r\n 2723 \r\n 2724 with self.compute_loss_context_manager():\r\n-> 2725 loss = self.compute_loss(model, inputs)\r\n 2726 \r\n 2727 if self.args.n_gpu > 1:\r\n\r\n~/.local/lib/python3.10/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)\r\n 2746 else:\r\n 2747 labels = None\r\n-> 2748 outputs = model(**inputs)\r\n 2749 # Save past state if it exists\r\n 2750 # TODO: this needs to be fixed and made cleaner later.\r\n\r\n~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/.local/lib/python3.10/site-packages/transformers/models/vilt/modeling_vilt.py in forward(self, input_ids, attention_mask, token_type_ids, pixel_values, pixel_mask, head_mask, inputs_embeds, image_embeds, labels, output_attentions, output_hidden_states, return_dict)\r\n 1476 # move labels to correct device to enable PP\r\n 1477 labels = labels.to(logits.device)\r\n-> 1478 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n 1479 \r\n 1480 if not return_dict:\r\n\r\n~/.local/lib/python3.10/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\n~/.local/lib/python3.10/site-packages/torch/nn/modules/loss.py in forward(self, input, target)\r\n 1172 \r\n 1173 def forward(self, input: Tensor, target: Tensor) -> Tensor:\r\n-> 1174 return F.cross_entropy(input, target, weight=self.weight,\r\n 1175 ignore_index=self.ignore_index, reduction=self.reduction,\r\n 1176 label_smoothing=self.label_smoothing)\r\n\r\n~/.local/lib/python3.10/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)\r\n 3027 if size_average is not None or reduce is not None:\r\n 3028 reduction = _Reduction.legacy_get_string(size_average, reduce)\r\n-> 3029 return torch._C._nn.cross_entropy_loss(input, target, 
weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\n 3030 \r\n 3031 \r\n\r\nValueError: Expected input batch_size (640) to match target batch_size (16).\r\n```",
"I forgot to mention, but i changed the line 1472 to\r\n\r\n`logits = self.classifier(sequence_output[:, 0,:])`\r\n\r\nTo be more similar to ViT model. \r\n\r\nAnd I believe ViltForImagesAndTextClassification can have the same problem, but I'm not sure about since I didn't tested this model nor knows well the goal of the model (reasoning, but only one per time?).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
I had several errors trying to use the ViLt code for multiclass classification.
On the lines 1473-1478 of the [modeling_vilt.py](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/vilt/modeling_vilt.py#L1413) we have that code:
```
if labels is not None:
    loss_fct = CrossEntropyLoss()
    # move labels to correct device to enable PP
    labels = labels.to(logits.device)
    loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
Based on my manual testing (I confess I'm not the most skilled to be sure about the theoretical correctness) and on the file [modeling_vit.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.pyl),
I changed these lines to:
```
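# Note (added for clarity, not part of the original snippet): this assumes
# `torch.nn.functional` is importable as `F` inside modeling_vilt.py.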
loss = None
if labels is not None:
    # move labels to correct device to enable model parallelism
    labels = labels.to(logits.device)
    if self.config.problem_type is None:
        if self.num_labels == 1:
            self.config.problem_type = "regression"
        elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
            self.config.problem_type = "single_label_classification"
        else:
            self.config.problem_type = "multi_label_classification"
    if self.config.problem_type == "regression":
        loss_fct = MSELoss()
        if self.num_labels == 1:
            loss = loss_fct(logits.squeeze(), labels.squeeze())
        else:
            loss = loss_fct(logits, labels)
    elif self.config.problem_type == "single_label_classification":
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    elif self.config.problem_type == "multi_label_classification":
        loss_fct = BCEWithLogitsLoss()
        one_hot_labels = F.one_hot(labels, self.num_labels).float()
        loss = loss_fct(logits, one_hot_labels)
```
And the results are fine here on my computer.
I think this should be changed in the library.
### Who can help?
@ArthurZucker @amyeroberts @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running this code:
```
from transformers import DefaultDataCollator, TrainingArguments, Trainer
import evaluate
import numpy as np
from transformers import ViltForTokenClassification, TrainingArguments, Trainer
from torch import Tensor
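# Note (added for clarity): `label` (the label2idx/idx2label dicts), `dataset`
# (the processed train/val splits), and `middle_step` (an output directory) are
# defined elsewhere in my notebook.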
def getModel(path = None):
    if path is None:
        path = "dandelin/vilt-b32-finetuned-nlvr2"
    model = ViltForTokenClassification.from_pretrained(
        path,
        num_labels=len(label['label2idx']),
        id2label=label['idx2label'],
        label2id=label['label2idx'],
        return_dict=True,
        problem_type="multi_label_classification"
    )
    return model
#Very simple data collator that simply collates batches of dict-like
#objects and performs special handling for potential keys named label and label_ids
data_collator = DefaultDataCollator()
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
training_args = TrainingArguments(
output_dir=middle_step,
# Directory where model checkpoints and logs will be saved.
remove_unused_columns=False,
# Whether to remove unused columns from the input data before training.
evaluation_strategy="epoch",
# The evaluation strategy to adopt during training. "epoch" evaluates at the end of each epoch.
save_strategy="epoch",
# The checkpoint save strategy during training. "epoch" saves at the end of each epoch.
learning_rate=5e-5,
# The initial learning rate for the optimizer.
per_device_train_batch_size=16,
# Batch size per GPU or CPU for training.
gradient_accumulation_steps=4,
# Gradient accumulation involves updating the model's weights
# only after accumulating gradients over multiple batches.
# This can be useful when the effective batch size is too large to fit into GPU memory.
# Instead of processing the entire batch at once, the model processes
# smaller batches and accumulates gradients before updating the weights.
per_device_eval_batch_size=16,
# Batch size per GPU or CPU for evaluation.
num_train_epochs=6,
# Total number of training epochs.
warmup_ratio=0.1,
# Ratio of total training steps used for warmup.
logging_steps=10,
# Log every n updates steps.
load_best_model_at_end=True,
# Whether or not to load the best model found at the end of training.
metric_for_best_model="accuracy",
# Metric used to determine the best model, e.g., "accuracy".
)
trainer = Trainer(
model_init=getModel,
args=training_args,
data_collator=data_collator,
train_dataset=dataset["train"],
eval_dataset=dataset["val"],
compute_metrics=compute_metrics,
)
```
### Expected behavior
Not raise a error on the line loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) because of the non match sizes of the tensor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28593/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28592/comments | https://api.github.com/repos/huggingface/transformers/issues/28592/events | https://github.com/huggingface/transformers/issues/28592 | 2,088,835,235 | I_kwDOCUB6oc58gRij | 28,592 | Mixtral gets stuck at Loading checkpoint shards. | {
"login": "AdamLouly",
"id": 27873459,
"node_id": "MDQ6VXNlcjI3ODczNDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdamLouly",
"html_url": "https://github.com/AdamLouly",
"followers_url": "https://api.github.com/users/AdamLouly/followers",
"following_url": "https://api.github.com/users/AdamLouly/following{/other_user}",
"gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions",
"organizations_url": "https://api.github.com/users/AdamLouly/orgs",
"repos_url": "https://api.github.com/users/AdamLouly/repos",
"events_url": "https://api.github.com/users/AdamLouly/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdamLouly/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @AdamLouly, thanks for raising an issue! \r\n\r\nCould you share a reproducible snippet to show how you're loading the model? \r\nFor the transformers version - could you share the commit you're running on?\r\n\r\ncc @ArthurZucker @younesbelkada ",
"hi @AdamLouly \r\nAs stated by @amyeroberts , a reproducible snippet would be much appreciated! Also how do you run your script? `python xx.py` or `accelerate launch xxx.py` or `python -m torch.distributed.run xx.py` etc. 🙏 ",
"Hi @amyeroberts \r\nthis is how I install. transformers:\r\npip install git+https://github.com/huggingface/transformers\r\nso basically, latest version from source.\r\n\r\nHey @younesbelkada \r\nthis is the snippet:\r\n\r\n\r\n`#!/bin/bash\r\n\r\nds_config=$(mktemp --suffix \".json\")\r\necho \"The DeepSpeed config is put at $ds_config\"\r\n\r\ncat <<EOF > $ds_config\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 20000000,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 20000000,\r\n \"contiguous_gradients\": false,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": true\r\n }\r\n },\r\n \"zero_allow_untested_optimizer\": true,\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\"\r\n }\r\n },\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\"\r\n }\r\n },\r\n \"steps_per_print\": 2000,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\nEOF\r\n\r\nnum_gpus=8\r\n\r\ntorchrun --nproc_per_node $num_gpus \\\r\n run_glue.py \\\r\n --model_name_or_path mistralai/Mixtral-8x7B-v0.1 \\\r\n --task_name MRPC \\\r\n --per_device_train_batch_size 1 \\\r\n --do_train \\\r\n --num_train_epochs 5 \\\r\n --output_dir \"./results\" --overwrite_output_dir \\\r\n --save_strategy \"no\" \\\r\n --fp16 --max_steps 100 \\\r\n --gradient_accumulation_steps 1 \\\r\n --learning_rate 0.00001 \\\r\n --adam_beta1 0.9 \\\r\n --adam_beta2 0.999 \\\r\n --adam_epsilon 1e-8 \\\r\n --deepspeed $ds_config`\r\n\r\nThen I run the script this way : bash run_mixt.sh",
"Hi @AdamLouly \r\nNot sure what is wrong here, can you test on one of the latest pypi release to check if this is not a regression? for example you can try: `pip install -U transformers==4.36.2`. Can you also alternatively try with `accelerate`? First run `accelerate config`, select DeepSpeed and run `accelerate launch run_glue.py`",
"Hey @younesbelkada I tested both and still having the same issue.\r\n\r\nHave you managed to reproduce it using the example above?\r\n"
] | 1,705 | 1,708 | null | CONTRIBUTOR | null | ### System Info
Nightly transformers.
nightly torch
8 GPUs
### Who can help?
When trying to run Mixtral using the example in transformers, it gets stuck at "Loading checkpoint shards" at this point:
Loading checkpoint shards: 42%|██████████████████████████████████████████████▋
I noticed this only happens when running on multiple GPUs.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the fine-tuning example in transformers.
### Expected behavior
Loading should complete instead of hanging at:
Loading checkpoint shards: 42%|██████████████████████████████████████████████▋ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28592/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28591/comments | https://api.github.com/repos/huggingface/transformers/issues/28591/events | https://github.com/huggingface/transformers/issues/28591 | 2,088,735,017 | I_kwDOCUB6oc58f5Ep | 28,591 | Idefics - AttentionMasks wrongly set with padding='longest' | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"Cc @ArthurZucker @younesbelkada ",
"Might be a tokenization issue will have a look "
] | 1,705 | 1,708 | null | MEMBER | null | ### System Info
transformers==4.36.2
### Reproduction
Reported by https://huggingface.co/VishnuSuganth
https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/discussions/11
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28591/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28590/comments | https://api.github.com/repos/huggingface/transformers/issues/28590/events | https://github.com/huggingface/transformers/pull/28590 | 2,088,717,209 | PR_kwDOCUB6oc5kcqsX | 28,590 | Fix id2label assignment in run_classification.py | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
This pull request addresses an issue in the `run_classification.py` script where the assignment of the `id2label` attribute in the model's config is incorrect. The current implementation copies `config.label2id` without modifying it, leading to an incorrect mapping. The proposed fix ensures that the `id2label` attribute is assigned based on the correct mapping (`label_to_id`) to resolve this issue.
**Changes Made:**
- Modified the assignment of `id2label` in the `run_classification.py` script to use the correct label-to-id mapping.
**Context:**
This issue was introduced with transformers version 4.36, and the incorrect assignment can lead to unexpected behavior in the script.
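Roughly, the change is along the following lines (sketch only; the variable names in the actual script may differ slightly):
```python
# Before (sketch): id2label effectively mirrored label2id, i.e. label -> id.
# model.config.id2label = {label: idx for label, idx in label_to_id.items()}

# After (sketch): invert the label-to-id mapping so it is id -> label.
model.config.id2label = {idx: label for label, idx in label_to_id.items()}
```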
Fixes #28589
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @ArthurZucker
- @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28590/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28590",
"html_url": "https://github.com/huggingface/transformers/pull/28590",
"diff_url": "https://github.com/huggingface/transformers/pull/28590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28590.patch",
"merged_at": 1705923091000
} |
https://api.github.com/repos/huggingface/transformers/issues/28589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28589/comments | https://api.github.com/repos/huggingface/transformers/issues/28589/events | https://github.com/huggingface/transformers/issues/28589 | 2,088,705,569 | I_kwDOCUB6oc58fx4h | 28,589 | Fix id2label assignment in run_classification.py | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"model.config.id2label = {id: label for label, id in config.label2id.items()}"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/
```
### Expected behavior
**Issue Description:**
The `run_classification.py` script currently has an issue where the assignment of `id2label` in the model's config is incorrect. The problem arises from copying `config.label2id` without modifying it later on. This issue was introduced with transformers version 4.36.
**Steps to Reproduce:**
1. Execute the `run_classification.py` script with a configuration file.
2. Inspect the `id2label` attribute in the model's config.
**Expected Behavior:**
The `id2label` attribute should be assigned correctly, reflecting the label-to-id mapping.
**Actual Behavior:**
The `id2label` attribute is assigned based on the original `config.label2id`, leading to incorrect mapping.
**Proposed Solution:**
Modify the following line in `run_classification.py`:
```python
model.config.id2label = {id: label for label, id in config.label2id.items()}
```
to:
```python
model.config.id2label = {id: label for label, id in label_to_id.items()}
```
This change ensures that the correct mapping is used. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28589/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28588/comments | https://api.github.com/repos/huggingface/transformers/issues/28588/events | https://github.com/huggingface/transformers/pull/28588 | 2,088,643,049 | PR_kwDOCUB6oc5kcab2 | 28,588 | Add tf_keras imports to prepare for Keras 3 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Quick note to myself: The failing test is likely caused by replacing `K.set_value()` with `tf.assign()` somewhere, since `tf.assign()` lacks an implicit dtype cast. The test fails whether or not other tests are run beforehand, so it's not a case of the mixed precision policy accidentally carrying over between tests (which happened to us before)",
"Quick note for @amyeroberts: Although there are a lot of changes in this PR, the core idea is simple, we `import tf_keras as keras` if possible, and if not we fall back to `import keras` and raise an error if the version is >= 3. To avoid copying the boilerplate all over the modeling files, I just do it once in `modeling_tf_utils` and then `from modeling_tf_utils import keras` in all files, but that's probably bad - let me know if you want me to just copy the import try/except block around instead!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28588). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@Rocketknight1 Have you run the slow model tests on this branch? ",
"I haven't, but I'll try it with a few models today!",
"Quick update @amyeroberts - I tested a couple of models with TF 2.14 + Keras 2.14, as well as TF 2.15 + Keras 3. Behaviour was as expected - the TF 2.15 run threw an error and asked me to pip install the `tf-keras` backwards compatibility package. After that was installed, the slow tests all passed.",
"@amyeroberts all comments should be addressed now! The last issue is the pytest stuff that should be resolved by rebasing once the PR for it is merged.",
"In case this isn't already on your radar, just wanted to let you know that the changes in this PR now cause an error for an import like `from transformers import AdamWeightDecay`. The error is:\r\n```\r\nRuntimeError: Failed to import transformers.optimization_tf because of the following error (look up to see its traceback):\r\nmodule 'keras.optimizers.schedules' has no attribute 'LearningRateSchedule'\r\n```\r\nI'm guessing due to the change in these lines here: https://github.com/huggingface/transformers/blob/main/src/transformers/optimization_tf.py#L32\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/optimization_tf.py#L215\r\n(so more generally, the error comes up with `from transformers import optimization_tf` too)\r\n\r\nIt looks like changing `keras.optimizers.schedules.LearningRateSchedule` to `keras.optimizers.schedules.learning_rate_schedule.LearningRateSchedule` might fix it?",
"good catch! Would you like to open a PR for a fix @echan5 ? ",
"Thanks for the warning @echan5! This is quite urgent, I'm on it, and I'm surprised our tests didn't catch it!",
"Thank you @Rocketknight1! (sorry, I didn't get a chance to get to it earlier)"
] | 1,705 | 1,707 | 1,706 | MEMBER | null | Keras 3 will break backward compatibility for our TF code, and is becoming the default Keras in TF 2.16. This PR uses the `tf_keras` package to maintain backward compatibility - it imports tf_keras if available, and if not then it attempts to import keras, but raises an issue if the version is >= 3.
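Concretely, the compatibility shim is along these lines (a sketch; the exact version check and error message in the PR may differ):
```python
try:
    import tf_keras as keras  # Keras 2.x backwards-compatibility package
except (ModuleNotFoundError, ImportError):
    import keras

    # With only Keras 3 installed, the existing TF modeling code cannot run unchanged
    if int(keras.__version__.split(".")[0]) > 2:
        raise ValueError(
            "Your currently installed version of Keras is Keras 3, but the TF models in "
            "Transformers require Keras 2. Please install the backwards-compatible "
            "`tf-keras` package with `pip install tf-keras`."
        )
```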
Our future plan is to ensure that TF code remains backward compatible, but to support Keras 3 in all its framework-independent glory with new Keras classes (e.g. `TFBertModel` -> `KerasBertModel`). The PR for this is at #26224, but it's on hold until we handle the urgent issue of preserving backward compatibility. It was also blocked by the need for a couple of other PRs, but those are mostly in now. Because the full Keras 3 PR will require TF models to be rewritten with 100% Keras ops instead of TF ones, we'll likely need to do a community push once the core modelling code is ready to port everything.
cc @fchollet
Fixes #27377
Fixes #28296
TODO:
- [X] Replace `keras` or `tf.keras` with `tf_keras` falling back to `keras` in core modelling code
- [x] Replace `keras` or `tf.keras` with `tf_keras` falling back to `keras` in all model files
- [x] Confirm versions - should we reject Keras >= 3, or Keras >= 2.16? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28588/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28588",
"html_url": "https://github.com/huggingface/transformers/pull/28588",
"diff_url": "https://github.com/huggingface/transformers/pull/28588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28588.patch",
"merged_at": 1706635596000
} |
https://api.github.com/repos/huggingface/transformers/issues/28587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28587/comments | https://api.github.com/repos/huggingface/transformers/issues/28587/events | https://github.com/huggingface/transformers/pull/28587 | 2,088,598,424 | PR_kwDOCUB6oc5kcQm9 | 28,587 | Support gated Linear Layers for SwitchTransformers | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Looks good I would say ! Do you know if there are any checkpoint on the Hub that uses gated linear layers?\r\n\r\nYes, I am currently training a new model and have the checkpoint privately.\r\nI will release it soon after finishing training and testing :)",
"> Thanks, not really sure what the incentive is for that? This is against our philosophy + no checkpoints use this no?\r\n\r\nHi @ArthurZucker ,\r\n\r\nI have a new large model that was trained on the latest version of the switch-transformer, and it uses the gated method.\r\nI will release the model after training is finished and it is fully tested on downstream tasks. \r\n\r\nThe current implementation is exactly the same as T5 and T5v1.1 on huggingface :\r\n\r\nhttps://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/t5/modeling_t5.py#L333\r\n",
"@agemagician Thanks for your contribution! What I'd suggest is having this model code on the hub. This way you or anyone else in the community can use it. If we see many checkpoints using this, or users requesting it, then we can consider integrating it into the transformers repo. As @ArthurZucker points out, adding custom changes to our modeling code without checkpoints and increased if/else logic doesn't align with the repo's philosophy. ",
"OK, no problem. I will close this pull request and add it to the hub."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
The new version of SwitchTransformers uses a gated linear layer.
This pull request adds support for gated linear layers to SwitchTransformers.
This is very similar to T5 and T5 v1.1 models.
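For reference, the gated variant mirrors T5 v1.1's gated feed-forward block; a minimal sketch (module and dimension names are illustrative, not the exact SwitchTransformers code):
```python
import torch.nn as nn

class GatedDenseActDense(nn.Module):
    """Gated feed-forward: act(wi_0(x)) * wi_1(x), projected back to d_model by wo."""

    def __init__(self, d_model: int, d_ff: int, dropout: float = 0.1):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gating projection
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # value projection
        self.wo = nn.Linear(d_ff, d_model, bias=False)
        self.dropout = nn.Dropout(dropout)
        self.act = nn.GELU(approximate="tanh")  # "gelu_new"-style activation

    def forward(self, hidden_states):
        hidden_states = self.act(self.wi_0(hidden_states)) * self.wi_1(hidden_states)
        hidden_states = self.dropout(hidden_states)
        return self.wo(hidden_states)
```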
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts , @ArthurZucker , @younesbelkada , @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28587/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28587",
"html_url": "https://github.com/huggingface/transformers/pull/28587",
"diff_url": "https://github.com/huggingface/transformers/pull/28587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28587.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28585/comments | https://api.github.com/repos/huggingface/transformers/issues/28585/events | https://github.com/huggingface/transformers/pull/28585 | 2,088,483,355 | PR_kwDOCUB6oc5kb3U0 | 28,585 | Add w2v2bert to pipeline | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Also cc @ydshieh, it would be nice to add the model architecture to the tiny model repo !\r\n\r\nOK, will update next week (if it succeed to create the tiny model though)",
"> second approval\r\n\r\nnothing big from my side other than 2 tiny comments.\r\n\r\n> all slow ASR tests pass\r\n\r\nyes, confirmed"
] | 1,705 | 1,706 | 1,705 | COLLABORATOR | null | # What does this PR do?
https://github.com/huggingface/transformers/pull/28165 introduced a new W2V2-based model that uses a different feature extractor than classic CTC-based models.
In particular, it takes mel-spectrograms as `input_features`, instead of raw waveform as `input_values`.
The pipeline only takes `input_values` for this kind of models, which requires a bit of workaround.
Note that I've also run every slow tests from the ASR pipeline, just to make sure.
cc @amyeroberts @sanchit-gandhi
Also cc @ydshieh, it would be nice to add the model architecture to the tiny model repo ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28585",
"html_url": "https://github.com/huggingface/transformers/pull/28585",
"diff_url": "https://github.com/huggingface/transformers/pull/28585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28585.patch",
"merged_at": 1705663501000
} |
https://api.github.com/repos/huggingface/transformers/issues/28584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28584/comments | https://api.github.com/repos/huggingface/transformers/issues/28584/events | https://github.com/huggingface/transformers/pull/28584 | 2,088,480,788 | PR_kwDOCUB6oc5kb2wP | 28,584 | Don't save `processor_config.json` if a processor has no extra attribute | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Don't save `processor_config.json` if a processor has no extra attribute | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28584/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28584",
"html_url": "https://github.com/huggingface/transformers/pull/28584",
"diff_url": "https://github.com/huggingface/transformers/pull/28584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28584.patch",
"merged_at": 1705658355000
} |
https://api.github.com/repos/huggingface/transformers/issues/28583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28583/comments | https://api.github.com/repos/huggingface/transformers/issues/28583/events | https://github.com/huggingface/transformers/pull/28583 | 2,088,447,016 | PR_kwDOCUB6oc5kbvTa | 28,583 | [`docs`] Improve visualization for vertical parallelism | {
"login": "petergtz",
"id": 3618401,
"node_id": "MDQ6VXNlcjM2MTg0MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3618401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petergtz",
"html_url": "https://github.com/petergtz",
"followers_url": "https://api.github.com/users/petergtz/followers",
"following_url": "https://api.github.com/users/petergtz/following{/other_user}",
"gists_url": "https://api.github.com/users/petergtz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petergtz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petergtz/subscriptions",
"organizations_url": "https://api.github.com/users/petergtz/orgs",
"repos_url": "https://api.github.com/users/petergtz/repos",
"events_url": "https://api.github.com/users/petergtz/events{/privacy}",
"received_events_url": "https://api.github.com/users/petergtz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
The documentation says "We refer to this Model parallelism as "Vertical" because of how models are typically visualized.", but then visualizes the model horizontally. This change makes the visualization actually vertical.
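For reference, a vertical layout stacks the layers top to bottom and slices that stack across devices, along these lines (illustrative only, not the exact diagram added in this PR):
```
| Layer 0 |
| Layer 1 |  GPU 0
| Layer 2 |
-----------
| Layer 3 |
| Layer 4 |  GPU 1
| Layer 5 |
```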
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28583/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28583",
"html_url": "https://github.com/huggingface/transformers/pull/28583",
"diff_url": "https://github.com/huggingface/transformers/pull/28583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28583.patch",
"merged_at": 1706205311000
} |
https://api.github.com/repos/huggingface/transformers/issues/28582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28582/comments | https://api.github.com/repos/huggingface/transformers/issues/28582/events | https://github.com/huggingface/transformers/pull/28582 | 2,088,342,596 | PR_kwDOCUB6oc5kbYbr | 28,582 | Making CTC training example more general | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
#28165 introduced a new W2V2-based model that uses a different feature extractor than classic CTC-based models.
In particular, it takes mel-spectrograms as `input_features`, instead of raw waveform as `input_values`.
This runs well with the [example from the README](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#single-gpu-ctc), as well as with the newly introduced model. Happy to try some other configurations as well.
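Roughly, the script now keys the extracted features on the extractor's declared input name instead of hard-coding `input_values` (a sketch using the script's existing `feature_extractor` and `tokenizer`; the actual diff may differ):
```python
input_name = feature_extractor.model_input_names[0]  # "input_values" or "input_features"

def prepare_dataset(batch):
    sample = batch["audio"]
    extracted = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
    batch[input_name] = extracted[input_name][0]
    batch["input_length"] = len(sample["array"])
    batch["labels"] = tokenizer(batch["target_text"]).input_ids
    return batch
```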
cc @patrickvonplaten and @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28582",
"html_url": "https://github.com/huggingface/transformers/pull/28582",
"diff_url": "https://github.com/huggingface/transformers/pull/28582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28582.patch",
"merged_at": 1705597309000
} |
https://api.github.com/repos/huggingface/transformers/issues/28581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28581/comments | https://api.github.com/repos/huggingface/transformers/issues/28581/events | https://github.com/huggingface/transformers/pull/28581 | 2,088,324,666 | PR_kwDOCUB6oc5kbUgz | 28,581 | Fix phi model doc checkpoint | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28581). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Small fix c.f. https://github.com/huggingface/transformers/commit/d93ef7d7512e79612606f29e6ae308920f0a86cd#r137345713
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28581",
"html_url": "https://github.com/huggingface/transformers/pull/28581",
"diff_url": "https://github.com/huggingface/transformers/pull/28581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28581.patch",
"merged_at": 1705943707000
} |
https://api.github.com/repos/huggingface/transformers/issues/28580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28580/comments | https://api.github.com/repos/huggingface/transformers/issues/28580/events | https://github.com/huggingface/transformers/issues/28580 | 2,088,302,103 | I_kwDOCUB6oc58ePYX | 28,580 | This model has one file that has been marked as unsafe. [training_args.bin] | {
"login": "rizwan-ai",
"id": 34979598,
"node_id": "MDQ6VXNlcjM0OTc5NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/34979598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rizwan-ai",
"html_url": "https://github.com/rizwan-ai",
"followers_url": "https://api.github.com/users/rizwan-ai/followers",
"following_url": "https://api.github.com/users/rizwan-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/rizwan-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rizwan-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rizwan-ai/subscriptions",
"organizations_url": "https://api.github.com/users/rizwan-ai/orgs",
"repos_url": "https://api.github.com/users/rizwan-ai/repos",
"events_url": "https://api.github.com/users/rizwan-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/rizwan-ai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @rizwan-ai \r\nIf you uploaded the files by yourself, in theory it should be safe, however we advise you to double check things before running anything. Please refer to this documentation page: https://huggingface.co/docs/hub/security-pickle for more details",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
**Framework versions**
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
This model has one file that has been marked as unsafe.
[training_args.bin](https://huggingface.co/rizwan-ai/distilbert-base-uncased-finetuned-emotion/blob/main/training_args.bin)
### Git LFS Details
* **SHA256:** d672df2806e4b013fbfdf9d995526b2c4e4a7d56a8b84b77b1d6213241ea11f0
* **Pointer size:** 129 Bytes
* **Size of remote file:** 4.73 kB
#### Detected Pickle imports (9)
* "transformers.training_args.TrainingArguments",
* "transformers.training_args.OptimizerNames",
* "transformers.trainer_utils.SchedulerType",
* "accelerate.state.PartialState",
* "torch.device",
* "transformers.trainer_utils.HubStrategy",
* "accelerate.utils.dataclasses.DistributedType",
* "__builtin__.getattr",
* "transformers.trainer_utils.IntervalStrategy"
@ArthurZucker @younesbelkada @pc
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://huggingface.co/rizwan-ai/distilbert-base-uncased-finetuned-emotion
### Expected behavior
How to fix it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28580/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28579/comments | https://api.github.com/repos/huggingface/transformers/issues/28579/events | https://github.com/huggingface/transformers/pull/28579 | 2,088,191,321 | PR_kwDOCUB6oc5ka3IF | 28,579 | Fix: `generate()` with `max_new_tokens=0` produces a single token. | {
"login": "danielkorat",
"id": 32893314,
"node_id": "MDQ6VXNlcjMyODkzMzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/32893314?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielkorat",
"html_url": "https://github.com/danielkorat",
"followers_url": "https://api.github.com/users/danielkorat/followers",
"following_url": "https://api.github.com/users/danielkorat/following{/other_user}",
"gists_url": "https://api.github.com/users/danielkorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielkorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielkorat/subscriptions",
"organizations_url": "https://api.github.com/users/danielkorat/orgs",
"repos_url": "https://api.github.com/users/danielkorat/repos",
"events_url": "https://api.github.com/users/danielkorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielkorat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @danielkorat 👋 \r\n\r\nIn lower-level APIs like `generate`, we should strive to be more strict if some parameterization is causing problems to avoid creating unexpected patterns. In this case, generating with 0 tokens should not be possible, as there is no generation involved. \r\n\r\nAs such, I'd suggest increasing the severity of the [associated warning](https://github.com/huggingface/transformers/blob/b2748a6efd045dd771f8fd48e8b309cbc061c618/src/transformers/generation/utils.py#L1134) to an exception instead of the changes proposed here :)",
"> Hi @danielkorat 👋\r\n> \r\n> In lower-level APIs like `generate`, we should strive to be more strict if some parameterization is causing problems to avoid creating unexpected patterns. In this case, generating with 0 tokens should not be possible, as there is no generation involved.\r\n> \r\n> As such, I'd suggest increasing the severity of the [associated warning](https://github.com/huggingface/transformers/blob/b2748a6efd045dd771f8fd48e8b309cbc061c618/src/transformers/generation/utils.py#L1134) to an exception instead of the changes proposed here :)\r\n\r\nHi @gante, I made the requested change in a new PR: [28621](https://github.com/huggingface/transformers/pull/28621)."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Currently, setting `max_new_tokens=0` produces 1 token instead of 0, and the warning is unclear.
For example, for the following code:
```python
checkpoint = "bigcode/tiny_starcoder_py"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
max_new_tokens = 0
outputs = model.generate(**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=max_new_tokens)
input_length = len(inputs['input_ids'][0])
output_length = len(outputs[0])
print(f"\nTest:{input_length - output_length == max_new_tokens}")
```
The output is:
```bash
utils.py:1134: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Test: False
```
After the fix, this is the output:
```bash
`max_new_tokens`=0, no tokens will be generated.
utils.py:1134: UserWarning: Input length of input_ids is 7, but `max_length` is set to 7. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
warnings.warn(
Test: True
```
(Note the new warning).
Currently fixed only for `greedy_search()`. Once this PR is reviewed, I'll add the fix to all other generation modes.
@gante @amyeroberts
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28579/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28579",
"html_url": "https://github.com/huggingface/transformers/pull/28579",
"diff_url": "https://github.com/huggingface/transformers/pull/28579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28579.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28578/comments | https://api.github.com/repos/huggingface/transformers/issues/28578/events | https://github.com/huggingface/transformers/pull/28578 | 2,088,073,432 | PR_kwDOCUB6oc5kadRR | 28,578 | [SigLIP] Don't pad by default | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28569
Note: will require an update of the code snippets of the model cards + my demo notebook | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28578",
"html_url": "https://github.com/huggingface/transformers/pull/28578",
"diff_url": "https://github.com/huggingface/transformers/pull/28578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28578.patch",
"merged_at": 1705667400000
} |
https://api.github.com/repos/huggingface/transformers/issues/28577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28577/comments | https://api.github.com/repos/huggingface/transformers/issues/28577/events | https://github.com/huggingface/transformers/issues/28577 | 2,088,057,678 | I_kwDOCUB6oc58dTtO | 28,577 | Inconsistent behavior between tokenizer and fast tokenizer | {
"login": "xuzhenqi",
"id": 3806642,
"node_id": "MDQ6VXNlcjM4MDY2NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3806642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuzhenqi",
"html_url": "https://github.com/xuzhenqi",
"followers_url": "https://api.github.com/users/xuzhenqi/followers",
"following_url": "https://api.github.com/users/xuzhenqi/following{/other_user}",
"gists_url": "https://api.github.com/users/xuzhenqi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuzhenqi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuzhenqi/subscriptions",
"organizations_url": "https://api.github.com/users/xuzhenqi/orgs",
"repos_url": "https://api.github.com/users/xuzhenqi/repos",
"events_url": "https://api.github.com/users/xuzhenqi/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuzhenqi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Thanks for reporting! This is pretty much a known bug but will be fixed by the likes of #26678 (when propagated to Llama)"
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-4.18.0-193.6.3.el8_2.v1.4.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
``` python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", trust_remote_code=True, use_fast=False)
fast_tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf", trust_remote_code=True, use_fast=True)
prompt = "▁<PRE>//"
inputs = tokenizer(prompt, return_tensors="pt")
print(f"tokenizer ids: {inputs.input_ids}")
inputs = fast_tokenizer(prompt, return_tensors="pt")
print(f"fast tokenizer ids: {inputs.input_ids}")
```
This scripts will output:
```
tokenizer ids: tensor([[ 1, 32007, 458]])
fast tokenizer ids: tensor([[ 1, 32007, 849]])
```
In the `tokenizer.json` from the model folder, we can see:
```
"//": 458,
"▁//": 849,
```
The fast tokenizer probably ignores the `<PRE>` token; is this the correct behavior?
### Expected behavior
The fast tokenizer should be consistent with the slow tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28577/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28576/comments | https://api.github.com/repos/huggingface/transformers/issues/28576/events | https://github.com/huggingface/transformers/issues/28576 | 2,088,057,073 | I_kwDOCUB6oc58dTjx | 28,576 | Feature Request: Expose an Args to Set Prefetch Factor in Trainer | {
"login": "uygnef",
"id": 13539441,
"node_id": "MDQ6VXNlcjEzNTM5NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/13539441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uygnef",
"html_url": "https://github.com/uygnef",
"followers_url": "https://api.github.com/users/uygnef/followers",
"following_url": "https://api.github.com/users/uygnef/following{/other_user}",
"gists_url": "https://api.github.com/users/uygnef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uygnef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uygnef/subscriptions",
"organizations_url": "https://api.github.com/users/uygnef/orgs",
"repos_url": "https://api.github.com/users/uygnef/repos",
"events_url": "https://api.github.com/users/uygnef/events{/privacy}",
"received_events_url": "https://api.github.com/users/uygnef/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @uygnef, thanks for raising an issue!\r\n\r\nThere is a related PR open: #28498 cc @muellerzr @pacman100 ",
"> Hi @uygnef, thanks for raising an issue!\r\n> \r\n> There is a related PR open: #28498 cc @muellerzr @pacman100\r\n\r\noh, good job!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### Feature request
Currently, the trainer does not allow users to set the prefetch factor as an argument from the training script.
### Motivation
This can be a limitation when training large models, especially when the data is fetched from a remote server, as loading can become a bottleneck while the next data partition is still being downloaded.
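For context, this maps to PyTorch's `DataLoader(prefetch_factor=...)` argument, which only takes effect when `num_workers > 0`; a minimal illustration:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 8))
# Each of the 4 workers keeps 4 batches pre-loaded ahead of the training loop,
# hiding slow reads/downloads behind compute. Requires num_workers > 0.
loader = DataLoader(dataset, batch_size=32, num_workers=4, prefetch_factor=4)
```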
### Your contribution
I can commit a PR | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28576/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28575/comments | https://api.github.com/repos/huggingface/transformers/issues/28575/events | https://github.com/huggingface/transformers/pull/28575 | 2,088,049,724 | PR_kwDOCUB6oc5kaYGD | 28,575 | Use `LoggingLevel` context manager in 3 tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't run CI multiple times with this PR - but quite confident this would fix the current situation.",
"I forgot to remove `is_flaky`. Will do before merge 😅 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28575). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
To avoid flaky test failures caused by transformers' root logger level being changed by some other (not yet identified) tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28575/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28575",
"html_url": "https://github.com/huggingface/transformers/pull/28575",
"diff_url": "https://github.com/huggingface/transformers/pull/28575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28575.patch",
"merged_at": 1705585285000
} |
https://api.github.com/repos/huggingface/transformers/issues/28574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28574/comments | https://api.github.com/repos/huggingface/transformers/issues/28574/events | https://github.com/huggingface/transformers/pull/28574 | 2,088,021,115 | PR_kwDOCUB6oc5kaR5J | 28,574 | chore: Fix multiple typos | {
"login": "hugo-syn",
"id": 61210734,
"node_id": "MDQ6VXNlcjYxMjEwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/61210734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugo-syn",
"html_url": "https://github.com/hugo-syn",
"followers_url": "https://api.github.com/users/hugo-syn/followers",
"following_url": "https://api.github.com/users/hugo-syn/following{/other_user}",
"gists_url": "https://api.github.com/users/hugo-syn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugo-syn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugo-syn/subscriptions",
"organizations_url": "https://api.github.com/users/hugo-syn/orgs",
"repos_url": "https://api.github.com/users/hugo-syn/repos",
"events_url": "https://api.github.com/users/hugo-syn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugo-syn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fix multiple typos
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28574",
"html_url": "https://github.com/huggingface/transformers/pull/28574",
"diff_url": "https://github.com/huggingface/transformers/pull/28574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28574.patch",
"merged_at": 1705584909000
} |
https://api.github.com/repos/huggingface/transformers/issues/28573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28573/comments | https://api.github.com/repos/huggingface/transformers/issues/28573/events | https://github.com/huggingface/transformers/issues/28573 | 2,087,647,786 | I_kwDOCUB6oc58bvoq | 28,573 | data_collator in examples/pytorch/language-modeling/run_clm.py | {
"login": "pierowu",
"id": 61963313,
"node_id": "MDQ6VXNlcjYxOTYzMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/61963313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pierowu",
"html_url": "https://github.com/pierowu",
"followers_url": "https://api.github.com/users/pierowu/followers",
"following_url": "https://api.github.com/users/pierowu/following{/other_user}",
"gists_url": "https://api.github.com/users/pierowu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pierowu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pierowu/subscriptions",
"organizations_url": "https://api.github.com/users/pierowu/orgs",
"repos_url": "https://api.github.com/users/pierowu/repos",
"events_url": "https://api.github.com/users/pierowu/events{/privacy}",
"received_events_url": "https://api.github.com/users/pierowu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!"
] | 1,705 | 1,706 | 1,706 | NONE | null | ### System Info
According to the script, the trainer uses `default_data_collator` for causal language modelling.
https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/examples/pytorch/language-modeling/run_clm.py#L604
Shouldn't we use `DataCollatorForLanguageModeling` to shift input and output by 1 token instead? It seems that `default_data_collator` can't achieve this goal.
@ArthurZucker @younesbelkada @ArthurZucker @muellerzr and @pacman100
Thank you for answering!
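For context, a minimal sketch (not the exact library code) of the shift in question: `run_clm.py` sets `labels` to a copy of `input_ids` during preprocessing, and the causal-LM heads shift logits/labels internally when computing the loss, roughly like this:
```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size); labels: (batch, seq_len), a plain copy of input_ids
    shift_logits = logits[..., :-1, :].contiguous()  # predictions for positions 0 .. n-2
    shift_labels = labels[..., 1:].contiguous()      # targets are the *next* tokens
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )
```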
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Same as examples/pytorch/language-modeling/run_clm.py
### Expected behavior
Correctly train a causal language model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28572/comments | https://api.github.com/repos/huggingface/transformers/issues/28572/events | https://github.com/huggingface/transformers/pull/28572 | 2,087,586,420 | PR_kwDOCUB6oc5kYy8L | 28,572 | add YaRN RoPE scaling code for LLaMA | {
"login": "jquesnelle",
"id": 687076,
"node_id": "MDQ6VXNlcjY4NzA3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/687076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jquesnelle",
"html_url": "https://github.com/jquesnelle",
"followers_url": "https://api.github.com/users/jquesnelle/followers",
"following_url": "https://api.github.com/users/jquesnelle/following{/other_user}",
"gists_url": "https://api.github.com/users/jquesnelle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jquesnelle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jquesnelle/subscriptions",
"organizations_url": "https://api.github.com/users/jquesnelle/orgs",
"repos_url": "https://api.github.com/users/jquesnelle/repos",
"events_url": "https://api.github.com/users/jquesnelle/events{/privacy}",
"received_events_url": "https://api.github.com/users/jquesnelle/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | # What does this PR do?
This adds the [YaRN RoPE scaling method](https://arxiv.org/abs/2309.00071) to the LLaMA-class of models. It can be activated for finetuned models by setting `rope_scaling.type = 'yarn'` or for non-finetuned models by setting `rope_scaling.type = 'dynamic-yarn'`.
This PR enables the LLaMA family of models (LLaMA, Mistral, SOLAR, etc.) to use YaRN without `trust_remote_code=True`.
While we've [released several models](https://github.com/jquesnelle/yarn) that use `trust_remote_code`, it's nicer to not have to execute arbitrary code 🙂
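A hedged usage sketch of what this enables (the checkpoint name and scaling factor are illustrative placeholders; the exact accepted keys follow this PR rather than the released library):
```python
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "meta-llama/Llama-2-7b-hf"  # any LLaMA-family checkpoint
config = AutoConfig.from_pretrained(checkpoint)
# "yarn" for finetuned models, "dynamic-yarn" for non-finetuned ones (per this PR)
config.rope_scaling = {"type": "yarn", "factor": 16.0}
model = AutoModelForCausalLM.from_pretrained(checkpoint, config=config)
```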
cc: @gante | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28572/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28572/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28572",
"html_url": "https://github.com/huggingface/transformers/pull/28572",
"diff_url": "https://github.com/huggingface/transformers/pull/28572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28572.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28570/comments | https://api.github.com/repos/huggingface/transformers/issues/28570/events | https://github.com/huggingface/transformers/pull/28570 | 2,087,361,488 | PR_kwDOCUB6oc5kYBkM | 28,570 | [`Llava`] Fix convert_llava_weights_to_hf.py script | {
"login": "isaac-vidas",
"id": 80056737,
"node_id": "MDQ6VXNlcjgwMDU2NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/80056737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-vidas",
"html_url": "https://github.com/isaac-vidas",
"followers_url": "https://api.github.com/users/isaac-vidas/followers",
"following_url": "https://api.github.com/users/isaac-vidas/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-vidas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-vidas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-vidas/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-vidas/orgs",
"repos_url": "https://api.github.com/users/isaac-vidas/repos",
"events_url": "https://api.github.com/users/isaac-vidas/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-vidas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@isaac-vidas would you be happy to address the same fix for Vip-Llava? https://github.com/huggingface/transformers/blob/main/src/transformers/models/vipllava/convert_vipllava_weights_to_hf.py#L61",
"Thanks for the review @younesbelkada!\r\nI added the fix you mentioned, if you can review again please 😄 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28570). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | Fix call to `tokenizer.add_tokens` in `convert_llava_weights_to_hf.py` and `convert_vipllava_weights_to_hf.py` scripts.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@younesbelkada if you can review
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28570",
"html_url": "https://github.com/huggingface/transformers/pull/28570",
"diff_url": "https://github.com/huggingface/transformers/pull/28570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28570.patch",
"merged_at": 1705667485000
} |
https://api.github.com/repos/huggingface/transformers/issues/28569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28569/comments | https://api.github.com/repos/huggingface/transformers/issues/28569/events | https://github.com/huggingface/transformers/issues/28569 | 2,087,314,475 | I_kwDOCUB6oc58aeQr | 28,569 | Clarify usage / implementation of padding for SigLIP model processor | {
"login": "skysyk",
"id": 3191242,
"node_id": "MDQ6VXNlcjMxOTEyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3191242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skysyk",
"html_url": "https://github.com/skysyk",
"followers_url": "https://api.github.com/users/skysyk/followers",
"following_url": "https://api.github.com/users/skysyk/following{/other_user}",
"gists_url": "https://api.github.com/users/skysyk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skysyk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skysyk/subscriptions",
"organizations_url": "https://api.github.com/users/skysyk/orgs",
"repos_url": "https://api.github.com/users/skysyk/repos",
"events_url": "https://api.github.com/users/skysyk/events{/privacy}",
"received_events_url": "https://api.github.com/users/skysyk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good point. cc @ArthurZucker - I added this to make sure people use `max_length` as the pre-trained models use this.\r\n\r\nI guess it would be better to explicitly pass `padding=\"max_length\"` so that this is more explicit and users have the choice.",
"Thank you!"
] | 1,705 | 1,705 | 1,705 | NONE | null | ### Feature request
Change the implementation of `SiglipProcessor` to use the global default behavior for `padding` of `False` or update the documentation to indicate the usage is different and defaults to `'max_length'` if the `padding` argument is not provided.
### Motivation
In the HF documentation for padding (both the docs as well as the function comments for the processor class), the default behavior (argument) is described to be `False` or `'do_not_pad'`. For the `SiglipProcessor`, `max_length` is the default behavior implemented in [code](https://github.com/huggingface/transformers/blob/98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1/src/transformers/models/siglip/processing_siglip.py#L53) while the [example in the docs](https://huggingface.co/docs/transformers/main/en/model_doc/siglip#using-the-model-yourself) omits the padding argument. This is at odds with the overall documentation as well as behavior / usage examples provided in similar models such as CLIP (where explicitly `padding=True` in the usage example) and could give the wrong impression upon first glance that padding is not used for SigLIP.
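For illustration (checkpoint chosen only as an example), passing the strategy explicitly removes the ambiguity and matches how the pretrained checkpoints were trained:
```python
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Explicit padding="max_length" makes the processor's current implicit default visible to the reader
inputs = processor(text=["a photo of 2 cats"], images=image, padding="max_length", return_tensors="pt")
```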
### Your contribution
Opening this issue to discuss a clarification / improvement. I can help implement the preferred solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28568/comments | https://api.github.com/repos/huggingface/transformers/issues/28568/events | https://github.com/huggingface/transformers/issues/28568 | 2,087,171,304 | I_kwDOCUB6oc58Z7To | 28,568 | Optimised 4bit inference kernels | {
"login": "nivibilla",
"id": 26687662,
"node_id": "MDQ6VXNlcjI2Njg3NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/26687662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nivibilla",
"html_url": "https://github.com/nivibilla",
"followers_url": "https://api.github.com/users/nivibilla/followers",
"following_url": "https://api.github.com/users/nivibilla/following{/other_user}",
"gists_url": "https://api.github.com/users/nivibilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nivibilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nivibilla/subscriptions",
"organizations_url": "https://api.github.com/users/nivibilla/orgs",
"repos_url": "https://api.github.com/users/nivibilla/repos",
"events_url": "https://api.github.com/users/nivibilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/nivibilla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada @SunMarc ",
"Thanks !\r\n@efrantar can confirm but looking at the code it looks like you essentially just need to replace all Linear layers with `marlin.Linear` ? (not 100% sure) if the interface is simple we can definitely add that support I think by passing `MarlinConfig` through `quantization_config` in `from_pretrained`.\r\n\r\nWe also do have HQQ in the backlog https://github.com/huggingface/transformers/issues/28328 but we are waiting to finalize https://github.com/huggingface/transformers/pull/26610 from @poedator before adding any new quantization scheme\r\n\r\ncc @Titus-von-Koeller just FYI",
"@qwopqwop200 seems to be working on adding marlin to AutoGPTQ. If it is merged, we will also have support with transformers quite easily. https://github.com/qwopqwop200/AutoGPTQ-add-marlin",
"Yes, replacing the layers is pretty much it. It might also be possible to write a (not too complex) kernel to convert a GPTQ format model (groupsize 128, sym, no act-order; or any other quantization method that produces such models) to Marlin format on-the-fly (when loading the model) in reasonable time, which could be useful to have only a single storage format. However, I am not sure how many of the current GPTQ models on the hub already use the required settings for Marlin.",
"Thank you very much @efrantar for the precision! We will update you as soon as we merge #26610 "
] | 1,705 | 1,705 | null | NONE | null | ### Feature request
Integration of new 4bit kernels
https://github.com/IST-DASLab/marlin
### Motivation
Provide faster inference than AWQ/ExLlama for batch sizes up to 32.
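As noted in the discussion above, wiring such kernels in essentially amounts to swapping `nn.Linear` modules for the kernel's quantized layer. A purely illustrative, library-agnostic sketch — `make_quant_linear` is a hypothetical factory standing in for the real `marlin` layer constructor:
```python
import torch.nn as nn

def swap_linear_layers(module: nn.Module, make_quant_linear) -> None:
    """Recursively replace every nn.Linear child with a quantized equivalent."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, make_quant_linear(child))
        else:
            swap_linear_layers(child, make_quant_linear)
```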
### Your contribution
Just saw this today, can try provide sample notebook. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28568/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28567/comments | https://api.github.com/repos/huggingface/transformers/issues/28567/events | https://github.com/huggingface/transformers/pull/28567 | 2,086,938,368 | PR_kwDOCUB6oc5kWlHG | 28,567 | Fix the documentation checkpoint for xlm-roberta-xl | {
"login": "jeremyfowers",
"id": 80718789,
"node_id": "MDQ6VXNlcjgwNzE4Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/80718789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremyfowers",
"html_url": "https://github.com/jeremyfowers",
"followers_url": "https://api.github.com/users/jeremyfowers/followers",
"following_url": "https://api.github.com/users/jeremyfowers/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremyfowers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremyfowers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremyfowers/subscriptions",
"organizations_url": "https://api.github.com/users/jeremyfowers/orgs",
"repos_url": "https://api.github.com/users/jeremyfowers/repos",
"events_url": "https://api.github.com/users/jeremyfowers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremyfowers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"thanks for fixing!"
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
This is a small PR that corrects the name of the `xlm-roberta-xl` checkpoint in the `transformers` documentation.
I also noticed that the docstrings were referring to the model as either `XLM-RoBERTa-xlarge` or `XLM-Roberta-xlarge` and I corrected all of those instances to `XLM-RoBERTa-XL`.
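For reference, a minimal check that the corrected hub id resolves (masked-LM head shown; the same id is used across the updated docstrings):
```python
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")
```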
<!-- Remove if not applicable -->
Fixes #28562
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@julien-c @stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28567/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28567",
"html_url": "https://github.com/huggingface/transformers/pull/28567",
"diff_url": "https://github.com/huggingface/transformers/pull/28567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28567.patch",
"merged_at": 1705585670000
} |
https://api.github.com/repos/huggingface/transformers/issues/28566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28566/comments | https://api.github.com/repos/huggingface/transformers/issues/28566/events | https://github.com/huggingface/transformers/pull/28566 | 2,086,880,853 | PR_kwDOCUB6oc5kWYmi | 28,566 | fix: suppress `GatedRepoError` to use cache file (fix #28558). | {
"login": "scruel",
"id": 16933298,
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scruel",
"html_url": "https://github.com/scruel",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"repos_url": "https://api.github.com/users/scruel/repos",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yeah, I think we should also have the same processing logic for `RepositoryNotFoundError` exception, should I commit it in this PR?\r\nI would also like to merge three private parameters into one parameter called `optional_file`, but consider I also not quite familiar with this repo, I decided not to do so, even IMO such change will be just fine.\r\nAny feedback will be welcome!",
"Hi.\r\n\r\n- confirmed this PR works with the example case (which fails without this PR) - i.e. for gated repositories\r\n- and indeed, so far this PR won't work with private repositories (`RepositoryNotFoundError`)\r\n- I was originally concerned that we will miss the precise information about a file couldn't be found on the Hub.\r\n - However, this PR only adds `_raise_exceptions_for_gated_repo` to the places where we already have `_raise_exceptions_for_xxx`, so this PR doesn't add extra concern I mentioned above (and if there is anything we can improve, it could be done in a separate PR by us)\r\n\r\nSo it's ✅ for me.\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28566). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,705 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
A repo may be missing some optional files, and we have `_raise_exceptions_for_missing_entries=False` to suppress such errors. However, for a gated repo we can't know whether a file exists without passing the `token` parameter (or env variable), so even if we have already fully downloaded the repo, we still won't be able to use it.
For optional files we suppress exceptions, and for required ones we still raise errors; this PR keeps that behavior the same when a `GatedRepoError` occurs.
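A hedged sketch of the intended behavior (names simplified; the real logic lives in the hub file-caching helper):
```python
from huggingface_hub.utils import GatedRepoError

def fetch_optional_file(fetch, raise_exceptions_for_gated_repo: bool = True):
    try:
        return fetch()
    except GatedRepoError:
        if raise_exceptions_for_gated_repo:
            raise
        return None  # optional file: fall back to "not available" instead of failing outright
```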
Fixes #28558
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @ArthurZucker @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28566",
"html_url": "https://github.com/huggingface/transformers/pull/28566",
"diff_url": "https://github.com/huggingface/transformers/pull/28566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28566.patch",
"merged_at": 1706286309000
} |
https://api.github.com/repos/huggingface/transformers/issues/28565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28565/comments | https://api.github.com/repos/huggingface/transformers/issues/28565/events | https://github.com/huggingface/transformers/issues/28565 | 2,086,814,877 | I_kwDOCUB6oc58YkSd | 28,565 | Disabling adapters is not removing the adapter from active adapters | {
"login": "balachandra",
"id": 1454090,
"node_id": "MDQ6VXNlcjE0NTQwOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1454090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/balachandra",
"html_url": "https://github.com/balachandra",
"followers_url": "https://api.github.com/users/balachandra/followers",
"following_url": "https://api.github.com/users/balachandra/following{/other_user}",
"gists_url": "https://api.github.com/users/balachandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/balachandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balachandra/subscriptions",
"organizations_url": "https://api.github.com/users/balachandra/orgs",
"repos_url": "https://api.github.com/users/balachandra/repos",
"events_url": "https://api.github.com/users/balachandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/balachandra/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @balachandra, thanks for raising an issue! \r\n\r\nSo that we can be best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output\r\n* A minimal code snippet we can run to reproduce the error \r\n* All relevant details about the error including the full error traceback",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
Using an AWS SageMaker notebook with conda_pytorch_p310 as the environment.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Add an adapter to the model using `add_adapter`
2. Disable all adapters using `disable_adapters`
3. List active adapters using `active_adapters`
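In code, the three steps above look roughly like this (hedged sketch; small checkpoint picked arbitrarily, requires `peft` to be installed):
```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model.add_adapter(LoraConfig(), adapter_name="my_adapter")
model.disable_adapters()
print(model.active_adapters())  # expected to be empty once disabled, but the adapter is still listed
```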
### Expected behavior
Ideally, when all the adapters are disabled, `active_adapters` should return an empty list. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28565/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28564/comments | https://api.github.com/repos/huggingface/transformers/issues/28564/events | https://github.com/huggingface/transformers/pull/28564 | 2,086,644,413 | PR_kwDOCUB6oc5kVk3x | 28,564 | Fix Switch Transformers When sparse_step = 1 | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
If sparse_step = 1, the current code will not work.
This is because anything % 1 always equals 0, even though there should be a sparse layer at each block.
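A small self-contained illustration of the arithmetic (layer indexing simplified; not the actual modeling code):
```python
num_layers, sparse_step = 12, 1

# Before: i % 1 is always 0, so the sparsity check never fires and no block gets an expert layer.
buggy = [(i % sparse_step == 1) for i in range(num_layers)]   # -> [False] * 12
# After: sparse_step == 1 is treated explicitly, so every block is sparse as intended.
fixed = [True if sparse_step == 1 else (i % sparse_step == 1) for i in range(num_layers)]  # -> [True] * 12
```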
Fixes # (issue)
I didn't open an issue. I just solved the problem with this PR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker, @younesbelkada, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28564",
"html_url": "https://github.com/huggingface/transformers/pull/28564",
"diff_url": "https://github.com/huggingface/transformers/pull/28564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28564.patch",
"merged_at": 1705526781000
} |
https://api.github.com/repos/huggingface/transformers/issues/28563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28563/comments | https://api.github.com/repos/huggingface/transformers/issues/28563/events | https://github.com/huggingface/transformers/pull/28563 | 2,086,638,401 | PR_kwDOCUB6oc5kVjjV | 28,563 | [Whisper] Fix audio classification with weighted layer sum | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Fixes #28002: `WhisperForAudioClassification` is corrected for compatibility with `use_weighted_layer_sum=True`. Implements 2 tests to confirm correctness. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28563/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28563",
"html_url": "https://github.com/huggingface/transformers/pull/28563",
"diff_url": "https://github.com/huggingface/transformers/pull/28563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28563.patch",
"merged_at": 1705596104000
} |
https://api.github.com/repos/huggingface/transformers/issues/28562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28562/comments | https://api.github.com/repos/huggingface/transformers/issues/28562/events | https://github.com/huggingface/transformers/issues/28562 | 2,086,612,194 | I_kwDOCUB6oc58Xyzi | 28,562 | The examples for xlm-roberta-xl reference a model that doesn't exist | {
"login": "jeremyfowers",
"id": 80718789,
"node_id": "MDQ6VXNlcjgwNzE4Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/80718789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeremyfowers",
"html_url": "https://github.com/jeremyfowers",
"followers_url": "https://api.github.com/users/jeremyfowers/followers",
"following_url": "https://api.github.com/users/jeremyfowers/following{/other_user}",
"gists_url": "https://api.github.com/users/jeremyfowers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeremyfowers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeremyfowers/subscriptions",
"organizations_url": "https://api.github.com/users/jeremyfowers/orgs",
"repos_url": "https://api.github.com/users/jeremyfowers/repos",
"events_url": "https://api.github.com/users/jeremyfowers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeremyfowers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice catch – would you be able to open a PR to fix the model reference?",
"> Nice catch – would you be able to open a PR to fix the model reference?\r\n\r\nyes @julien-c, if you're happy with the new model reference then I would be happy to make the PR."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.18
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example code for `XLMRobertaXLForMaskedLM` verbatim in python: https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLForMaskedLM.forward.example
Example code pasted here for convenience:
```
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-xlarge")
model = XLMRobertaXLForMaskedLM.from_pretrained("xlm-roberta-xlarge")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
```
This results in:
```
OSError: xlm-roberta-xlarge is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
### Expected behavior
This should find and run the model. However, it does not. Replacing the model string from `"xlm-roberta-xlarge"` to `"facebook/xlm-roberta-xl"` fixes the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28562/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28561/comments | https://api.github.com/repos/huggingface/transformers/issues/28561/events | https://github.com/huggingface/transformers/pull/28561 | 2,086,502,150 | PR_kwDOCUB6oc5kVF31 | 28,561 | Update image_processing_deformable_detr.py | {
"login": "sounakdey",
"id": 8640971,
"node_id": "MDQ6VXNlcjg2NDA5NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8640971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sounakdey",
"html_url": "https://github.com/sounakdey",
"followers_url": "https://api.github.com/users/sounakdey/followers",
"following_url": "https://api.github.com/users/sounakdey/following{/other_user}",
"gists_url": "https://api.github.com/users/sounakdey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sounakdey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sounakdey/subscriptions",
"organizations_url": "https://api.github.com/users/sounakdey/orgs",
"repos_url": "https://api.github.com/users/sounakdey/repos",
"events_url": "https://api.github.com/users/sounakdey/events{/privacy}",
"received_events_url": "https://api.github.com/users/sounakdey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reviewing it @amyeroberts, I have addressed your comment. This MR should be ready to be merged."
] | 1,705 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
This PR in `image_processing_deformable_detr.py` prevents calling `unbind` on `target_sizes` when it is `None`, similar to https://github.com/huggingface/transformers/blob/d6ffe74dfa577b5e7d12e48aa1c686ad8d3ef557/src/transformers/models/detr/image_processing_detr.py#L1606
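Roughly, the guarded pattern looks like this (simplified sketch of the post-processing step, not the exact source):
```python
from typing import Optional

import torch

def rescale_boxes(boxes: torch.Tensor, target_sizes: Optional[torch.Tensor]) -> torch.Tensor:
    # Only touch `target_sizes` when it is provided; calling .unbind() on None is what used to fail.
    if target_sizes is not None:
        img_h, img_w = target_sizes.unbind(1)
        scale = torch.stack([img_w, img_h, img_w, img_h], dim=1)
        boxes = boxes * scale[:, None, :]
    return boxes
```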
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28561/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28561",
"html_url": "https://github.com/huggingface/transformers/pull/28561",
"diff_url": "https://github.com/huggingface/transformers/pull/28561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28561.patch",
"merged_at": 1705936659000
} |
https://api.github.com/repos/huggingface/transformers/issues/28560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28560/comments | https://api.github.com/repos/huggingface/transformers/issues/28560/events | https://github.com/huggingface/transformers/issues/28560 | 2,086,424,006 | I_kwDOCUB6oc58XE3G | 28,560 | Cohere embed - Sagemaker deploy - Should have a `model_type` key in its config.json | {
"login": "pthd",
"id": 7238429,
"node_id": "MDQ6VXNlcjcyMzg0Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7238429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pthd",
"html_url": "https://github.com/pthd",
"followers_url": "https://api.github.com/users/pthd/followers",
"following_url": "https://api.github.com/users/pthd/following{/other_user}",
"gists_url": "https://api.github.com/users/pthd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pthd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pthd/subscriptions",
"organizations_url": "https://api.github.com/users/pthd/orgs",
"repos_url": "https://api.github.com/users/pthd/repos",
"events_url": "https://api.github.com/users/pthd/events{/privacy}",
"received_events_url": "https://api.github.com/users/pthd/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi @pthd, thanks for raising an issue! \r\n\r\nI'd suggest opening a discussion on the [model's hub page](https://huggingface.co/Cohere/Cohere-embed-multilingual-v3.0). As it looks like it might be related to the model itself and the owners of the repo will be able to address that. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,705 | 1,708 | null | NONE | null | ### System Info
- `transformers` version: 4.33.2
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.18.0
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@philschmid
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
deployed according to
https://huggingface.co/Cohere/Cohere-embed-multilingual-v3.0?sagemaker_deploy=true
W-Cohere__Cohere-embed-mult-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Unrecognized model in /.sagemaker/mms/models/Cohere__Cohere-embed-multilingual-v3.0. Should have a `model_type` key in its config.json, or contain one of the following strings in its name:
### Expected behavior
Deployment succeeds but invocation raises an error:
```
W-Cohere__Cohere-embed-mult-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Unrecognized model in /.sagemaker/mms/models/Cohere__Cohere-embed-multilingual-v3.0. Should have a `model_type` key in its config.json, or contain one of the following strings in its name:
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28560/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28559/comments | https://api.github.com/repos/huggingface/transformers/issues/28559/events | https://github.com/huggingface/transformers/pull/28559 | 2,086,294,196 | PR_kwDOCUB6oc5kUZCd | 28,559 | ClearMLCallback enhancements: support multiple runs and handle logging better | {
"login": "eugen-ajechiloae-clearml",
"id": 97950284,
"node_id": "U_kgDOBdaaTA",
"avatar_url": "https://avatars.githubusercontent.com/u/97950284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eugen-ajechiloae-clearml",
"html_url": "https://github.com/eugen-ajechiloae-clearml",
"followers_url": "https://api.github.com/users/eugen-ajechiloae-clearml/followers",
"following_url": "https://api.github.com/users/eugen-ajechiloae-clearml/following{/other_user}",
"gists_url": "https://api.github.com/users/eugen-ajechiloae-clearml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eugen-ajechiloae-clearml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eugen-ajechiloae-clearml/subscriptions",
"organizations_url": "https://api.github.com/users/eugen-ajechiloae-clearml/orgs",
"repos_url": "https://api.github.com/users/eugen-ajechiloae-clearml/repos",
"events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/events{/privacy}",
"received_events_url": "https://api.github.com/users/eugen-ajechiloae-clearml/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @LysandreJik! Can you or someone from your team please review this PR?",
"Hi @eugen-ajechiloae-clearml, apologies for the delay. Reviewing now!",
"@eugen-ajechiloae-clearml We had some recent issues with new package releases and compatibility on our circle CI pipeline. Fixes have been merged into `main` rebasing and pushing the changes should resolve the tests that are currently failing 🤗 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@amyeroberts I believe I synced the branch appropriately and `tests_exotic_models` still fails. Is there something I'm missing?",
"@eugen-ajechiloae-clearml I'm not sure what's happening to be honest. From the commit history, it all looks good. And I'm pretty sure these failures are independent of the changes in this PR. I'm going to see if this is popping up elsewhere. ",
"OK, seems like there's some outstanding issues with package installs. There's a PR open to fix: #28834. Once that's merged in, a final rebase should do it :) "
] | 1,705 | 1,707 | 1,707 | CONTRIBUTOR | null | Currently, when using ClearMLCallback, training multiple models in the same script might cause some of the model checkpoints and scalars logged to ClearML to be lost. This PR fixes these issues. Scalar visualization in ClearML has also been enhanced.
What the PR does:
1. We count the number of times `ClearMLCallback.setup` is called via class variables. When a second training run is created, we do one of the following: we create a new task that will be used to log all the models, metrics etc. OR we keep the same task and we suffix the metrics/checkpoint etc. with the setup number.
We keep the same task if the task was created externally or if we are running remotely. We create new tasks if `ClearMLCallback` is the one that created the first task (in this case we also close the task so we can create another one).
2. We now delete model checkpoints if `save_total_limit` is set and the limit has been exceeded.
3. We switched the title/series of the logged scalars for better visualization.
4. We now, by default, don't fetch the configurations/hparams from the backend when running remotely, as these can contain temp files or other variables related to the local environment. The user can still override this, though, by setting `_ignore_hparams_ui_overrides_` or `_ignore_model_config_ui_overrides_` to False in the UI or via scripts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28559/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28559",
"html_url": "https://github.com/huggingface/transformers/pull/28559",
"diff_url": "https://github.com/huggingface/transformers/pull/28559.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28559.patch",
"merged_at": 1707163457000
} |