| Column | Dtype | Stats |
|---|---|---|
| url | stringlengths | 62 / 66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76 / 80 |
| comments_url | stringlengths | 71 / 75 |
| events_url | stringlengths | 69 / 73 |
| html_url | stringlengths | 50 / 56 |
| id | int64 | 377M / 2.15B |
| node_id | stringlengths | 18 / 32 |
| number | int64 | 1 / 29.2k |
| title | stringlengths | 1 / 487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k / 1.71k |
| updated_at | int64 | 1.54k / 1.71k |
| closed_at | int64 | 1.54k / 1.71k (nullable βŒ€) |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0 / 234k (nullable βŒ€) |
| reactions | dict | |
| timeline_url | stringlengths | 71 / 75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/28760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28760/comments
https://api.github.com/repos/huggingface/transformers/issues/28760/events
https://github.com/huggingface/transformers/pull/28760
2,105,891,568
PR_kwDOCUB6oc5lWN1x
28,760
DeepSpeed: hardcode `torch.arange` dtype on `float` usage to avoid incorrect initialization
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28760). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker I've added a test case that shows how it was failing before πŸ‘ It only covers one model, though, as the test has to be tailored to each model.\r\n\r\nThe test will also be useful to show to users the importance of this ugly pattern :D " ]
1,706
1,706
1,706
MEMBER
null
# What does this PR do? Addresses #28685 -- check the issue (and related issues) for a full discussion. TL;DR: some frameworks, such as DeepSpeed, may patch the initialization of a tensor. For instance, a `float32` tensor may be initialized as `bfloat16` instead. This is particularly problematic when `torch.arange` is used as a non-integer: its initialized value may be a source of problems if not in the right type. This PR casts `torch.arange` to `int64` at initialization time, preventing the frameworks' float type conversion, and subsequently casts the tensor to the desired type.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28760/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28760", "html_url": "https://github.com/huggingface/transformers/pull/28760", "diff_url": "https://github.com/huggingface/transformers/pull/28760.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28760.patch", "merged_at": 1706711948000 }
https://api.github.com/repos/huggingface/transformers/issues/28759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28759/comments
https://api.github.com/repos/huggingface/transformers/issues/28759/events
https://github.com/huggingface/transformers/pull/28759
2,105,665,598
PR_kwDOCUB6oc5lVcep
28,759
fix num_assistant_tokens with heuristic schedule
{ "login": "jmamou", "id": 19263306, "node_id": "MDQ6VXNlcjE5MjYzMzA2", "avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmamou", "html_url": "https://github.com/jmamou", "followers_url": "https://api.github.com/users/jmamou/followers", "following_url": "https://api.github.com/users/jmamou/following{/other_user}", "gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmamou/subscriptions", "organizations_url": "https://api.github.com/users/jmamou/orgs", "repos_url": "https://api.github.com/users/jmamou/repos", "events_url": "https://api.github.com/users/jmamou/events{/privacy}", "received_events_url": "https://api.github.com/users/jmamou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks for adding this!\r\n> \r\n> For the code quality checks, you'll need to run `make fixup` and push the changes.\r\n> \r\n> From the docstring, it mentions behaviour about `\"heuristic_transient\" being reset, but I don't see logic relating to it in this diff. Does this already happen?\r\n\r\nI just fixed docstring", "@jmamou Let's try and make the CI green :) You'll need to resolve the quality checks by running `make fixup` and pushing the changes. \r\n\r\nFor the other failing tests, you'll need to try rebasing on main.", "@amyeroberts \r\ntests_torch failed. \r\nAre you familiar with the error?", "@jmamou Tbh, I'm not sure what the cause of these failures are. I would first suggest rebasing on main to make sure you have all of the most recent commits. This will trigger a re-run of the CI too.", "@amyeroberts \r\nsame test still fails πŸ‘Ž ", "@jmamou Our apologies, [this PR](https://github.com/huggingface/transformers/pull/29027) fixed it. Could you try rebasing again please? πŸ€— ", "@amyeroberts both CI failures seem unrelated to this PR, including `check_repository_consistency` πŸ‘€ do you have an idea of what might be causing it?", "@jmamou Apologies for all of the current issues you've been experiencing with unrelated failures on this PR. \r\n\r\nThe two current batches of failing tests should have been resolved with #29037, #29043\r\n\r\nCould you try one (final 🀞) rebase to get this CI green 🟒 ? ", "@amyeroberts \r\ntests passed on CI πŸ‘ ", "@jmamou Thanks for this contribution and your patience with our misbehaving CI! ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28759). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? We have defined two different `num_assistant_tokens_schedule` values: - `heuristic`: When all _speculative_ tokens are correct, increase `num_assistant_tokens` by 2, else reduce it by 1. The `num_assistant_tokens` value is persistent over multiple generation calls with the same assistant model. - `heuristic_transient`: Same as `heuristic`, but `num_assistant_tokens` is reset to its initial value after each generation call. Fixes # (issue) https://github.com/huggingface/transformers/pull/27979#issuecomment-1908153882 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @gante @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28759", "html_url": "https://github.com/huggingface/transformers/pull/28759", "diff_url": "https://github.com/huggingface/transformers/pull/28759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28759.patch", "merged_at": 1708083898000 }
https://api.github.com/repos/huggingface/transformers/issues/28758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28758/comments
https://api.github.com/repos/huggingface/transformers/issues/28758/events
https://github.com/huggingface/transformers/pull/28758
2,105,636,363
PR_kwDOCUB6oc5lVWBb
28,758
Pin pytest version <8.0.0
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts you are too fast to address my first comment, so I am not sure if you see my 2nd one\r\n\r\n\r\n\r\nsrc/transformers/dependency_versions_table.py need to be updated by running make deps_table_update\r\n", "> @amyeroberts you are too fast to address my first comment, so I am not sure if you see my 2nd one\r\n> \r\n> src/transformers/dependency_versions_table.py need to be updated by running make deps_table_update\r\n\r\nI didn't - but I've run and pushed the change now :) ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28758). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? pytest released a new major version, 8 two days ago: https://pypi.org/project/pytest/8.0.0/ This breaks doctest runs on CI e.g. https://app.circleci.com/pipelines/github/huggingface/transformers/83241/workflows/7ca5119f-b434-4c93-89fb-28378e63c449/jobs/1073188 Pinning until we make our doctests compatible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28758/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28758", "html_url": "https://github.com/huggingface/transformers/pull/28758", "diff_url": "https://github.com/huggingface/transformers/pull/28758.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28758.patch", "merged_at": 1706541734000 }
https://api.github.com/repos/huggingface/transformers/issues/28757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28757/comments
https://api.github.com/repos/huggingface/transformers/issues/28757/events
https://github.com/huggingface/transformers/pull/28757
2,105,570,960
PR_kwDOCUB6oc5lVHpd
28,757
Mark test_constrained_beam_search_generate as flaky
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28757). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Test occasionally fails on CI runs e.g. https://app.circleci.com/pipelines/github/huggingface/transformers/83241/workflows/6cb424b9-229b-412f-abfd-71cc6cfc7392/jobs/1073186/tests#failed-test-0 Marking as flaky to trigger retries to help prevent failing CI runs on unrelated PRs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28757/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28757", "html_url": "https://github.com/huggingface/transformers/pull/28757", "diff_url": "https://github.com/huggingface/transformers/pull/28757.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28757.patch", "merged_at": 1706541745000 }
https://api.github.com/repos/huggingface/transformers/issues/28756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28756/comments
https://api.github.com/repos/huggingface/transformers/issues/28756/events
https://github.com/huggingface/transformers/pull/28756
2,105,537,006
PR_kwDOCUB6oc5lVAJt
28,756
Workaround for #27758 to avoid ZeroDivisionError
{ "login": "tleyden", "id": 296876, "node_id": "MDQ6VXNlcjI5Njg3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/296876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tleyden", "html_url": "https://github.com/tleyden", "followers_url": "https://api.github.com/users/tleyden/followers", "following_url": "https://api.github.com/users/tleyden/following{/other_user}", "gists_url": "https://api.github.com/users/tleyden/gists{/gist_id}", "starred_url": "https://api.github.com/users/tleyden/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tleyden/subscriptions", "organizations_url": "https://api.github.com/users/tleyden/orgs", "repos_url": "https://api.github.com/users/tleyden/repos", "events_url": "https://api.github.com/users/tleyden/events{/privacy}", "received_events_url": "https://api.github.com/users/tleyden/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28756). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker sounds good, I'll look into adding a test. If you know of any existing tests that I should look at, LMK!", "this one seems relevant https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L835" ]
1,706
1,706
null
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> It can speed up devloops to test with very small datasets which end up being a single batch. However, that can trigger the error described in #27758. This PR works around it by changing the division by zero to division by a very small number. The loss metric will already be meaningless if `self.state.global_step == 0`. This PR won't change that, however it will prevent the unhelpful `ZeroDivisionError` I have not written any tests yet, but would be happy to if the reviewers agree with the overall approach. <!-- Remove if not applicable --> Fixes #27758 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28756", "html_url": "https://github.com/huggingface/transformers/pull/28756", "diff_url": "https://github.com/huggingface/transformers/pull/28756.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28756.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28755/comments
https://api.github.com/repos/huggingface/transformers/issues/28755/events
https://github.com/huggingface/transformers/pull/28755
2,105,467,429
PR_kwDOCUB6oc5lUwxp
28,755
Expose `offload_buffers` parameter of `accelerate` to `PreTrainedModel.from_pretrained` method
{ "login": "notsyncing", "id": 2649806, "node_id": "MDQ6VXNlcjI2NDk4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/2649806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/notsyncing", "html_url": "https://github.com/notsyncing", "followers_url": "https://api.github.com/users/notsyncing/followers", "following_url": "https://api.github.com/users/notsyncing/following{/other_user}", "gists_url": "https://api.github.com/users/notsyncing/gists{/gist_id}", "starred_url": "https://api.github.com/users/notsyncing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/notsyncing/subscriptions", "organizations_url": "https://api.github.com/users/notsyncing/orgs", "repos_url": "https://api.github.com/users/notsyncing/repos", "events_url": "https://api.github.com/users/notsyncing/events{/privacy}", "received_events_url": "https://api.github.com/users/notsyncing/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@muellerzr would you mind taking a look? thanks!" ]
1,706
1,707
null
NONE
null
# What does this PR do? This PR will expose the `offload_buffers` parameter of the `dispatch_model` method in `accelerate` to the `PreTrainedModel.from_pretrained`, then we can make the following code easier if we want to use this parameter: ```python config = AutoConfig.from_pretrained(model_id) with accelerate.init_empty_weights(): empty_model = AutoModelForCausalLM.from_config(config) device_map = accelerate.infer_auto_device_map(empty_model, max_memory={0: "4GB", "cpu": "128GB"}, dtype=torch.bfloat16, no_split_module_classes=["LlamaDecoderLayer"]) model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map) ``` now simplifies to ```python model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "4GB", "cpu": "128GB"}, dtype=torch.bfload16, offload_buffers=True) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28755/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28755", "html_url": "https://github.com/huggingface/transformers/pull/28755", "diff_url": "https://github.com/huggingface/transformers/pull/28755.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28755.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28754/comments
https://api.github.com/repos/huggingface/transformers/issues/28754/events
https://github.com/huggingface/transformers/pull/28754
2,105,365,550
PR_kwDOCUB6oc5lUaJH
28,754
Fix max_position_embeddings default value for llama2 to 4096 #28241
{ "login": "karl-hajjar", "id": 24575019, "node_id": "MDQ6VXNlcjI0NTc1MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/24575019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karl-hajjar", "html_url": "https://github.com/karl-hajjar", "followers_url": "https://api.github.com/users/karl-hajjar/followers", "following_url": "https://api.github.com/users/karl-hajjar/following{/other_user}", "gists_url": "https://api.github.com/users/karl-hajjar/gists{/gist_id}", "starred_url": "https://api.github.com/users/karl-hajjar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karl-hajjar/subscriptions", "organizations_url": "https://api.github.com/users/karl-hajjar/orgs", "repos_url": "https://api.github.com/users/karl-hajjar/repos", "events_url": "https://api.github.com/users/karl-hajjar/events{/privacy}", "received_events_url": "https://api.github.com/users/karl-hajjar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The code quality check fails, saying `src/transformers/models/llama/convert_llama_weights_to_hf.py` needs to be reformatted, but I'm not sure which part is problematic.", "`make style` should automatically reformat it! Also could the history is a bit messed up, recommending you to merge and force push! ", "Yes, sorry things got messed up when I tried to merge and I didn't pay attention. I'll fix the formatting with `make style` and push.", "@ArthurZucker, @amyeroberts, I think this PR does the same as [this other PR](https://github.com/huggingface/transformers/pull/28767) which was opened 13 hours ago. I think only one of them should be kept, I'll let you decide what is the way forward considering there are duplicate PRs.", "Usually the first opened is the one we try to merge so yours in this case πŸ˜‰ ", "Hi @amyeroberts, @ArthurZucker, the PR is still awaiting your review before it can be merged ! Is there anything else you would want me to add ? For example, should I also add 'code' as one of the versions for codeLLama ? ", "Hi @karl-hajjar. At the moment, it seems there are many files with diffs unrelated to this PR. From the commit history, it looks like you might have rebased and pushed, but not force pushed. When rebasing it's necessary to do `git push -f`, as it's essentially rewriting the history. ", "Yes I have rebased when it wasn't necessary unfortunately. If I force push on the next commit, do you think this should solve the issue ? ", "@karl-hajjar It shouldn't matter when you rebase - it's just going the put the commits in the PR on top of the head of the `main` branch. Yes, force pushing should resolve this. ", "Two tests are failing because of an error when importing `transformers.models.nat.modeling_nat` and `transformers.models.detr.modeling_detr`. I don't know why these fail considering the very minor changes of the previous commit ...", "The issue is still that we can't review the PR with such a huge diff πŸ˜“ once you rebase, force push to erase the history of the commits ", "@ArthurZucker do I understand correctly that you suggest I rebase again and then force push ? (I have forced push after the last commit [42744f2](https://github.com/huggingface/transformers/pull/28754/commits/42744f2c315850bd2b38f4791551476a288f8239) but I don't think it has erased the commit history)", "I would do this:\r\n- `git reset HEAD~0dfd39ebb317707c6e3186b08737337b6f34aa11`\r\n- `git add . `\r\n- `git commit -m\"force push\"`\r\n- `git push -f`\r\n", "Two tests are failing because of an error when importing `transformers.models.nat.modeling_nat` and `transformers.models.detr.modeling_detr`. I don't know why these fail considering the very minor changes of the last commit", "@amyeroberts @ArthurZucker two tests are failing but I'm not sure why. an error when importing `transformers.models.nat.modeling_nat` and `transformers.models.detr.modeling_detr`. I don't understand how the small changes I made to the llama files could impact those imports...", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28754). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
1,707
CONTRIBUTOR
null
This PR fixes issue #28241 related the config of llama which has max_position_embeddings=2048 by default since this was the case for llama1, but the newest version llama2 should have max_position_embeddings=4096 by default, hence the fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28754/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28754", "html_url": "https://github.com/huggingface/transformers/pull/28754", "diff_url": "https://github.com/huggingface/transformers/pull/28754.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28754.patch", "merged_at": 1707474241000 }
https://api.github.com/repos/huggingface/transformers/issues/28753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28753/comments
https://api.github.com/repos/huggingface/transformers/issues/28753/events
https://github.com/huggingface/transformers/issues/28753
2,105,355,793
I_kwDOCUB6oc59fS4R
28,753
Adding CrossMAE
{ "login": "johko", "id": 2843485, "node_id": "MDQ6VXNlcjI4NDM0ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2843485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johko", "html_url": "https://github.com/johko", "followers_url": "https://api.github.com/users/johko/followers", "following_url": "https://api.github.com/users/johko/following{/other_user}", "gists_url": "https://api.github.com/users/johko/gists{/gist_id}", "starred_url": "https://api.github.com/users/johko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johko/subscriptions", "organizations_url": "https://api.github.com/users/johko/orgs", "repos_url": "https://api.github.com/users/johko/repos", "events_url": "https://api.github.com/users/johko/events{/privacy}", "received_events_url": "https://api.github.com/users/johko/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Opened a PR which leverages [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) on the original repo: https://github.com/TonyLianLong/CrossMAE/pull/2" ]
1,706
1,706
null
CONTRIBUTOR
null
### Model description Hey, the recently released [CrossMAE](https://crossmae.github.io/) seems like it would be a nice addition to transformers. Basically the model improves MAE by using Cross-Attention instead of Self-Attention on the tokens and thereby decreasing the needed FLOPS quite significantly. At the same time it seems to be able to keep the performance of MAE or even improve it a bit. Maybe there are already plans of integrating it @NielsRogge ? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Project Page: https://crossmae.github.io/ GitHub Repo: https://github.com/TonyLianLong/CrossMAE Paper: https://arxiv.org/pdf/2401.14391.pdf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28753/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28752/comments
https://api.github.com/repos/huggingface/transformers/issues/28752/events
https://github.com/huggingface/transformers/issues/28752
2,104,370,311
I_kwDOCUB6oc59biSH
28,752
Seq2SeqTrainingArguments.__init__() got an unexpected keyword argument 'save_only_model'
{ "login": "nic-olo", "id": 89006260, "node_id": "MDQ6VXNlcjg5MDA2MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/89006260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nic-olo", "html_url": "https://github.com/nic-olo", "followers_url": "https://api.github.com/users/nic-olo/followers", "following_url": "https://api.github.com/users/nic-olo/following{/other_user}", "gists_url": "https://api.github.com/users/nic-olo/gists{/gist_id}", "starred_url": "https://api.github.com/users/nic-olo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nic-olo/subscriptions", "organizations_url": "https://api.github.com/users/nic-olo/orgs", "repos_url": "https://api.github.com/users/nic-olo/repos", "events_url": "https://api.github.com/users/nic-olo/events{/privacy}", "received_events_url": "https://api.github.com/users/nic-olo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
NONE
null
### System Info ![image](https://github.com/huggingface/transformers/assets/89006260/a6c4672e-4d3c-460a-9407-2b3de3301d73) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction training_args = Seq2SeqTrainingArguments( output_dir=DIR, # Parameters per_device_train_batch_size=hyperparameters["batch_size"], per_device_eval_batch_size=hyperparameters["batch_size"], learning_rate=hyperparameters["learning_rate"], weight_decay=hyperparameters["weight_decay"], num_train_epochs=hyperparameters["nb_epochs"], fp16=False, optim="adamw_torch", # Logging logging_dir=f"{DIR}/training_logs", logging_strategy="epoch", # report_to=["wandb", "tensorboard"], report_to=["tensorboard"], # Saving save_strategy="epoch", # Evaluating evaluation_strategy="epoch", predict_with_generate=True, generation_max_length=550, generation_num_beams=3, save_safetensors=True, save_total_limit=1, # metric_for_best_model='eval_loss', load_best_model_at_end=True, save_only_model=True, # metric_for_best_model="Weighted_comb", # greater_is_better=True, ) ### Expected behavior
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28752/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28751/comments
https://api.github.com/repos/huggingface/transformers/issues/28751/events
https://github.com/huggingface/transformers/pull/28751
2,104,191,054
PR_kwDOCUB6oc5lQd6U
28,751
[Docs] Fix Typo in English & Japanese CLIP Model Documentation (TMBD -> TMDB)
{ "login": "Vinyzu", "id": 50874994, "node_id": "MDQ6VXNlcjUwODc0OTk0", "avatar_url": "https://avatars.githubusercontent.com/u/50874994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vinyzu", "html_url": "https://github.com/Vinyzu", "followers_url": "https://api.github.com/users/Vinyzu/followers", "following_url": "https://api.github.com/users/Vinyzu/following{/other_user}", "gists_url": "https://api.github.com/users/Vinyzu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Vinyzu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vinyzu/subscriptions", "organizations_url": "https://api.github.com/users/Vinyzu/orgs", "repos_url": "https://api.github.com/users/Vinyzu/repos", "events_url": "https://api.github.com/users/Vinyzu/events{/privacy}", "received_events_url": "https://api.github.com/users/Vinyzu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28751). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
Fixes Typo in TMBD to TMDB (for "TheMovieDatabase") ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). [/] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? [/] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [/] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). [/] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28751/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28751", "html_url": "https://github.com/huggingface/transformers/pull/28751", "diff_url": "https://github.com/huggingface/transformers/pull/28751.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28751.patch", "merged_at": 1706522812000 }
https://api.github.com/repos/huggingface/transformers/issues/28750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28750/comments
https://api.github.com/repos/huggingface/transformers/issues/28750/events
https://github.com/huggingface/transformers/pull/28750
2,104,165,406
PR_kwDOCUB6oc5lQYhk
28,750
Fix the StarCoder agent max_new_tokens input validation error
{ "login": "dashapetr", "id": 54349415, "node_id": "MDQ6VXNlcjU0MzQ5NDE1", "avatar_url": "https://avatars.githubusercontent.com/u/54349415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dashapetr", "html_url": "https://github.com/dashapetr", "followers_url": "https://api.github.com/users/dashapetr/followers", "following_url": "https://api.github.com/users/dashapetr/following{/other_user}", "gists_url": "https://api.github.com/users/dashapetr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dashapetr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dashapetr/subscriptions", "organizations_url": "https://api.github.com/users/dashapetr/orgs", "repos_url": "https://api.github.com/users/dashapetr/repos", "events_url": "https://api.github.com/users/dashapetr/events{/privacy}", "received_events_url": "https://api.github.com/users/dashapetr/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "> Thanks! Won't this default to `max_new_tokens=20` that is set in generate if the arg is not passed?\r\n\r\nI am not sure about this.\r\n\r\nI didn't manage to find what exact place the limitation of `max_new_token `<= 192 comes from. If we look at the tool level, each separate tool (model) has its own limitation of `max_new_tokens`. If we refer to [API's documentation](https://huggingface.co/docs/api-inference/detailed_parameters#detailed-parameters), there is no default value for `max_new_tokens`.\r\n\r\nOne more possible solution I see here is to set `max_new_tokens`=192 so the validation error won't appear.", "So the HFAgent and the Local agent who both use huggingface tools will be limited by the `max_new_tokens` that you give them / by the default `max_new_token` / `max_length` of the model on the hub. I think the goal of the agents where to have simple APIs so 200 tokens default seemed alright for me. Otherwise it can never stop for a lot of model. \r\nI'll add a feature request for setting this but think we should keep it that way for now!", "> I think the goal of the agents where to have simple APIs so 200 tokens default seemed alright for me. Otherwise it can never stop for a lot of model.\r\n\r\nAgree with this. But the issue is still here: \r\n\r\n![image](https://github.com/huggingface/transformers/assets/54349415/83eba330-4913-4e10-b5fb-5d3fdcb7ed31)\r\n\r\nI believe it is because the HFAgent limits `max_new_tokens` to **200**, but the `starcoder` API needs **192** tokens or less.\r\n", "Would you suggest adding a controllable argument? " ]
1,706
1,708
null
NONE
null
Committer: Darya Petrashka <[email protected]> # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28523 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. link: https://github.com/huggingface/transformers/issues/28523 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28750/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28750", "html_url": "https://github.com/huggingface/transformers/pull/28750", "diff_url": "https://github.com/huggingface/transformers/pull/28750.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28750.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28749/comments
https://api.github.com/repos/huggingface/transformers/issues/28749/events
https://github.com/huggingface/transformers/issues/28749
2,104,111,121
I_kwDOCUB6oc59ajAR
28,749
combining feature from two pre-train model in transformer then passing them into a classifer
{ "login": "Arwa491", "id": 117582570, "node_id": "U_kgDOBwIq6g", "avatar_url": "https://avatars.githubusercontent.com/u/117582570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arwa491", "html_url": "https://github.com/Arwa491", "followers_url": "https://api.github.com/users/Arwa491/followers", "following_url": "https://api.github.com/users/Arwa491/following{/other_user}", "gists_url": "https://api.github.com/users/Arwa491/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arwa491/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arwa491/subscriptions", "organizations_url": "https://api.github.com/users/Arwa491/orgs", "repos_url": "https://api.github.com/users/Arwa491/repos", "events_url": "https://api.github.com/users/Arwa491/events{/privacy}", "received_events_url": "https://api.github.com/users/Arwa491/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @Arwa491, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports." ]
1,706
1,706
null
NONE
null
I'm trying to extract the feature using two different per-train models combine these features in one vector and then pass this vector into a classifier for a final classification is that possible using hugging face pre-train models , because i already trained the model on my data and uploaded them into hugging face
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28749/timeline
null
null
null
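The question in the record above, combining features from two pretrained models and passing them to a classifier, can be sketched with standard transformers and torch APIs. The repository ids below are placeholders for the user's own fine-tuned checkpoints, and pooling the first token is only one of several reasonable choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name_a, name_b = "your-user/model-a", "your-user/model-b"  # placeholder repo ids
tok_a, tok_b = AutoTokenizer.from_pretrained(name_a), AutoTokenizer.from_pretrained(name_b)
enc_a, enc_b = AutoModel.from_pretrained(name_a), AutoModel.from_pretrained(name_b)

text = "example input"
with torch.no_grad():
    feat_a = enc_a(**tok_a(text, return_tensors="pt")).last_hidden_state[:, 0]  # first-token feature
    feat_b = enc_b(**tok_b(text, return_tensors="pt")).last_hidden_state[:, 0]

combined = torch.cat([feat_a, feat_b], dim=-1)         # one concatenated feature vector
classifier = torch.nn.Linear(combined.shape[-1], 2)    # downstream classifier (2 classes assumed)
logits = classifier(combined)
```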
https://api.github.com/repos/huggingface/transformers/issues/28748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28748/comments
https://api.github.com/repos/huggingface/transformers/issues/28748/events
https://github.com/huggingface/transformers/issues/28748
2,104,076,485
I_kwDOCUB6oc59aajF
28,748
RuntimeError: CUDA error: device-side assert triggered
{ "login": "KaifAhmad1", "id": 98801504, "node_id": "U_kgDOBeOXYA", "avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaifAhmad1", "html_url": "https://github.com/KaifAhmad1", "followers_url": "https://api.github.com/users/KaifAhmad1/followers", "following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}", "gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions", "organizations_url": "https://api.github.com/users/KaifAhmad1/orgs", "repos_url": "https://api.github.com/users/KaifAhmad1/repos", "events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}", "received_events_url": "https://api.github.com/users/KaifAhmad1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I would recommend you to set `CUDA_LAUNCH_BLOCKING=1` and add a breakpoint to see where this is happening. Usually at the embedding layer, when the tokenizer has an extra token and the vocab was not resized πŸ€— " ]
1,706
1,706
null
NONE
null
### System Info ``` YAML OS: Windows 11 Driver Version: 532.09 CUDA Version: 12.1 bitsandbytes: 0.42.0 transformers: 4.37.1 trl: 0.7.10 torch: 2.1.0+cu121 peft: 0.7.1 optimum: 1.16.2 einops: 0.7.0 ``` I am using a Google Colab T4 GPU for fine-tuning `mistralai/Mistral-7B-v0.1` on a custom dataset. ### Who can help? Hey, @ArthurZucker @muellerzr @SunMarc Please help me with this issue. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` Python # Configuration for quantization compute_dtype = getattr(torch, "bfloat16") bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=compute_dtype, ) ``` ``` Python model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config, device_map="auto", use_cache=False, trust_remote_code=True, low_cpu_mem_usage=True ) ``` ``` Python trainer.train() ``` Here is the error detail ``` RuntimeError Traceback (most recent call last) [<ipython-input-88-6b7a202bc4f8>](https://localhost:8080/#) in <cell line: 1>() ----> 1 model = AutoModelForCausalLM.from_pretrained(model_name, 2 quantization_config=bnb_config, 3 device_map="auto", 4 use_cache=False, 5 trust_remote_code=True, 3 frames [/usr/local/lib/python3.10/dist-packages/accelerate/utils/modeling.py](https://localhost:8080/#) in get_max_memory(max_memory) 718 else: 719 for i in range(torch.cuda.device_count()): --> 720 _ = torch.tensor([0], device=i) 721 max_memory = {i: torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())} 722 # allocate everything in the mps device as the RAM is shared RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Expected behavior This cell should run without raising any error. Use this Colab notebook for reference and better understanding: https://colab.research.google.com/drive/1rPYEVeXVlRrR3_oM1odaIuL60R9-jm6j?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28748/timeline
null
null
null
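Following the maintainer's comment in the record above, here is a hedged sketch of the usual debugging steps. It assumes `model` and `tokenizer` are the objects already created in the reproduction code, and the added pad token is only an example.

```python
import os

# Set before the first CUDA call so kernels run synchronously and the
# stack trace points at the real failing operation.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# A frequent cause of device-side asserts: token ids outside the embedding
# table, e.g. after adding special tokens without resizing the embeddings.
tokenizer.add_special_tokens({"pad_token": "[PAD]"})  # example only
if len(tokenizer) != model.get_input_embeddings().weight.shape[0]:
    model.resize_token_embeddings(len(tokenizer))
```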
https://api.github.com/repos/huggingface/transformers/issues/28747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28747/comments
https://api.github.com/repos/huggingface/transformers/issues/28747/events
https://github.com/huggingface/transformers/issues/28747
2,104,070,205
I_kwDOCUB6oc59aZA9
28,747
In the RoPE paper they don't compute actual softmax
{ "login": "wendlerc", "id": 8095362, "node_id": "MDQ6VXNlcjgwOTUzNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8095362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wendlerc", "html_url": "https://github.com/wendlerc", "followers_url": "https://api.github.com/users/wendlerc/followers", "following_url": "https://api.github.com/users/wendlerc/following{/other_user}", "gists_url": "https://api.github.com/users/wendlerc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wendlerc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wendlerc/subscriptions", "organizations_url": "https://api.github.com/users/wendlerc/orgs", "repos_url": "https://api.github.com/users/wendlerc/repos", "events_url": "https://api.github.com/users/wendlerc/events{/privacy}", "received_events_url": "https://api.github.com/users/wendlerc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ok I guess that's because they implement a linear attention." ]
1,706
1,706
1,706
NONE
null
Is this a feature or a bug? https://github.com/huggingface/transformers/blob/03cc17775b961d16cc4d0d7ab0c8487120d0b708/src/transformers/models/llama/modeling_llama.py#L429C9-L429C22 In RoPE paper equation 19: https://arxiv.org/pdf/2104.09864.pdf they don't compute the actual softmax. Best, Chris
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28747/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28746/comments
https://api.github.com/repos/huggingface/transformers/issues/28746/events
https://github.com/huggingface/transformers/pull/28746
2,104,069,320
PR_kwDOCUB6oc5lQEes
28,746
Resolve DeepSpeed cannot resume training with PeftModel
{ "login": "lh0x00", "id": 9839768, "node_id": "MDQ6VXNlcjk4Mzk3Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/9839768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lh0x00", "html_url": "https://github.com/lh0x00", "followers_url": "https://api.github.com/users/lh0x00/followers", "following_url": "https://api.github.com/users/lh0x00/following{/other_user}", "gists_url": "https://api.github.com/users/lh0x00/gists{/gist_id}", "starred_url": "https://api.github.com/users/lh0x00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lh0x00/subscriptions", "organizations_url": "https://api.github.com/users/lh0x00/orgs", "repos_url": "https://api.github.com/users/lh0x00/repos", "events_url": "https://api.github.com/users/lh0x00/events{/privacy}", "received_events_url": "https://api.github.com/users/lh0x00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hiii, any feedback are welcome, I waiting to a new version to use instead of a fork version or give me feedback if my pr having any issues. many thanks @pacman100 @muellerzr ", "Nice! Assuming this works, really excited for it to get merged in, as I've had this issue with checkpoints.\r\n\r\nQuick one - it looks like your local formatter may have different line length settings to the default for `transformers`, since the diff shows a bunch of changes that aren't relevant to your updates. Up to you if you want to revert those, but I'd hate for them to hold up the PR :)\r\n\r\n(I don't expect it to be a problem since the style and quality checks passed, but it would make the PR look smaller and keep the file consistent)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28746). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @younesbelkada", "yeah, @nathan-az I was able to continue my fine tuning, and it worked well. You can use it through my branch while waiting for the PR to be merged.", "waiting for this merge ", "me too, waiting for this merge -.-", "Hi everyone! Sure yes, please give us some time to review properly this PR and we'll make sure the fix is going to be landed! ", "thanks for reviewed, @pacman100 @younesbelkada " ]
1,706
1,706
1,706
CONTRIBUTOR
null
Hi all, I found an issue when resuming fine-tuning of a **PeftModel** with **DeepSpeed** while using `accelerate launch` to follow the [zephyr-7b-beta recipes](https://github.com/huggingface/alignment-handbook/tree/c74ed111710d57f563cfbf1806cfb8f07dd3dc67/recipes/zephyr-7b-beta) with **qlora**. In detail, the process crashes when loading a resume checkpoint with **DeepSpeed** and a **PeftModel**. I referred to https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/559#issuecomment-1585948697 and created a PR to resolve this issue. I have updated it and it works correctly on my fork. Thanks for the review @pacman100 @muellerzr
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28746/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28746/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28746", "html_url": "https://github.com/huggingface/transformers/pull/28746", "diff_url": "https://github.com/huggingface/transformers/pull/28746.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28746.patch", "merged_at": 1706705907000 }
https://api.github.com/repos/huggingface/transformers/issues/28745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28745/comments
https://api.github.com/repos/huggingface/transformers/issues/28745/events
https://github.com/huggingface/transformers/issues/28745
2,103,603,552
I_kwDOCUB6oc59YnFg
28,745
Pydantic V2 support
{ "login": "FanaticPythoner", "id": 45826736, "node_id": "MDQ6VXNlcjQ1ODI2NzM2", "avatar_url": "https://avatars.githubusercontent.com/u/45826736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FanaticPythoner", "html_url": "https://github.com/FanaticPythoner", "followers_url": "https://api.github.com/users/FanaticPythoner/followers", "following_url": "https://api.github.com/users/FanaticPythoner/following{/other_user}", "gists_url": "https://api.github.com/users/FanaticPythoner/gists{/gist_id}", "starred_url": "https://api.github.com/users/FanaticPythoner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FanaticPythoner/subscriptions", "organizations_url": "https://api.github.com/users/FanaticPythoner/orgs", "repos_url": "https://api.github.com/users/FanaticPythoner/repos", "events_url": "https://api.github.com/users/FanaticPythoner/events{/privacy}", "received_events_url": "https://api.github.com/users/FanaticPythoner/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @FanaticPythoner, thanks for opening this feature request! \r\n\r\nPydantic v2 is supported on the development branch c.f. #28728 #27933 \r\n\r\nTo use, install from source: `pip install git+https://github.com/huggingface/transformers`\r\n", "Great thank you. Is there an ETA for the next release?", "The schedule for releases is around once a month. The latest release was last week, so probably around 4 weeks or so. " ]
1,706
1,706
null
NONE
null
### Feature request Please migrate to the latest Pydantic version. ### Motivation The current library appears to be incompatible with Pydantic Version 2. I find that being able to utilize new features, such as the `model_dump` function, would be highly beneficial. This function is particularly useful for ensuring robust data validation in a scenario like mine, where JSON serialization of a Pydantic model is required for an HTTP request body. ### Your contribution I am not able to perform the migration myself due to lack of time. Let me know if you guys need screenshots/tracebacks of errors when attempting to use more recent versions of Pydantic.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28745/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28744/comments
https://api.github.com/repos/huggingface/transformers/issues/28744/events
https://github.com/huggingface/transformers/issues/28744
2,103,444,751
I_kwDOCUB6oc59YAUP
28,744
Handling offload when calling AutoModelForCausalLM.from_pretrained()
{ "login": "YourSaDady", "id": 99607923, "node_id": "U_kgDOBe_lcw", "avatar_url": "https://avatars.githubusercontent.com/u/99607923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YourSaDady", "html_url": "https://github.com/YourSaDady", "followers_url": "https://api.github.com/users/YourSaDady/followers", "following_url": "https://api.github.com/users/YourSaDady/following{/other_user}", "gists_url": "https://api.github.com/users/YourSaDady/gists{/gist_id}", "starred_url": "https://api.github.com/users/YourSaDady/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YourSaDady/subscriptions", "organizations_url": "https://api.github.com/users/YourSaDady/orgs", "repos_url": "https://api.github.com/users/YourSaDady/repos", "events_url": "https://api.github.com/users/YourSaDady/events{/privacy}", "received_events_url": "https://api.github.com/users/YourSaDady/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @YourSaDady, thanks for raising this issue! \r\n\r\nIf the program is killed when using `offload_folder` it would indicate that you're hitting OOM. If you look at your GPU memory utilization - `watch -n1 nvidia-smi` - and your CPU utilization e.g. with`top` are you able to see these topping out when loading the model?" ]
1,706
1,706
null
NONE
null
### System Info Transformers version: 4.33.3 Python version: 3.9.18 Platform: Win11 WSL ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction 1. `from transformers import AutoTokenizer,AutoModelForCausalLM` 2. `model = AutoModelForCausalLM.from_pretrained(args.model,device_map='auto')` 3. Error occurs: <img width="866" alt="error2" src="https://github.com/huggingface/transformers/assets/99607923/86201897-10a7-41b2-a1f8-3c961b198337"> ### Expected behavior Expected: the model should load without any errors. I also tried to specify the offload folder by changing line 67 to `model = AutoModelForCausalLM.from_pretrained(args.model, device_map='auto', offload_folder='offload')`, but the program is killed without any error or traceback shown.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28744/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28743/comments
https://api.github.com/repos/huggingface/transformers/issues/28743/events
https://github.com/huggingface/transformers/issues/28743
2,103,112,940
I_kwDOCUB6oc59WvTs
28,743
when running rag example, errors in the 'generating train split' step (wiki_dpr.py)
{ "login": "kiehls90", "id": 101498700, "node_id": "U_kgDOBgy_TA", "avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiehls90", "html_url": "https://github.com/kiehls90", "followers_url": "https://api.github.com/users/kiehls90/followers", "following_url": "https://api.github.com/users/kiehls90/following{/other_user}", "gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions", "organizations_url": "https://api.github.com/users/kiehls90/orgs", "repos_url": "https://api.github.com/users/kiehls90/repos", "events_url": "https://api.github.com/users/kiehls90/events{/privacy}", "received_events_url": "https://api.github.com/users/kiehls90/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @kiehls90, thanks for raising an issue! \r\n\r\nThe scripts under `research_examples` aren't actively maintained. If a dataset is very large, you can try [streaming it](https://huggingface.co/docs/datasets/stream) instead of downloading the whole thing. " ]
1,706
1,706
null
NONE
null
### System Info I'm trying to run a RAG example, and the dataset is wiki_dpr. Downloading and extracting wiki_dpr completed successfully. However, at the "generating train split" stage, errors from wiki_dpr.py keep appearing, especially in `_generate_examples`: 1. The following error occurs on the line **id, text, title = line.strip().split("\t")**: ValueError: not enough values to unpack (expected 3, got 2) -> I handle this with an exception so that even if an error occurs on a line, it passes. 2. **ID mismatch between lines {id} and vector {vec_id}**: this error seems to occur at the line "assert int(id) == int(vec_id),". After I handled the exception for the split error, generating the train split progressed to 80%, but an id mismatch error occurred at about the 16200000th vector id. Debugging is even more difficult because it takes a long time to download and split wiki_dpr. I need help. Thank you in advance! version: python 3.8; other versions are as referenced in requirements.txt. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. running the RAG example: python examples/research_projects/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 2. after downloading and extracting wiki_dpr, the error occurs in "generating train split" ### Expected behavior .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28743/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28743/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28742/comments
https://api.github.com/repos/huggingface/transformers/issues/28742/events
https://github.com/huggingface/transformers/issues/28742
2,103,109,910
I_kwDOCUB6oc59WukW
28,742
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
{ "login": "tamanna-mostafa", "id": 156403336, "node_id": "U_kgDOCVKGiA", "avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamanna-mostafa", "html_url": "https://github.com/tamanna-mostafa", "followers_url": "https://api.github.com/users/tamanna-mostafa/followers", "following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}", "gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions", "organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs", "repos_url": "https://api.github.com/users/tamanna-mostafa/repos", "events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}", "received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @tamanna-mostafa πŸ‘‹ \r\n\r\nYou'll have to help us figure out what's wrong: can you get us a short and reproducible script that showcases the issue on the `transformers` size? I see two exceptions in your pasted code, one about `text-generation-inference` and another about `safetensors`", "@gante \r\nThanks for your comments. Here are the codes I ran (please let me know if you need any further details):\r\n\r\n```\r\n#Config for SFT\r\nmistral-7b-sft-MM-RLAIF:\r\n dtype: bf16\r\n log_dir: \"mistral-7b-sft-MM-PS\"\r\n learning_rate: 2e-5\r\n model_name: /mnt/efs/workspace/sakhaki/models/Mistral-7B-v0.1\r\n deepspeed_config: configs/zero_config_sft_65b.json #configs/zero_config_pretrain.json\r\n output_dir: /mnt/efs/data/tammosta/files_t/output_sft_32k\r\n weight_decay: 0.01\r\n max_length: 4096\r\n warmup_steps: 100\r\n gradient_checkpointing: true\r\n gradient_accumulation_steps: 8\r\n per_device_train_batch_size: 1\r\n per_device_eval_batch_size: 1\r\n eval_steps: 500000\r\n save_steps: 100\r\n num_train_epochs: 2\r\n save_total_limit: 4\r\n use_flash_attention: false\r\n residual_dropout: 0.0\r\n residual_dropout_lima: true\r\n save_strategy: steps\r\n peft_model: false\r\n only_last_turn_loss: false\r\n use_custom_sampler: true\r\n datasets:\r\n - sft-custom:\r\n data_files: /mnt/efs/data/tammosta/files_t/SFT_inp_26787_RBS_plus_Optima.json\r\n #fraction : 0.75\r\n max_val_set: 300\r\n val_split: 0.0001\r\n - oasst_export:\r\n lang: \"bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk\" # sft-8.0\r\n hf_dataset_name: OpenAssistant/oasst1\r\n fraction : 0.5\r\n val_split: 0.0001\r\n max_val_set: 300\r\n top_k: 1\r\n#run SFT on mistral 7b model\r\ndeepspeed trainer_sft_d.py --configs mistral-7b-sft-MM-RLAIF --wandb-entity tammosta --show_dataset_stats --deepspeed\r\n\r\n#Run DPO on the SFT model\r\naccelerate launch --config_file ./accelerate_configs/ds_zero3.yaml rlhf_dpo.py \\\r\n--model_name_or_path=\"/mnt/efs/data/tammosta/files_t/output_sft_32k\" \\\r\n--output_dir=\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k\" \\\r\n--data_path=\"/mnt/efs/data/tammosta/files_t/DPO_data_rbs_clean_AIF.json\" \\\r\n--use_lamma2_peft_config False \\\r\n--beta 0.1 \\\r\n--optimizer_type adamw_hf \\\r\n--learning_rate 1e-6 \\\r\n--warmup_steps 50 \\\r\n--per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 \\\r\n--gradient_accumulation_steps 8 \\\r\n--lora_alpha 16 \\\r\n--lora_dropout 0.05 \\\r\n--lora_r 8 \\\r\n--max_prompt_length 2048 \\\r\n--max_length 4096 \\\r\n--num_train_epochs 4 \\\r\n--logging_steps 20 \\\r\n--save_steps 100 \\\r\n--save_total_limit 8 \\\r\n--eval_steps 50 \\\r\n--gradient_checkpointing True \\\r\n--report_to \"wandb\"\r\n\r\n```\r\n#Contents of the DPO output folder\r\n```\r\nubuntu@ip-172-31-8-218:/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k$ ls \r\nREADME.md adapter_model.safetensors checkpoint-100 checkpoint-300 checkpoint-500 checkpoint-700 global_step736 special_tokens_map.json tokenizer.model training_args.bin\r\nadapter_config.json added_tokens.json checkpoint-200 checkpoint-400 checkpoint-600 final_checkpoint latest tokenizer.json tokenizer_config.json zero_to_fp32.py\r\n\r\n```\r\n```\r\n# merge the lora adaptors\r\npython merge_peft_adaptors_gpu.py --base_model_name_or_path /mnt/efs/data/tammosta/files_t/output_sft_32k --peft_model_path /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k --output_dir /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k_merged --safe_serialization\r\n```\r\n\r\n```\r\n#Content of 
merge_peft_adators_gpu.py\r\n\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom peft import PeftModel\r\nimport torch\r\n\r\nimport os\r\nimport argparse\r\n\r\ndef get_args():\r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"--base_model_name_or_path\", type=str)\r\n parser.add_argument(\"--peft_model_path\", type=str)\r\n parser.add_argument(\"--output_dir\", type=str)\r\n parser.add_argument(\"--device\", type=str, default=\"auto\")\r\n parser.add_argument(\"--safe_serialization\", action=\"store_true\")\r\n\r\n return parser.parse_args()\r\n####\r\ndef main():\r\n args = get_args()\r\n\r\n if args.device == 'auto':\r\n device_arg = { 'device_map': 'auto' }\r\n else:\r\n device_arg = { 'device_map': { \"\": args.device} }\r\n\r\n print(f\"Loading base model: {args.base_model_name_or_path}\")\r\n base_model = AutoModelForCausalLM.from_pretrained(\r\n args.base_model_name_or_path,\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n trust_remote_code=True,\r\n **device_arg\r\n )\r\n #device = torch.device('cpu')\r\n #base_model.to(device)\r\n\r\n print(f\"Loading PEFT: {args.peft_model_path}\")\r\n model = PeftModel.from_pretrained(base_model, args.peft_model_path)\r\n print(\"Peft Model : \", model.device)\r\n print(f\"Running merge_and_unload\")\r\n model = model.merge_and_unload()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path)\r\n\r\n\r\n model.save_pretrained(f\"{args.output_dir}\",max_shard_size='9GB',safe_serialization=args.safe_serialization)\r\n tokenizer.save_pretrained(f\"{args.output_dir}\",max_shard_size='9GB',safe_serialization=args.safe_serialization)\r\n print(f\"Model saved to {args.output_dir}\")\r\n####\r\nif __name__ == \"__main__\" :\r\n main()\r\n\r\n```\r\n```\r\n#The error I get while running the code above \r\n\r\nLoading base model: /mnt/efs/data/tammosta/files_t/output_sft_32k\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:04<00:00, 1.40s/it]\r\nLoading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py\", line 51, in <module>\r\n main()\r\n File \"/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py\", line 38, in main\r\n model = PeftModel.from_pretrained(base_model, args.peft_model_path)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 352, in from_pretrained\r\n model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 689, in load_adapter\r\n adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py\", line 270, in load_peft_weights\r\n adapters_weights = safe_load_file(filename, device=device)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/safetensors/torch.py\", line 308, in load_file\r\n with safe_open(filename, framework=\"pt\", device=device) as f:\r\nsafetensors_rust.SafetensorError: Error while deserializing header: 
InvalidHeaderDeserialization\r\n\r\n\r\n```", "Hi @tamanna-mostafa πŸ‘‹ looking at your stack trace, it looks like a `peft` error, you should open an issue there :)", "@gante\r\nIssue opened here: https://github.com/huggingface/peft/issues/1443", "Closing this ticket since the issue is reported in the ticket above. " ]
1,706
1,707
1,707
NONE
null
### System Info transformers version: 4.35.2 Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31 Python version: 3.10.12 Huggingface_hub version: 0.20.2 Safetensors version: 0.4.1 Accelerate version: 0.26.1 Accelerate config: not found PyTorch version (GPU?): 2.1.2+cu121 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed ### Who can help? @gante @Rocketknight1 @muellerzr and @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. I ran supervised fine tuning of Mistral 7b model (with 32k preference data) 2. I ran DPO on the output of SFT 3. I ran the following code to load the DPO model and run docker: ``` model=/data/DPO_output_mistral_32k volume=/mnt/efs/data/tammosta/files_t:/data num_shard=8 docker run --gpus all --shm-size 1g -p 172.31.8.218:80:80 -v $volume ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model --num-shard $num_shard --max-input-length 4095 --max-total-tokens 12000 ``` However, the docker run failed with the following error: `OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/DPO_output_mistral_32k/None' for available files.` 5. Assuming I need to merge the lora adaptors while loading the model, I ran the following command (the content of the script is also given below): `python merge_peft_adaptors_gpu.py --base_model_name_or_path /mnt/efs/data/tammosta/files_t/output_sft_32k --peft_model_path /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k --output_dir /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k_merged --safe_serialization` Here is the content of `merge_peft_adaptors_gpu.py`: ``` from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel import torch import os import argparse def get_args(): parser = argparse.ArgumentParser() parser.add_argument("--base_model_name_or_path", type=str) parser.add_argument("--peft_model_path", type=str) parser.add_argument("--output_dir", type=str) parser.add_argument("--device", type=str, default="auto") parser.add_argument("--safe_serialization", action="store_true") return parser.parse_args() #### def main(): args = get_args() if args.device == 'auto': device_arg = { 'device_map': 'auto' } else: device_arg = { 'device_map': { "": args.device} } print(f"Loading base model: {args.base_model_name_or_path}") base_model = AutoModelForCausalLM.from_pretrained( args.base_model_name_or_path, return_dict=True, torch_dtype=torch.float16, trust_remote_code=True, **device_arg ) #device = torch.device('cpu') #base_model.to(device) print(f"Loading PEFT: {args.peft_model_path}") model = PeftModel.from_pretrained(base_model, args.peft_model_path) print("Peft Model : ", model.device) print(f"Running merge_and_unload") model = model.merge_and_unload() tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path) model.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization) tokenizer.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization) print(f"Model saved to {args.output_dir}") #### if __name__ == "__main__" : main() ``` However, I'm getting this error: ``` Loading base model: 
/mnt/efs/data/tammosta/files_t/output_sft_32k Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:04<00:00, 1.40s/it] Loading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k Traceback (most recent call last): File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 51, in <module> main() File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 38, in main model = PeftModel.from_pretrained(base_model, args.peft_model_path) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 352, in from_pretrained model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 689, in load_adapter adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 270, in load_peft_weights adapters_weights = safe_load_file(filename, device=device) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file with safe_open(filename, framework="pt", device=device) as f: safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization ``` Any idea why I'm getting this error? ### Expected behavior The merged model will successfully load in the output directory.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28742/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28741/comments
https://api.github.com/repos/huggingface/transformers/issues/28741/events
https://github.com/huggingface/transformers/pull/28741
2,103,022,287
PR_kwDOCUB6oc5lM8q8
28,741
Fix input data file extension in examples
{ "login": "khipp", "id": 9824526, "node_id": "MDQ6VXNlcjk4MjQ1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9824526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khipp", "html_url": "https://github.com/khipp", "followers_url": "https://api.github.com/users/khipp/followers", "following_url": "https://api.github.com/users/khipp/following{/other_user}", "gists_url": "https://api.github.com/users/khipp/gists{/gist_id}", "starred_url": "https://api.github.com/users/khipp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khipp/subscriptions", "organizations_url": "https://api.github.com/users/khipp/orgs", "repos_url": "https://api.github.com/users/khipp/repos", "events_url": "https://api.github.com/users/khipp/events{/privacy}", "received_events_url": "https://api.github.com/users/khipp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28741). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
1,706
CONTRIBUTOR
null
Ensure that the input data file extension is set correctly when running example scripts without specifying a training data file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28741/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28741", "html_url": "https://github.com/huggingface/transformers/pull/28741", "diff_url": "https://github.com/huggingface/transformers/pull/28741.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28741.patch", "merged_at": 1706522791000 }
https://api.github.com/repos/huggingface/transformers/issues/28740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28740/comments
https://api.github.com/repos/huggingface/transformers/issues/28740/events
https://github.com/huggingface/transformers/issues/28740
2,103,016,558
I_kwDOCUB6oc59WXxu
28,740
DETR: IndexError: Caught IndexError in replica 0 on device 0. IndexError: index 8 is out of bounds for dimension 0 with size 8
{ "login": "michaelgruner", "id": 717880, "node_id": "MDQ6VXNlcjcxNzg4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/717880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelgruner", "html_url": "https://github.com/michaelgruner", "followers_url": "https://api.github.com/users/michaelgruner/followers", "following_url": "https://api.github.com/users/michaelgruner/following{/other_user}", "gists_url": "https://api.github.com/users/michaelgruner/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelgruner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelgruner/subscriptions", "organizations_url": "https://api.github.com/users/michaelgruner/orgs", "repos_url": "https://api.github.com/users/michaelgruner/repos", "events_url": "https://api.github.com/users/michaelgruner/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelgruner/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.37.1 - Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.37 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Not explicitly, but Trainer is picking up 2 GPUs ### Who can help? @amyeroberts Hi. I'm getting the error in the title trying to reproduce [this example](https://huggingface.co/docs/transformers/tasks/object_detection). The error is real. I don't know what caused it, but I've narrowed the cause to DETR receiving `BatchSize x NumGPUs` number of targets, but expecting only `BatchSize` which causes the overflow. If I limit the amount of GPUs to 1 (via `CUDA_VISIBLE_DEVICES=0`, for example), it runs ok. Here's the stack trace: ``` Traceback (most recent call last): File "/home/mgruner/cellphones-in-the-wild/./train.py", line 116, in <module> trainer.train() File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 2768, in training_step loss = self.compute_loss(model, inputs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/trainer.py", line 2791, in compute_loss outputs = model(**inputs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 185, in forward outputs = self.parallel_apply(replicas, inputs, module_kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 200, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 110, in parallel_apply output.reraise() File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/_utils.py", line 694, in reraise raise exception IndexError: Caught IndexError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in _worker output = module(*input, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 1603, in forward loss_dict = criterion(outputs_loss, labels) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2202, in forward indices = self.matcher(outputs_without_aux, targets) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2330, in forward indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] File "/opt/pyenv/versions/cellphones-in-the-wild/lib/python3.10/site-packages/transformers/models/detr/modeling_detr.py", line 2330, in <listcomp> indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] IndexError: index 8 is out of bounds for dimension 0 with size 8 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Follow this tutorial: https://huggingface.co/docs/transformers/tasks/object_detection ### Expected behavior I expect the model to train.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28740/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28739/comments
https://api.github.com/repos/huggingface/transformers/issues/28739/events
https://github.com/huggingface/transformers/pull/28739
2,102,816,029
PR_kwDOCUB6oc5lMPVQ
28,739
[docs] Backbone
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28739). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hmm not sure why the test is failing with `ERROR dummy.py - AttributeError: 'HfDoctestModule' object has no attribute 'getini'`. Any idea how to fix or if I can go ahead and merge?", "we can merge but let's rebase on main it should fix it!" ]
1,706
1,706
1,706
MEMBER
null
This PR adds some updates to the backbone docs: - have API references for `AutoBackbone`, `BackboneConfig`, `BackboneConfigMixin`, `TimmBackbone`, and `TimmBackboneConfig` in the docs so users can easily check them out - include a list of supported backbones - break up and move the content into `autoclass_tutorial.md` and `create_a_model.md` - move initializing backbone from config to `create_a_model.md` which is more similar to the other examples we have there for creating something from their config - update with loading a backbone by setting for example `config = MaskFormerConfig(backbone="microsoft/resnet50", use_pretrained_backbone=False)` - from #28214
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28739/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28739", "html_url": "https://github.com/huggingface/transformers/pull/28739", "diff_url": "https://github.com/huggingface/transformers/pull/28739.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28739.patch", "merged_at": 1706807777000 }
https://api.github.com/repos/huggingface/transformers/issues/28738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28738/comments
https://api.github.com/repos/huggingface/transformers/issues/28738/events
https://github.com/huggingface/transformers/issues/28738
2,102,804,438
I_kwDOCUB6oc59Vj_W
28,738
Any plans to support KV Cache offloading to CPU (and NVMe)?
{ "login": "goelayu", "id": 31916840, "node_id": "MDQ6VXNlcjMxOTE2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/31916840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goelayu", "html_url": "https://github.com/goelayu", "followers_url": "https://api.github.com/users/goelayu/followers", "following_url": "https://api.github.com/users/goelayu/following{/other_user}", "gists_url": "https://api.github.com/users/goelayu/gists{/gist_id}", "starred_url": "https://api.github.com/users/goelayu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goelayu/subscriptions", "organizations_url": "https://api.github.com/users/goelayu/orgs", "repos_url": "https://api.github.com/users/goelayu/repos", "events_url": "https://api.github.com/users/goelayu/events{/privacy}", "received_events_url": "https://api.github.com/users/goelayu/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @ArthurZucker " ]
1,706
1,706
null
NONE
null
### Feature request Similar to how model parameter and optimizer offload is supported using the [deepspeed library](https://github.com/huggingface/transformers/blob/de13a951b38b85195984164819f1ab05fe508677/docs/source/en/perf_train_gpu_one.md#deepspeed-zero), are there plans for natively supporting KV cache offloading as well? ### Motivation Apart from helping accommodate larger batch sizes on a single GPU, this can also significantly improve overall throughput, especially when batch sizes grow very large (resulting in a linear increase in KV cache size). ### Your contribution I see there already exists an implementation of this: https://github.com/tjruwase/transformers/tree/kvcache-offload-cpu, so maybe this is simply about incorporating those changes into the main repo?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28738/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28737/comments
https://api.github.com/repos/huggingface/transformers/issues/28737/events
https://github.com/huggingface/transformers/pull/28737
2,102,788,555
PR_kwDOCUB6oc5lMJSq
28,737
[`Siglip`] protect from imports if sentencepiece not installed
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28737). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Complement to #28636. Mistake on my part - when testing imports from the top level with `from transformers import *`, the environment was running on a different python / site-packages than originally thought (sentencepiece was installed), covering up these requirements.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28737", "html_url": "https://github.com/huggingface/transformers/pull/28737", "diff_url": "https://github.com/huggingface/transformers/pull/28737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28737.patch", "merged_at": 1706454614000 }
https://api.github.com/repos/huggingface/transformers/issues/28736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28736/comments
https://api.github.com/repos/huggingface/transformers/issues/28736/events
https://github.com/huggingface/transformers/pull/28736
2,102,641,052
PR_kwDOCUB6oc5lLpCl
28,736
Use the old style of cache-management when using DS-Inference
{ "login": "RezaYazdaniAminabadi", "id": 44502768, "node_id": "MDQ6VXNlcjQ0NTAyNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/44502768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RezaYazdaniAminabadi", "html_url": "https://github.com/RezaYazdaniAminabadi", "followers_url": "https://api.github.com/users/RezaYazdaniAminabadi/followers", "following_url": "https://api.github.com/users/RezaYazdaniAminabadi/following{/other_user}", "gists_url": "https://api.github.com/users/RezaYazdaniAminabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/RezaYazdaniAminabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RezaYazdaniAminabadi/subscriptions", "organizations_url": "https://api.github.com/users/RezaYazdaniAminabadi/orgs", "repos_url": "https://api.github.com/users/RezaYazdaniAminabadi/repos", "events_url": "https://api.github.com/users/RezaYazdaniAminabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/RezaYazdaniAminabadi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,706
null
CONTRIBUTOR
null
This PR intends to revert the cache management to the old style when optimizing HF models with DeepSpeed-Inference. I just added some changes to make it work with Llama. I will add a test to show the usage of this and why it is needed for DS-Inference to work.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28736/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28736", "html_url": "https://github.com/huggingface/transformers/pull/28736", "diff_url": "https://github.com/huggingface/transformers/pull/28736.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28736.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28735/comments
https://api.github.com/repos/huggingface/transformers/issues/28735/events
https://github.com/huggingface/transformers/pull/28735
2,102,606,842
PR_kwDOCUB6oc5lLhh5
28,735
[Flax] Update no init test for Flax v0.7.1
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker for possible up-coming Flax model additions", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? PR to unblock new model additions in Flax. As of version 0.7.1, Flax defaults to returning regular dictionaries with the methods .init and .apply, not frozen dictionaries as was the case before: https://github.com/google/flax/discussions/3191. This means our "no automatic init" method returns regular dicts, instead of frozen dicts. Until we merge a bigger change to bring ourselves in-line with Flax (_c.f._ https://github.com/huggingface/transformers/issues/28368#issue-2068557686), we need to update our "no automatic init" test to account for both possible dicts, otherwise we'll have a red CI.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28735/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28735", "html_url": "https://github.com/huggingface/transformers/pull/28735", "diff_url": "https://github.com/huggingface/transformers/pull/28735.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28735.patch", "merged_at": 1706293240000 }
https://api.github.com/repos/huggingface/transformers/issues/28734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28734/comments
https://api.github.com/repos/huggingface/transformers/issues/28734/events
https://github.com/huggingface/transformers/pull/28734
2,102,554,379
PR_kwDOCUB6oc5lLWGc
28,734
Wrap Keras methods to support BatchEncoding
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28734). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "This has now been rebased following the tf-keras PR, will merge once the CI is green." ]
1,706
1,706
1,706
MEMBER
null
One last Keras PR before I go back to chat templates - a recurring annoyance that I (and the forum users) have always had with our Keras models is that our tokenizers output `BatchEncoding` by default, which behaves like a mixed dict/list. Keras doesn't understand this at all and fails to handle it when passed to `fit()` or `predict()`. The result is that you have to manually remember to convert tokenizer outputs to a dict or you get a confusing error. The right time to do this was about two and a half years ago, but late is better than never! This PR wraps the Keras methods to do that transparently, without changing other behaviour. Note that because we're changing the exact Keras we're importing, this PR shouldn't be merged before the `tf_keras` PR is in.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28734", "html_url": "https://github.com/huggingface/transformers/pull/28734", "diff_url": "https://github.com/huggingface/transformers/pull/28734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28734.patch", "merged_at": 1706707122000 }
https://api.github.com/repos/huggingface/transformers/issues/28733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28733/comments
https://api.github.com/repos/huggingface/transformers/issues/28733/events
https://github.com/huggingface/transformers/pull/28733
2,102,544,063
PR_kwDOCUB6oc5lLT4w
28,733
Fix `DepthEstimationPipeline`'s docstring
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28733). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @Wauplin Will update it (along with `top_k` too maybe!)", "FYI:\r\n\r\nAlso changed\r\n\r\n> Assign labels to the image(s) passed as inputs.\r\n\r\nto \r\n\r\n> Predict the depth(s) of the image(s) passed as inputs.\r\n\r\nand removed `top_k` part\r\n\r\nhttps://github.com/huggingface/transformers/pull/28733/commits/22ff25c6cec0eba56ea78646c3b88ec9f728e86a" ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Fix #28729
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28733", "html_url": "https://github.com/huggingface/transformers/pull/28733", "diff_url": "https://github.com/huggingface/transformers/pull/28733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28733.patch", "merged_at": 1706521376000 }
https://api.github.com/repos/huggingface/transformers/issues/28732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28732/comments
https://api.github.com/repos/huggingface/transformers/issues/28732/events
https://github.com/huggingface/transformers/issues/28732
2,102,510,335
I_kwDOCUB6oc59UcL_
28,732
Output logits differ for the same input text in a batch of size 1 with half precision on GPU
{ "login": "zhukpm", "id": 52332744, "node_id": "MDQ6VXNlcjUyMzMyNzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/52332744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhukpm", "html_url": "https://github.com/zhukpm", "followers_url": "https://api.github.com/users/zhukpm/followers", "following_url": "https://api.github.com/users/zhukpm/following{/other_user}", "gists_url": "https://api.github.com/users/zhukpm/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhukpm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhukpm/subscriptions", "organizations_url": "https://api.github.com/users/zhukpm/orgs", "repos_url": "https://api.github.com/users/zhukpm/repos", "events_url": "https://api.github.com/users/zhukpm/events{/privacy}", "received_events_url": "https://api.github.com/users/zhukpm/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This seems like a duplicate of https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535 ", "Yeah, it is. Thanks" ]
1,706
1,706
null
NONE
null
### System Info Linux 20.04.1-Ubuntu x86_64 GNU/Linux Python 3.10.12 transformers==4.37.1 torch==2.1.2+cu121 GPU A100 NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction We run an inference with a CausalLM model, providing the same text, but in different batches. One of the batches is of size `1`, and the other - of size `> 1`. Output logits differ *slightly* for the same input sequence. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed # MODEL_ID = 'mistralai/Mistral-7B-Instruct-v0.2' MODEL_ID = 'facebook/opt-350m' model = AutoModelForCausalLM.from_pretrained( MODEL_ID, torch_dtype=torch.bfloat16, device_map='auto', return_dict=True ) model.eval() tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) tokenizer.pad_token = tokenizer.eos_token model.config.pad_token_id = tokenizer.pad_token_id batches = [ ['hello, world'], ['hello, world', 'hello', 'world'] ] tokenized = [tokenizer(b, padding='longest', return_tensors='pt').to(model.device) for b in batches] assert (tokenized[0]['input_ids'][0] == tokenized[1]['input_ids'][0]).all().item() set_seed(0) with torch.inference_mode(): logits = [model(**t).logits for t in tokenized] assert torch.allclose(logits[0][0], logits[1][0], atol=1e-3) ``` ### Expected behavior Output logits should be the same (at least very close to other) regardless of the batch size. Note that we observe this problem only with `torch.float16` and `torch.bfloat16` on GPUs. The code above works without errors - on CPUs - when using `float32` - when comparing batches of sizes e.g. 2 and 3: ```python batches = [ ['hello, world', 'hello'], ['hello, world', 'hello', 'world'] ] ``` So for some reason the problem occurs for half precision and `batch_size=1` only. I think that [this thread](https://discuss.huggingface.co/t/results-of-model-generate-are-different-for-different-batch-sizes-of-the-decode-only-model/34878) might be related somehow, but I'm not sure.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28732/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28731/comments
https://api.github.com/repos/huggingface/transformers/issues/28731/events
https://github.com/huggingface/transformers/issues/28731
2,102,480,997
I_kwDOCUB6oc59UVBl
28,731
torch.bfloat16 inference failed with RuntimeError: cutlassF: no kernel found to launch!
{ "login": "VINUK0", "id": 58259367, "node_id": "MDQ6VXNlcjU4MjU5MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/58259367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VINUK0", "html_url": "https://github.com/VINUK0", "followers_url": "https://api.github.com/users/VINUK0/followers", "following_url": "https://api.github.com/users/VINUK0/following{/other_user}", "gists_url": "https://api.github.com/users/VINUK0/gists{/gist_id}", "starred_url": "https://api.github.com/users/VINUK0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VINUK0/subscriptions", "organizations_url": "https://api.github.com/users/VINUK0/orgs", "repos_url": "https://api.github.com/users/VINUK0/repos", "events_url": "https://api.github.com/users/VINUK0/events{/privacy}", "received_events_url": "https://api.github.com/users/VINUK0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @VINUK0, thanks for raising this issue! \r\n\r\nI believe this is a torch and hardware issue, rather than something in the transformers library. \r\n\r\nOther people have reported issues between torch and sdp, when loading in bf16 on T4. See [this issue](https://github.com/Lightning-AI/lit-gpt/issues/327) and [comment](https://github.com/Lightning-AI/lit-gpt/issues/327#issuecomment-1664674460), which others said helped resolve their issue. \r\n\r\nOther related issues:\r\n* https://github.com/pytorch/pytorch/issues/102029#issuecomment-1560071148\r\n* https://github.com/fishaudio/fish-speech/issues/7\r\n\r\nI'd suggest trying to load your model in fp16 instead of bf16. \r\n\r\ncc @fxmarty for reference", "Thank you, I can reproduce the issue on a T4:\r\n\r\n```python\r\nimport torch\r\n\r\nquery = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device=\"cuda\")\r\nkey = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device=\"cuda\")\r\nvalue = torch.rand(32, 8, 128, 64, dtype=torch.bfloat16, device=\"cuda\")\r\n\r\nres = torch.nn.functional.scaled_dot_product_attention(query,key,value)\r\n```\r\n\r\nUsing `torch.backends.cuda.enable_mem_efficient_sdp(False)` fixes the issue (https://pytorch.org/docs/master/backends.html#torch.backends.cuda.enable_mem_efficient_sdp) and can be used as a temporary workaround. Alternatively, you can load the model with the `attn_implementation=\"eager\"` argument to avoid using SDPA: `model = AutoModel.from_pretrained(..., attn_implementation=\"eager\")`.\r\n\r\n@drisspg Do you think this is a bug? Should the auto-dispatch of SDPA automatically fall back on the math path in case it is used with a device not supporting bf16?", "This definitely was a bug, but should have been fixed with this PR: https://github.com/pytorch/pytorch/pull/116272\r\nDo you know which version you are on for PyTorch when you repoed above?", "This was with torch==2.1.2, and appears to be fixed in torch==2.2.0. @VINUK0 Please update to 2.2.0 or use `torch.backends.cuda.enable_mem_efficient_sdp(False)`. Thank you!" ]
1,706
1,706
1,706
NONE
null
### System Info ***[Environment Information]*** *GPU :* `Nvidia T4 (15GB)` *Python Version:* `3.10.12` *Pytorch Version :* `2.1.1 (CUDA 12.1 | 11.8) [Both same results.]` *Transformers Version :* `4.37.1` *Accelerate Version :* `0.26.1` *Optimum Version:* `1.16.2` *Bitsandbytes:* `0.42.0` ***[Task Information]*** *Model Type:* `Text-Generation` *Model Architecture:* `llama` *Attention Implementation:* `sdpa "flash attention 1"` *Model Load In Float16:* `True` *Model Load In Bfloat16:* `True` *Model Generate With Float16:* `True` *Model Generate With Bfloat16:* `False (RuntimeError: cutlassF: no kernel found to launch!)` ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ***[Replicate Requirements]*** *Load a pre trained model with `attention_Implementation="sdpa", torch_dtype=torch.bfloat16` to generate a sequence of tokens. It will show the error.* ### Expected behavior *(RuntimeError: cutlassF: no kernel found to launch)*
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28731/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28731/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28730/comments
https://api.github.com/repos/huggingface/transformers/issues/28730/events
https://github.com/huggingface/transformers/issues/28730
2,102,472,373
I_kwDOCUB6oc59US61
28,730
Freely Long-Thinking Transformer (FraiLT)
{ "login": "akbayt", "id": 11097700, "node_id": "MDQ6VXNlcjExMDk3NzAw", "avatar_url": "https://avatars.githubusercontent.com/u/11097700?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akbayt", "html_url": "https://github.com/akbayt", "followers_url": "https://api.github.com/users/akbayt/followers", "following_url": "https://api.github.com/users/akbayt/following{/other_user}", "gists_url": "https://api.github.com/users/akbayt/gists{/gist_id}", "starred_url": "https://api.github.com/users/akbayt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akbayt/subscriptions", "organizations_url": "https://api.github.com/users/akbayt/orgs", "repos_url": "https://api.github.com/users/akbayt/repos", "events_url": "https://api.github.com/users/akbayt/events{/privacy}", "received_events_url": "https://api.github.com/users/akbayt/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "HeyΒ @akbayt. Thanks a lot for opening this new model request and contributing to the HF ecosystem!Β πŸ€—\r\nWe have recently been trying to push forΒ `model on the hub`Β and have as much support as we can there. It will also be easier to integrate it. Here is aΒ [tutorial](https://huggingface.co/docs/transformers/custom_models)Β if that sound good to you!", "Hi @amyeroberts. Thanks for your quick response and guidance. As far as I can see, sharing the models on hub will make it easier for me and will add a slight work when someone wants to use it. This is still acceptable. However, the models I have trained so far are very modest in scale (you can see the details in the paper), and I am actively seeking support and resources to train larger models. I believe that having my model integrated into the πŸ€— transformers library, rather than sharing it on the hub, could increase its visibility within the community. In turn, it might help me find some support easier. Thanks again.", "The best way to help make your model visible is by making sure the model is easy to find (be on the hub), easy to use (has a detailed model card with code snippet users showing how to get started) and a space demo showing the model's capabilities. Many models have become popular that were just on the hub. For example, falcon and phi-2 started on the hub. " ]
1,706
1,706
null
NONE
null
### Model description Hi! I am the author of the following study: https://arxiv.org/abs/2401.11626 I want to add this model to 🀗 transformers. Implementation is in progress... πŸ‘¨β€πŸ’» *Abstract:* Freely Long-Thinking Transformer (FraiLT) is an improved transformer model designed to enhance processing capabilities without scaling up size. It utilizes a recursive approach, iterating over a subset of layers multiple times, and introduces iteration encodings to maintain awareness across these cycles. Iteration encoding allows FraiLT to achieve the interpretive depth of larger models in a compact form. When evaluated on a synthetic story dataset, FraiLT outperformed larger models, showcasing its ability to deliver high-quality performance while reducing memory demands. This model represents a step forward towards more efficient and accessible language models. ### Open source status - [X] The model explained in the paper ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2401.11626, https://www.academia.edu/113629981
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28730/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28729/comments
https://api.github.com/repos/huggingface/transformers/issues/28729/events
https://github.com/huggingface/transformers/issues/28729
2,102,351,194
I_kwDOCUB6oc59T1Va
28,729
Depth Estimation Pipeline docstrings are wrong
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Thanks @osanseviero ! Indeed, will open a PR to fix this" ]
1,706
1,706
1,706
MEMBER
null
The docstring is a paste from the image classification pipeline it seems https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/depth_estimation.py#L54-L83 . The correct output, as per https://huggingface.co/docs/transformers/main/tasks/monocular_depth_estimation, should be a dictionary with an image and a tensor cc @amyeroberts @ydshieh
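For reference, a hedged example of what the pipeline actually returns, following the linked task guide; the checkpoint below is just one depth estimation model used for illustration and is not named in this issue:

```python
# Example usage of the depth estimation pipeline; for a single image it returns
# a dict with a tensor under "predicted_depth" and a PIL image under "depth".
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
outputs = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
print(sorted(outputs.keys()))  # ['depth', 'predicted_depth']
```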
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28729/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28728/comments
https://api.github.com/repos/huggingface/transformers/issues/28728/events
https://github.com/huggingface/transformers/pull/28728
2,102,214,135
PR_kwDOCUB6oc5lKMGs
28,728
Unpin pydantic
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28728). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Unpin pydantic as no failure anymore (CircleCI, docker image build). Fix #27933
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28728/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28728", "html_url": "https://github.com/huggingface/transformers/pull/28728", "diff_url": "https://github.com/huggingface/transformers/pull/28728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28728.patch", "merged_at": 1706287173000 }
https://api.github.com/repos/huggingface/transformers/issues/28727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28727/comments
https://api.github.com/repos/huggingface/transformers/issues/28727/events
https://github.com/huggingface/transformers/pull/28727
2,102,090,556
PR_kwDOCUB6oc5lJxWz
28,727
Fix typo of `Block`.
{ "login": "xkszltl", "id": 5203025, "node_id": "MDQ6VXNlcjUyMDMwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xkszltl", "html_url": "https://github.com/xkszltl", "followers_url": "https://api.github.com/users/xkszltl/followers", "following_url": "https://api.github.com/users/xkszltl/following{/other_user}", "gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}", "starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions", "organizations_url": "https://api.github.com/users/xkszltl/orgs", "repos_url": "https://api.github.com/users/xkszltl/repos", "events_url": "https://api.github.com/users/xkszltl/events{/privacy}", "received_events_url": "https://api.github.com/users/xkszltl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28727). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Nice catch on checking on GH !", "Sounds good to me, added.", "BTW why 4.40?\r\nAny convention and consideration regarding the deprecation cycle?", "> BTW why 4.40? Any convention and consideration regarding the deprecation cycle?\r\n\r\nWe typically deprecate in 2 release cycles and this commit will be part of 4.38. ", "@xkszltl For the failing tests on CI, running `make fixup` and pushing the changes should resolve the issue and trigger a new CI run", "> @xkszltl For the failing tests on CI, running `make fixup` and pushing the changes should resolve the issue and trigger a new CI run\r\n\r\nWould be great if we can actually lint in CI (not just check) and print out the git diff.\r\nThis way we don't have to run anything locally and can make whatever change needed directly from the log.\r\n\r\nAnd that's also how I usually setup linter CI, based on observation that not everyone likes the extra step.", "> This way we don't have to run anything locally and can make whatever change needed directly from the log.\r\n\r\nIsn't having to look at a diff in the CI run and manually apply the changes more work than just having the linter run locally? ", "> Isn't having to look at a diff in the CI run and manually apply the changes more work than just having the linter run locally?\r\n\r\nGood question.\r\nMy opinion is: not always.\r\n- For people continuously developing on the project, yes, a simple linter command is the easiest.\r\n- For people occasionally joining, or currently focus on other projects, spending extra effort to setup another environment locally with possibly conflict dependencies is a lot more than copy-pasting diff from CI log.", "Fixed.\r\nAlso change to log only once or it may be too annoying." ]
1,706
1,706
1,706
CONTRIBUTOR
null
Models: - text models: @ArthurZucker and @younesbelkada Was introduced in: - https://github.com/huggingface/transformers/pull/27942
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28727/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28727", "html_url": "https://github.com/huggingface/transformers/pull/28727", "diff_url": "https://github.com/huggingface/transformers/pull/28727.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28727.patch", "merged_at": 1706541900000 }
https://api.github.com/repos/huggingface/transformers/issues/28726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28726/comments
https://api.github.com/repos/huggingface/transformers/issues/28726/events
https://github.com/huggingface/transformers/issues/28726
2,102,067,406
I_kwDOCUB6oc59SwDO
28,726
Correct way for Wav2vec2 feature extraction from huggingface like Fairseq
{ "login": "hungdinhxuan", "id": 79694464, "node_id": "MDQ6VXNlcjc5Njk0NDY0", "avatar_url": "https://avatars.githubusercontent.com/u/79694464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hungdinhxuan", "html_url": "https://github.com/hungdinhxuan", "followers_url": "https://api.github.com/users/hungdinhxuan/followers", "following_url": "https://api.github.com/users/hungdinhxuan/following{/other_user}", "gists_url": "https://api.github.com/users/hungdinhxuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/hungdinhxuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hungdinhxuan/subscriptions", "organizations_url": "https://api.github.com/users/hungdinhxuan/orgs", "repos_url": "https://api.github.com/users/hungdinhxuan/repos", "events_url": "https://api.github.com/users/hungdinhxuan/events{/privacy}", "received_events_url": "https://api.github.com/users/hungdinhxuan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @sanchit-gandhi @ylacombe " ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NVIDIA RTX 4090 - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch import fairseq from transformers import Wav2Vec2ForPreTraining, Wav2Vec2Config import torchaudio model_file = 'wav2vec_small.pt' model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([model_file]) original = model[0] waveform, _ = torchaudio.load('MMSTTS_ara_000008.wav') reference = original(waveform, mask=False, features_only=True)['x'] model_name = 'facebook/wav2vec2-base' config = Wav2Vec2Config.from_pretrained(model_name) w2vhf = Wav2Vec2ForPreTraining.from_pretrained(model_name, config=config) res = w2vhf(waveform, attention_mask=None, output_hidden_states=True).hidden_states[-1] torch.testing.assert_close(res, reference) ``` Error: ``` AssertionError: Tensor-likes are not close! Mismatched elements: 155126 / 155136 (100.0%) Greatest absolute difference: 5.448995590209961 at index (0, 140, 96) (up to 1e-05 allowed) Greatest relative difference: 89616.765625 at index (0, 6, 144) (up to 1.3e-06 allowed) ``` ### Expected behavior I am using the pre-trained wav2vec-base model downloaded from the fairseq GitHub. I expected the pre-trained model provided by Hugging Face to match the fairseq one, meaning the features extracted with fairseq and with Hugging Face should be as close as possible. What am I doing wrong, or is my feature extraction using Wav2Vec2ForPreTraining incorrect? Are the pre-trained model provided by Hugging Face and the pre-trained model from fairseq the same? Are there any differences between their model architectures?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28726/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28725/comments
https://api.github.com/repos/huggingface/transformers/issues/28725/events
https://github.com/huggingface/transformers/pull/28725
2,102,030,424
PR_kwDOCUB6oc5lJkLz
28,725
Fix `weights_only`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28725). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? The changes in #28506 are incorrect, and after them we still have an issue with torch < 1.13. This PR fixes the issue correctly. Fix #28720
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28725", "html_url": "https://github.com/huggingface/transformers/pull/28725", "diff_url": "https://github.com/huggingface/transformers/pull/28725.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28725.patch", "merged_at": 1706270449000 }
https://api.github.com/repos/huggingface/transformers/issues/28724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28724/comments
https://api.github.com/repos/huggingface/transformers/issues/28724/events
https://github.com/huggingface/transformers/pull/28724
2,101,948,592
PR_kwDOCUB6oc5lJTCc
28,724
Fix symbolic_trace with kv cache
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28724). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thank you @amyeroberts, added a test that fails on the current `main` due to the controlflow not being correctly captured." ]
1,706
1,706
1,706
COLLABORATOR
null
We should NOT trace models with 0-shaped concrete metas as we otherwise miss https://github.com/huggingface/transformers/blob/bb6aa8bc5ff8537f58c4b6ac80611101ba556226/src/transformers/modeling_attn_mask_utils.py#L162-L163 in the captured graph.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28724/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28724", "html_url": "https://github.com/huggingface/transformers/pull/28724", "diff_url": "https://github.com/huggingface/transformers/pull/28724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28724.patch", "merged_at": 1706777102000 }
https://api.github.com/repos/huggingface/transformers/issues/28723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28723/comments
https://api.github.com/repos/huggingface/transformers/issues/28723/events
https://github.com/huggingface/transformers/issues/28723
2,101,847,913
I_kwDOCUB6oc59R6dp
28,723
`UserWarning: TypedStorage is deprecated.` on loading `pytorch_model.bin` files from disk.
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @tomaarsen, thanks for raising this! \r\n\r\nIndeed, this shouldn't be happening. Looking into it", "Looking into it, this a consequence of https://github.com/huggingface/transformers/pull/27282 - when `weights_only=True`, then [_legacy_load is used by torch to load the state dict](https://github.com/pytorch/pytorch/blob/a72190fd51f19cbfb5c09ae3088729f94aef7141/torch/serialization.py#L1036). The error is thrown if the `storage_dtype` is type e.g. `torch.FloatStorage`.\r\n\r\ncc'ing in @Narsil who knows more about the tensors and how they're stored. In particular, in the warning message we have: \r\n\r\n```\r\n/data/ml/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n```\r\n\r\nWill this cause issue for use? I'm unsure what is meant by \"using storages directly\". ", "If it's not relevant for us (e.g. if we do not \"use storages directly\", then we could adopt this to avoid the warning: https://github.com/pytorch/pytorch/blob/a72190fd51f19cbfb5c09ae3088729f94aef7141/torch/storage.py#L475C9-L475C36\r\n", "No it's just using a legacy load for this particular file.\r\nNewer files can load perfectly with weights_only=True: https://github.com/pytorch/pytorch/blob/a72190fd51f19cbfb5c09ae3088729f94aef7141/torch/serialization.py#L1016\r\n\r\nThis should be fixed upstream, as I doubt it's our calls that trigger the warning." ]
1,706
1,707
null
MEMBER
null
### System Info - `transformers` version: 4.37.1 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.9.17 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModel model = AutoModel.from_pretrained("bert-base-uncased", use_safetensors=False) ``` This resulted in: ``` C:\Users\tom\.conda\envs\transformers\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() return self.fget.__get__(instance, owner)() ``` See also https://github.com/UKPLab/sentence-transformers/issues/2450 Notably, I do not get this warning at transformers v4.36.2. ### Expected behavior I don't expect any warnings from loading a model with `pytorch_model.bin`. - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28723/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28722/comments
https://api.github.com/repos/huggingface/transformers/issues/28722/events
https://github.com/huggingface/transformers/issues/28722
2,101,830,611
I_kwDOCUB6oc59R2PT
28,722
AWQ models including activation as previous operation seems broken
{ "login": "kevin3314", "id": 37268015, "node_id": "MDQ6VXNlcjM3MjY4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/37268015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevin3314", "html_url": "https://github.com/kevin3314", "followers_url": "https://api.github.com/users/kevin3314/followers", "following_url": "https://api.github.com/users/kevin3314/following{/other_user}", "gists_url": "https://api.github.com/users/kevin3314/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevin3314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevin3314/subscriptions", "organizations_url": "https://api.github.com/users/kevin3314/orgs", "repos_url": "https://api.github.com/users/kevin3314/repos", "events_url": "https://api.github.com/users/kevin3314/events{/privacy}", "received_events_url": "https://api.github.com/users/kevin3314/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,706
null
NONE
null
### System Info transformer==4.37.0 autoawq==0.16.0 ### Who can help? @SunMarc @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` >>> from transformers import AutoModelForCausalLM /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (2.1.0) or chardet (5.2.0) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " >>> model = AutoModelForCausalLM.from_pretrained("casperhansen/falcon-7b-awq", trust_remote_code=True) You have loaded an AWQ model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model. /usr/local/lib/python3.10/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() return self.fget.__get__(instance, owner)() Some weights of the model checkpoint at casperhansen/falcon-7b-awq were not used when initializing RWForCausalLM: ['transformer.h.0.mlp.act.scales', 'transformer.h.1.mlp.act.scales', 'transformer.h.10.mlp.act.scales', 'transformer.h.11.mlp.act.scales', 'transformer.h.12.mlp.act.scales', 'transformer.h.13.mlp.act.scales', 'transformer.h.14.mlp.act.scales', 'transformer.h.15.mlp.act.scales', 'transformer.h.16.mlp.act.scales', 'transformer.h.17.mlp.act.scales', 'transformer.h.18.mlp.act.scales', 'transformer.h.19.mlp.act.scales', 'transformer.h.2.mlp.act.scales', 'transformer.h.20.mlp.act.scales', 'transformer.h.21.mlp.act.scales', 'transformer.h.22.mlp.act.scales', 'transformer.h.23.mlp.act.scales', 'transformer.h.24.mlp.act.scales', 'transformer.h.25.mlp.act.scales', 'transformer.h.26.mlp.act.scales', 'transformer.h.27.mlp.act.scales', 'transformer.h.28.mlp.act.scales', 'transformer.h.29.mlp.act.scales', 'transformer.h.3.mlp.act.scales', 'transformer.h.30.mlp.act.scales', 'transformer.h.31.mlp.act.scales', 'transformer.h.4.mlp.act.scales', 'transformer.h.5.mlp.act.scales', 'transformer.h.6.mlp.act.scales', 'transformer.h.7.mlp.act.scales', 'transformer.h.8.mlp.act.scales', 'transformer.h.9.mlp.act.scales'] - This IS expected if you are initializing RWForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RWForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` ### Expected behavior Some weights of the model checkpoint at casperhansen/falcon-7b-awq were not used when initializing RWForCausalLM:... should not appear. I suspect that the root cause is that only Linear layers are replaced. This does not work if the precede is not Linear layer (e.g. Activation) and it is the case for falcon. https://github.com/huggingface/transformers/blob/8eb74c1c8961e3dc8549bb1a76463c7658a63d43/src/transformers/integrations/awq.py#L108
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28722/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28721/comments
https://api.github.com/repos/huggingface/transformers/issues/28721/events
https://github.com/huggingface/transformers/issues/28721
2,101,804,893
I_kwDOCUB6oc59Rv9d
28,721
Load an EncoderDecoderModel as AutoModel
{ "login": "Bachstelze", "id": 19904888, "node_id": "MDQ6VXNlcjE5OTA0ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bachstelze", "html_url": "https://github.com/Bachstelze", "followers_url": "https://api.github.com/users/Bachstelze/followers", "following_url": "https://api.github.com/users/Bachstelze/following{/other_user}", "gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions", "organizations_url": "https://api.github.com/users/Bachstelze/orgs", "repos_url": "https://api.github.com/users/Bachstelze/repos", "events_url": "https://api.github.com/users/Bachstelze/events{/privacy}", "received_events_url": "https://api.github.com/users/Bachstelze/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @Bachstelze, thanks for raising an issue! \r\n\r\nThe `EncoderDecoder` models are composite models which use `AutoModel` to load the encoder and decoder respectively. As per [the BertGeneration docs](https://github.com/huggingface/transformers/blob/bbe30c6968c66bc77f7f1f246e64743d74419770/docs/source/en/model_doc/bert-generation.md?plain=1#L89), you can load the model using:\r\n\r\n```py\r\nfrom transformers import EncoderDecoderModel\r\nmodel = EncoderDecoderModel.from_pretrained(\"Bachstelze/instructionRoberta-base\", output_attentions=True)\r\n```", "@amyeroberts Yes it is possible to load it as EncoderDecoderModel, though many libraries load generic models just with the Automodel, so EncoderDecoderModels yield an error." ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.35.0 - Platform: Linux-5.15.0-91-lowlatency-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Bachstelze/instructionRoberta-base") model = AutoModel.from_pretrained("Bachstelze/instructionRoberta-base", output_attentions=True) ### Expected behavior Load the EncoderDecoderModel as AutoModel. "BertGenerationConfig" is supported, though this seems outdated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28721/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28720/comments
https://api.github.com/repos/huggingface/transformers/issues/28720/events
https://github.com/huggingface/transformers/issues/28720
2,101,730,896
I_kwDOCUB6oc59Rd5Q
28,720
Current version 4.37.1 only match torch>=1.13.0, not torch > 1.11
{ "login": "StrivedTye", "id": 19620650, "node_id": "MDQ6VXNlcjE5NjIwNjUw", "avatar_url": "https://avatars.githubusercontent.com/u/19620650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StrivedTye", "html_url": "https://github.com/StrivedTye", "followers_url": "https://api.github.com/users/StrivedTye/followers", "following_url": "https://api.github.com/users/StrivedTye/following{/other_user}", "gists_url": "https://api.github.com/users/StrivedTye/gists{/gist_id}", "starred_url": "https://api.github.com/users/StrivedTye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StrivedTye/subscriptions", "organizations_url": "https://api.github.com/users/StrivedTye/orgs", "repos_url": "https://api.github.com/users/StrivedTye/repos", "events_url": "https://api.github.com/users/StrivedTye/events{/privacy}", "received_events_url": "https://api.github.com/users/StrivedTye/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ydshieh ", "Sorry, indeed, it's my bad\r\n\r\n#28506 didn't fix #27282 properly, I will open a PR", "@StrivedTye Fix is merged. I tried it and it works. Let me know if you have further issue with this. Thanks for reporting the issue." ]
1,706
1,706
1,706
NONE
null
When using torch<1.13.0, the current version (4.37.1) reports an OSError, because `torch.load()` in torch==1.12 does not have the `weights_only` keyword argument. This issue occurs in `modeling_utils.py`, line 533.
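A minimal sketch of the kind of version guard that avoids this (illustrative only, assuming `packaging` is available; this is not necessarily the exact fix merged for this issue):

```python
# Only pass `weights_only` to torch.load on torch >= 1.13, where the kwarg exists.
import torch
from packaging import version

def load_state_dict_compat(checkpoint_file, map_location="cpu"):
    kwargs = {"map_location": map_location}
    if version.parse(torch.__version__) >= version.parse("1.13"):
        kwargs["weights_only"] = True
    return torch.load(checkpoint_file, **kwargs)
```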
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28720/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28719/comments
https://api.github.com/repos/huggingface/transformers/issues/28719/events
https://github.com/huggingface/transformers/pull/28719
2,101,656,110
PR_kwDOCUB6oc5lIUOM
28,719
[`docs`] Update preprocessing.md
{ "login": "velaia", "id": 1515904, "node_id": "MDQ6VXNlcjE1MTU5MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1515904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/velaia", "html_url": "https://github.com/velaia", "followers_url": "https://api.github.com/users/velaia/followers", "following_url": "https://api.github.com/users/velaia/following{/other_user}", "gists_url": "https://api.github.com/users/velaia/gists{/gist_id}", "starred_url": "https://api.github.com/users/velaia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/velaia/subscriptions", "organizations_url": "https://api.github.com/users/velaia/orgs", "repos_url": "https://api.github.com/users/velaia/repos", "events_url": "https://api.github.com/users/velaia/events{/privacy}", "received_events_url": "https://api.github.com/users/velaia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @velaia, thanks for contributing to improving our docs! \r\n\r\nCould you combine the changes in the PR with the others in #28718? ", "Hi @amyeroberts , combining was what I originally had in mind but didn't know how. I think I did it somehow by making the changes in my fork of the repo. Can you check pls?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28719). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
adjust ImageProcessor link to working target (same as in lower section of file) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28719/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28719", "html_url": "https://github.com/huggingface/transformers/pull/28719", "diff_url": "https://github.com/huggingface/transformers/pull/28719.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28719.patch", "merged_at": 1706270337000 }
https://api.github.com/repos/huggingface/transformers/issues/28718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28718/comments
https://api.github.com/repos/huggingface/transformers/issues/28718/events
https://github.com/huggingface/transformers/pull/28718
2,101,633,403
PR_kwDOCUB6oc5lIPa8
28,718
Update preprocessing.md
{ "login": "velaia", "id": 1515904, "node_id": "MDQ6VXNlcjE1MTU5MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1515904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/velaia", "html_url": "https://github.com/velaia", "followers_url": "https://api.github.com/users/velaia/followers", "following_url": "https://api.github.com/users/velaia/following{/other_user}", "gists_url": "https://api.github.com/users/velaia/gists{/gist_id}", "starred_url": "https://api.github.com/users/velaia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/velaia/subscriptions", "organizations_url": "https://api.github.com/users/velaia/orgs", "repos_url": "https://api.github.com/users/velaia/repos", "events_url": "https://api.github.com/users/velaia/events{/privacy}", "received_events_url": "https://api.github.com/users/velaia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
fixed link to old version of documentation # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28718/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28718", "html_url": "https://github.com/huggingface/transformers/pull/28718", "diff_url": "https://github.com/huggingface/transformers/pull/28718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28718.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28717/comments
https://api.github.com/repos/huggingface/transformers/issues/28717/events
https://github.com/huggingface/transformers/pull/28717
2,101,603,477
PR_kwDOCUB6oc5lIJQO
28,717
Initialize _tqdm_active with hf_hub_utils.are_progress_bars_disabled(…
{ "login": "ShukantPal", "id": 22450567, "node_id": "MDQ6VXNlcjIyNDUwNTY3", "avatar_url": "https://avatars.githubusercontent.com/u/22450567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShukantPal", "html_url": "https://github.com/ShukantPal", "followers_url": "https://api.github.com/users/ShukantPal/followers", "following_url": "https://api.github.com/users/ShukantPal/following{/other_user}", "gists_url": "https://api.github.com/users/ShukantPal/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShukantPal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShukantPal/subscriptions", "organizations_url": "https://api.github.com/users/ShukantPal/orgs", "repos_url": "https://api.github.com/users/ShukantPal/repos", "events_url": "https://api.github.com/users/ShukantPal/events{/privacy}", "received_events_url": "https://api.github.com/users/ShukantPal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28717). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? …) to respect HF_HUB_DISABLE_PROGRESS_BARS It seems like enable_progress_bar() and disable_progress_bar() sync up with huggingface_hub, but the initial value is always True. This changes will make sure the user's preference is respected implicity on initialization. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
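A minimal sketch of the initialization pattern this PR body describes, assuming the progress-bar helpers exposed by `huggingface_hub.utils` (`are_progress_bars_disabled`, `enable_progress_bars`, `disable_progress_bars`); the real module layout inside `transformers.utils.logging` may differ:

```python
# Hedged sketch: derive the initial tqdm state from the user's
# HF_HUB_DISABLE_PROGRESS_BARS preference instead of hardcoding True.
import huggingface_hub.utils as hf_hub_utils

_tqdm_active = not hf_hub_utils.are_progress_bars_disabled()


def enable_progress_bar():
    """Enable tqdm progress bars and keep huggingface_hub in sync."""
    global _tqdm_active
    _tqdm_active = True
    hf_hub_utils.enable_progress_bars()


def disable_progress_bar():
    """Disable tqdm progress bars and keep huggingface_hub in sync."""
    global _tqdm_active
    _tqdm_active = False
    hf_hub_utils.disable_progress_bars()
```

With this, a user who exports `HF_HUB_DISABLE_PROGRESS_BARS=1` gets quiet output from the first import, without having to call `disable_progress_bar()` themselves.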
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28717", "html_url": "https://github.com/huggingface/transformers/pull/28717", "diff_url": "https://github.com/huggingface/transformers/pull/28717.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28717.patch", "merged_at": 1706270374000 }
https://api.github.com/repos/huggingface/transformers/issues/28716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28716/comments
https://api.github.com/repos/huggingface/transformers/issues/28716/events
https://github.com/huggingface/transformers/issues/28716
2,101,597,907
I_kwDOCUB6oc59Q9bT
28,716
PermissionError occurs when calling Trainer.train() using transformers
{ "login": "Mickls", "id": 41884581, "node_id": "MDQ6VXNlcjQxODg0NTgx", "avatar_url": "https://avatars.githubusercontent.com/u/41884581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mickls", "html_url": "https://github.com/Mickls", "followers_url": "https://api.github.com/users/Mickls/followers", "following_url": "https://api.github.com/users/Mickls/following{/other_user}", "gists_url": "https://api.github.com/users/Mickls/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mickls/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mickls/subscriptions", "organizations_url": "https://api.github.com/users/Mickls/orgs", "repos_url": "https://api.github.com/users/Mickls/repos", "events_url": "https://api.github.com/users/Mickls/events{/privacy}", "received_events_url": "https://api.github.com/users/Mickls/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @Mickls, thanks for raising this issue! \r\n\r\nA fix was merged in #28637. Could you try installing transformers from source and see if this resolves the issue for you? \r\n\r\n`pip install git+https://github.com/huggingface/transformers`", "> Hi @Mickls, thanks for raising this issue!\r\n> \r\n> A fix was merged in #28637. Could you try installing transformers from source and see if this resolves the issue for you?\r\n> \r\n> `pip install git+https://github.com/huggingface/transformers`\r\n\r\nYes it works fine now, thanks" ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.37.1 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @muellerzr @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The error code block comes from the official example https://huggingface.co/docs/transformers/tasks/sequence_classification The following is the specific code ```python training_args = TrainingArguments( output_dir="my_awesome_model", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, push_to_hub=False, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_imdb["train"], eval_dataset=tokenized_imdb["test"], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) trainer.train() ``` ### Expected behavior In trainer.py:2418 line `fd = os.open(output_dir, os.O_RDONLY)`, if the windows system tries to open a folder, a PermissionError exception will be triggered, so this piece of code will cause the train function to save the trained model to be interrupted.If possible, I hope you can be compatible with windows platform
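A hedged reproduction of the failure mode described in this issue, plus one possible guard; the actual fix landed in #28637 and may differ. The directory name is just the example `output_dir` from the report above:

```python
import os

output_dir = "my_awesome_model"  # checkpoint directory from the report above
os.makedirs(output_dir, exist_ok=True)

try:
    # On Linux this opens the directory so its entry can be fsync'ed after a
    # checkpoint save; on Windows, os.open() on a directory raises
    # PermissionError, which is what interrupts Trainer.train() here.
    fd = os.open(output_dir, os.O_RDONLY)
    os.fsync(fd)
    os.close(fd)
except PermissionError:
    # Windows exposes no directory handle through os.open, so skipping the
    # directory sync is one acceptable workaround on that platform.
    pass
```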
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28716/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28715/comments
https://api.github.com/repos/huggingface/transformers/issues/28715/events
https://github.com/huggingface/transformers/pull/28715
2,101,332,854
PR_kwDOCUB6oc5lHXhs
28,715
[docs] Fix datasets in guides
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
MEMBER
null
An issue was raised at https://github.com/huggingface/datasets/issues/6605 that the ELI5 dataset is no longer accessible, impacting the causal/masked language modeling guides. This PR replaces it with the [ELI5-Category](https://huggingface.co/datasets/eli5_category) dataset, which should work fine as a drop-in replacement.
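A hedged sketch of the drop-in swap described above; the split name and column layout follow the `eli5_category` dataset card and may not match the updated guide code exactly:

```python
from datasets import load_dataset

# Previously the guides loaded the (now inaccessible) "eli5" dataset, e.g.:
# eli5 = load_dataset("eli5", split="train_asks[:5000]")
eli5 = load_dataset("eli5_category", split="train[:5000]")
eli5 = eli5.train_test_split(test_size=0.2)
print(eli5["train"][0]["title"])
```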
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28715/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28715", "html_url": "https://github.com/huggingface/transformers/pull/28715", "diff_url": "https://github.com/huggingface/transformers/pull/28715.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28715.patch", "merged_at": 1706290147000 }
https://api.github.com/repos/huggingface/transformers/issues/28714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28714/comments
https://api.github.com/repos/huggingface/transformers/issues/28714/events
https://github.com/huggingface/transformers/issues/28714
2,101,234,552
I_kwDOCUB6oc59Pkt4
28,714
Models with a sentencepiece tokenizers have problems with special tokens and encode decode
{ "login": "ekgren", "id": 1921821, "node_id": "MDQ6VXNlcjE5MjE4MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1921821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekgren", "html_url": "https://github.com/ekgren", "followers_url": "https://api.github.com/users/ekgren/followers", "following_url": "https://api.github.com/users/ekgren/following{/other_user}", "gists_url": "https://api.github.com/users/ekgren/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekgren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekgren/subscriptions", "organizations_url": "https://api.github.com/users/ekgren/orgs", "repos_url": "https://api.github.com/users/ekgren/repos", "events_url": "https://api.github.com/users/ekgren/events{/privacy}", "received_events_url": "https://api.github.com/users/ekgren/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "#26678 fixed this, I can't push everything to the hub but Llama tokenizer will have a fix soon. \r\nThis is a duplicate of #26455", "Thank you for all the hard work @ArthurZucker, closing this issue then!" ]
1,706
1,706
1,706
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1vujbKaRkIpk7qli7eUKAZQDRksHSRW51?usp=sharing ### Expected behavior Huggingface tokenizers with sentencepiece in the back have inconsistent encoding decoding behaviour. If you encode and decode a string with special characters white spaces are inserted. Expected behaviour would be to get the exact same string back. This is both present with the Llama2 tokenizer, the gpt-sw3 tokenizers and more
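A hedged minimal reproduction of the round-trip mismatch described above; the checkpoint name (gated, any sentencepiece-based checkpoint should do), the special token used, and the exact decoded output are assumptions and depend on the tokenizer version:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

text = "Hello</s>world"  # a string that contains a special token
ids = tokenizer.encode(text, add_special_tokens=False)
roundtrip = tokenizer.decode(ids, skip_special_tokens=False)

# Expected: roundtrip == text
# Reported behavior: extra whitespace appears around the special token.
print(repr(text))
print(repr(roundtrip))
```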
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28714/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28713/comments
https://api.github.com/repos/huggingface/transformers/issues/28713/events
https://github.com/huggingface/transformers/pull/28713
2,100,928,500
PR_kwDOCUB6oc5lGAZA
28,713
Add FlashAttention2 for XLM-RoBERTa
{ "login": "DavidAfonsoValente", "id": 74915610, "node_id": "MDQ6VXNlcjc0OTE1NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/74915610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidAfonsoValente", "html_url": "https://github.com/DavidAfonsoValente", "followers_url": "https://api.github.com/users/DavidAfonsoValente/followers", "following_url": "https://api.github.com/users/DavidAfonsoValente/following{/other_user}", "gists_url": "https://api.github.com/users/DavidAfonsoValente/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidAfonsoValente/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidAfonsoValente/subscriptions", "organizations_url": "https://api.github.com/users/DavidAfonsoValente/orgs", "repos_url": "https://api.github.com/users/DavidAfonsoValente/repos", "events_url": "https://api.github.com/users/DavidAfonsoValente/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidAfonsoValente/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@DavidAfonsoValente Thanks for opening a PR and adding FA2 for this model! Let us know when the PR is ready to be reviewed πŸ€— ", "> @DavidAfonsoValente Thanks for opening a PR and adding FA2 for this model! Let us know when the PR is ready to be reviewed πŸ€—\r\n\r\nThanks! I believe it's ready to be reviewed.", "The tests that are failing are due to the changes made in the functions that are labeled as:\r\n Copied from transformers.models.roberta.modeling_roberta.RobertaSelfAttention with Roberta->XLMRoberta\r\n\r\nWhat is the standard procedure when I have made changes to these functions in order to accommodate the new feature?\r\n", "@DavidAfonsoValente If you click on the CI runs, you'll see the error messages relating to the code quality failures. These detail how to resolve the issue: run `make fix-copies` and push the changes. \r\n\r\nIf another model is copying this model's attention classes, then it's great because you're adding FA2 for two models πŸ₯³ You just need to apply the equivalent changes to the model that's copying. ", "Hello there! \r\n\r\nI'm working on integrating scaled_dot_product_attention to BERT #28802, and there might be some merge conflicts with this change. \r\n\r\nMost of the changes I'm making will propagate through to XML-RoBERTa (through fix-copies). From what I can, our approaches are a bit different from this in 2 areas:\r\n1. I've kept the `#Copies from` line at the class level, so any changes to BERT's init() will be propagated through to the fix-copies classes. I've modified the `#Copies from` line to use the corresponding ATTENTION_CLASS as necessary.\r\n2. I've created a BertSdpaSelfAttention to go with BertSelfAttention, instead of creating a BertSdpaAttention. The rationale behind this is that the code in BertSdpaSelfAttention is adapted from the code in BertSelfAttention, so I figured it's better to name it as such.\r\n\r\nLet me know if you have any questions about these. I think we should discuss about which approach is better and adopt it. Would you mind looking through #28802 and let me know what you think?", "> Hello there!\r\n> \r\n> I'm working on integrating scaled_dot_product_attention to BERT #28802, and there might be some merge conflicts with this change.\r\n> \r\n> Most of the changes I'm making will propagate through to XML-RoBERTa (through fix-copies). From what I can, our approaches are a bit different from this in 2 areas:\r\n> \r\n> 1. I've kept the `#Copies from` line at the class level, so any changes to BERT's init() will be propagated through to the fix-copies classes. I've modified the `#Copies from` line to use the corresponding ATTENTION_CLASS as necessary.\r\n> 2. I've created a BertSdpaSelfAttention to go with BertSelfAttention, instead of creating a BertSdpaAttention. The rationale behind this is that the code in BertSdpaSelfAttention is adapted from the code in BertSelfAttention, so I figured it's better to name it as such.\r\n> \r\n> Let me know if you have any questions about these. I think we should discuss about which approach is better and adopt it. Would you mind looking through #28802 and let me know what you think?\r\n\r\nIt seems like your implementation doesnt create complicated conflitcts, it should be simple to merge, what do you think?", "It should be simple to merge. 
I think the main question is whether or not we want to use the XLM_ROBERTA_SELF_ATTENTION_CLASSES or XLM_ROBERTA_ATTENTION_CLASSES approach?", "I think in order to be consistent with other models we should keep the name XLM_ROBERTA_ATTENTION_CLASSES since this is how other models have their attention classes named.", "Thanks for your input.\r\n\r\nI am leaning towards using the \"SelfAttention\" convention in this case, because XLMRobertaFlashAttention2 actually tries to copy the logic of both XLMRobertaAttention and XLMRobertaSelfAttention into one. I think it's cleaner to have a XLMRobertaSelfFlashAttention2 that mirrors XLMRobertaSelfAttention, and then reuse the logic inside XLMRobertaAttention for both types of self attentions.\r\n\r\nReusing the code in XLMRobertaAttention should help avoid some bugs. Judging from the existing code, I think there's already a bit of an issue with the way self.output() is called in XLMRobertaFlashAttention2 (I think you need more than the call to self.output.dense(), otherwise you're missing the dropout and LayerNorm).\r\n", "Okk, maybe it'll be easier if you merge your PR and I work on top of your version.", "Thanks :thumbsup: I'll iterate as fast as possible to get it merged, and will let you know as soon as it's done!" ]
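A hedged, self-contained illustration of the dispatch-dict convention being debated in the thread above; the class names are hypothetical stand-ins rather than the real XLM-RoBERTa modules, and the final naming (SelfAttention-level vs. Attention-level classes) was still open at this point:

```python
import torch
import torch.nn as nn


class EagerSelfAttention(nn.Module):
    """Hypothetical stand-in for XLMRobertaSelfAttention."""

    def __init__(self, hidden_size):
        super().__init__()
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)

    def forward(self, hidden_states):
        # Real attention math omitted; only the dispatch pattern matters here.
        return self.qkv(hidden_states)[..., : hidden_states.shape[-1]]


class FlashSelfAttention2(EagerSelfAttention):
    """Hypothetical stand-in for a flash-attention backend."""


SELF_ATTENTION_CLASSES = {
    "eager": EagerSelfAttention,
    "flash_attention_2": FlashSelfAttention2,
}


class Attention(nn.Module):
    def __init__(self, hidden_size, attn_implementation="eager"):
        super().__init__()
        # Selecting only the *self*-attention backend keeps the output
        # projection, dropout and LayerNorm shared across implementations,
        # which addresses the self.output() concern raised above.
        self.self = SELF_ATTENTION_CLASSES[attn_implementation](hidden_size)
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(0.1)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states):
        attn_out = self.self(hidden_states)
        return self.norm(hidden_states + self.dropout(self.dense(attn_out)))
```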
1,706
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27957 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28713", "html_url": "https://github.com/huggingface/transformers/pull/28713", "diff_url": "https://github.com/huggingface/transformers/pull/28713.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28713.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28712/comments
https://api.github.com/repos/huggingface/transformers/issues/28712/events
https://github.com/huggingface/transformers/pull/28712
2,100,919,565
PR_kwDOCUB6oc5lF-gR
28,712
Stop confusing the TF compiler with ModelOutput objects
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts sorry, let me explain what I meant with the model output being 'used'!\r\n\r\nThe issue occurs when the main `BlipModel` calls the `BlipText` sub-model, and the sub-model returns a `ModelOutput` which the main `BlipModel` reads values from. As far as I can tell, the issue does **not** arise if the `BlipText` model is called directly. So this seems to be some issue with the TF compiler not really understanding what kind of object a `ModelOutput` is when it has to compile through one inside a model `call()`.", "I ran the slow tests and all passed!" ]
1,706
1,706
1,706
MEMBER
null
The `test_saved_model_creation` test was failing for BLIP with the rather unusual symptom that the `loss` key of one of the intermediate outputs had transformed into a strange generator object. I still don't know **why** this happened, but it requires the following: 1) TF compilation (doesn't happen in eager mode) 2) `ModelOutput` dicts with `loss` as the first, optional key 3) The `ModelOutput` dicts have to be returned internally and then used in a subsequent step, rather than returned as the last step of the outermost model Since this is a nightmare zone, I'm going to work around the issue by just setting `return_dict` to `False` when calling the sub-model and get the tensors we need from the output tuple instead. This should be invisible for our users! I also slipped a quick fix into the loss calculation to avoid potential NaNs from passing negative labels to one of the built-in TF loss functions. Even though all the negative-label positions should be masked, NaNs tend to persist (because `nan * 0 == nan`).
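A hedged sketch of the masked-loss detail mentioned at the end of this description ("nan * 0 == nan"); the shapes and the -100 ignore index are assumptions, not the exact BLIP loss code:

```python
import tensorflow as tf

logits = tf.random.normal((2, 5, 10))           # (batch, seq_len, vocab)
labels = tf.constant([[1, 2, -100, 4, -100],
                      [0, -100, 3, 3, 1]])      # -100 marks masked positions

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction="none"
)

mask = tf.cast(labels != -100, tf.float32)
# Clamp negative labels to a valid index *before* calling the loss; a NaN
# produced inside the loss would survive the later multiplication by zero.
safe_labels = tf.where(labels < 0, tf.zeros_like(labels), labels)

per_token = loss_fn(safe_labels, logits) * mask
loss = tf.reduce_sum(per_token) / tf.maximum(tf.reduce_sum(mask), 1.0)
```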
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28712/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28712", "html_url": "https://github.com/huggingface/transformers/pull/28712", "diff_url": "https://github.com/huggingface/transformers/pull/28712.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28712.patch", "merged_at": 1706271750000 }
https://api.github.com/repos/huggingface/transformers/issues/28711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28711/comments
https://api.github.com/repos/huggingface/transformers/issues/28711/events
https://github.com/huggingface/transformers/pull/28711
2,100,880,202
PR_kwDOCUB6oc5lF2Au
28,711
[WIP] Improve multimodal processors - rely less on kwargs
{ "login": "molbap", "id": 39954772, "node_id": "MDQ6VXNlcjM5OTU0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/molbap", "html_url": "https://github.com/molbap", "followers_url": "https://api.github.com/users/molbap/followers", "following_url": "https://api.github.com/users/molbap/following{/other_user}", "gists_url": "https://api.github.com/users/molbap/gists{/gist_id}", "starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/molbap/subscriptions", "organizations_url": "https://api.github.com/users/molbap/orgs", "repos_url": "https://api.github.com/users/molbap/repos", "events_url": "https://api.github.com/users/molbap/events{/privacy}", "received_events_url": "https://api.github.com/users/molbap/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28711). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR aims at a better control on the logic flow through `Processor` classes, in particular those leveraging `ImageProcessor` with a `Tokenizer`. Linked with #27768. `ImageProcessors` compared to `Nougat` (as a reference point) have different signatures in their preprocess. One can list them here ``` TvltImageProcessor: videos, patch_size, crop_size, do_center_crop, is_mixed, num_frames IdeficsImageProcessor: transform, image_num_channels, image_size ViTImageProcessor: No difference in args Mask2FormerImageProcessor: segmentation_maps, ignore_index, size_divisor, reduce_labels, instance_id_to_semantic_id MaskFormerImageProcessor: segmentation_maps, ignore_index, size_divisor, do_reduce_labels, instance_id_to_semantic_id YolosImageProcessor: format, return_segmentation_masks, annotations, masks_path MobileNetV1ImageProcessor: do_center_crop, crop_size DeiTImageProcessor: do_center_crop, crop_size EfficientNetImageProcessor: include_top, do_center_crop, rescale_offset, crop_size BeitImageProcessor: do_reduce_labels, do_center_crop, segmentation_maps, crop_size MobileViTImageProcessor: do_flip_channel_order, do_center_crop, segmentation_maps, crop_size PerceiverImageProcessor: do_center_crop, crop_size DeformableDetrImageProcessor: format, return_segmentation_masks, annotations, masks_path EfficientFormerImageProcessor: do_center_crop, crop_size SegformerImageProcessor: do_reduce_labels, segmentation_maps LayoutLMv2ImageProcessor: apply_ocr, ocr_lang, tesseract_config BridgeTowerImageProcessor: do_center_crop, size_divisor SamImageProcessor: segmentation_maps, pad_size, do_convert_rgb, mask_pad_size, mask_size BlipImageProcessor: do_convert_rgb Owlv2ImageProcessor: No difference in args LayoutLMv3ImageProcessor: apply_ocr, ocr_lang, tesseract_config DetaImageProcessor: format, return_segmentation_masks, annotations, masks_path BitImageProcessor: do_center_crop, do_convert_rgb, crop_size ViTHybridImageProcessor: do_center_crop, do_convert_rgb, crop_size FuyuImageProcessor: patch_size, padding_mode, padding_value PvtImageProcessor: No difference in args Pix2StructImageProcessor: max_patches, header_text, do_convert_rgb, patch_size VitMatteImageProcessor: trimaps, size_divisibility VideoMAEImageProcessor: videos, do_center_crop, crop_size MobileNetV2ImageProcessor: do_center_crop, crop_size OneFormerImageProcessor: segmentation_maps, ignore_index, task_inputs, do_reduce_labels, instance_id_to_semantic_id FlavaImageProcessor: crop_size, codebook_crop_size, codebook_rescale_factor, mask_group_max_patches, mask_group_min_patches, mask_group_max_aspect_ratio, codebook_image_mean, codebook_do_resize, return_image_mask, input_size_patches, codebook_do_center_crop, codebook_resample, mask_group_min_aspect_ratio, codebook_do_normalize, codebook_do_map_pixels, return_codebook_pixels, codebook_image_std, do_center_crop, codebook_size, codebook_do_rescale, total_mask_patches DonutImageProcessor: random_padding TvpImageProcessor: videos, crop_size, constant_values, do_flip_channel_order, do_center_crop, pad_size, pad_mode GLPNImageProcessor: size_divisor PoolFormerImageProcessor: crop_pct, do_center_crop, crop_size CLIPImageProcessor: do_center_crop, do_convert_rgb, crop_size DPTImageProcessor: ensure_multiple_of, keep_aspect_ratio, size_divisor ViltImageProcessor: size_divisor Swin2SRImageProcessor: pad_size ImageGPTImageProcessor: clusters, do_color_quantize SiglipImageProcessor: No difference in args VivitImageProcessor: videos, do_center_crop, offset, crop_size 
ConvNextImageProcessor: crop_pct OwlViTImageProcessor: do_center_crop, crop_size ChineseCLIPImageProcessor: do_center_crop, do_convert_rgb, crop_size LevitImageProcessor: do_center_crop, crop_size ConditionalDetrImageProcessor: format, return_segmentation_masks, annotations, masks_path DetrImageProcessor: format, return_segmentation_masks, annotations, masks_path ``` This helps standardize a bit in the first place, and then, will allow uniformizing `Processors`. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts
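A hedged sketch of how a signature diff like the listing above can be produced; using Nougat as the reference point follows the PR description, but the exact classes compared and the report format are assumptions:

```python
import inspect

from transformers import CLIPImageProcessor, DonutImageProcessor, NougatImageProcessor

# Parameters of the reference preprocess() signature.
reference = set(inspect.signature(NougatImageProcessor.preprocess).parameters)

for cls in (CLIPImageProcessor, DonutImageProcessor):
    extra = set(inspect.signature(cls.preprocess).parameters) - reference
    print(f"{cls.__name__}: {', '.join(sorted(extra)) or 'No difference in args'}")
```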
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28711/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28711/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28711", "html_url": "https://github.com/huggingface/transformers/pull/28711", "diff_url": "https://github.com/huggingface/transformers/pull/28711.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28711.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28710/comments
https://api.github.com/repos/huggingface/transformers/issues/28710/events
https://github.com/huggingface/transformers/pull/28710
2,100,865,170
PR_kwDOCUB6oc5lFyww
28,710
Flash Attention 2 for XLM-RoBERTa
{ "login": "DavidAfonsoValente", "id": 74915610, "node_id": "MDQ6VXNlcjc0OTE1NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/74915610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidAfonsoValente", "html_url": "https://github.com/DavidAfonsoValente", "followers_url": "https://api.github.com/users/DavidAfonsoValente/followers", "following_url": "https://api.github.com/users/DavidAfonsoValente/following{/other_user}", "gists_url": "https://api.github.com/users/DavidAfonsoValente/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidAfonsoValente/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidAfonsoValente/subscriptions", "organizations_url": "https://api.github.com/users/DavidAfonsoValente/orgs", "repos_url": "https://api.github.com/users/DavidAfonsoValente/repos", "events_url": "https://api.github.com/users/DavidAfonsoValente/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidAfonsoValente/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27957 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28710", "html_url": "https://github.com/huggingface/transformers/pull/28710", "diff_url": "https://github.com/huggingface/transformers/pull/28710.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28710.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28709/comments
https://api.github.com/repos/huggingface/transformers/issues/28709/events
https://github.com/huggingface/transformers/pull/28709
2,100,726,140
PR_kwDOCUB6oc5lFUmE
28,709
Don't fail when `LocalEntryNotFoundError` is raised during `processor_config.json` loading
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28709). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Fix #28697.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28709", "html_url": "https://github.com/huggingface/transformers/pull/28709", "diff_url": "https://github.com/huggingface/transformers/pull/28709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28709.patch", "merged_at": 1706256153000 }
https://api.github.com/repos/huggingface/transformers/issues/28708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28708/comments
https://api.github.com/repos/huggingface/transformers/issues/28708/events
https://github.com/huggingface/transformers/pull/28708
2,100,722,330
PR_kwDOCUB6oc5lFTx2
28,708
Fixed NLL with label_smoothing to plain NLL
{ "login": "nileshkokane01", "id": 8201108, "node_id": "MDQ6VXNlcjgyMDExMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nileshkokane01", "html_url": "https://github.com/nileshkokane01", "followers_url": "https://api.github.com/users/nileshkokane01/followers", "following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}", "gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}", "starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions", "organizations_url": "https://api.github.com/users/nileshkokane01/orgs", "repos_url": "https://api.github.com/users/nileshkokane01/repos", "events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}", "received_events_url": "https://api.github.com/users/nileshkokane01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@younesbelkada ,\r\nCan you please review the changes? ", "@younesbelkada ,\r\nI rebased and resolved the conflict. I hope its a right way, or else let me know.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @younesbelkada ", "@ArthurZucker IMO this is not really breaking, it is even the opposite as it fixes some subtle bugs with respect to training with BLIP- see @NielsRogge ' comment here: https://github.com/huggingface/transformers/issues/28167#issuecomment-1867398976", "thinking a bit about it, indeed maybe we should make that configurable through a variable in the config so that potentially users could revert to original behaviour if needed. \r\n@nileshkokane01 would be happy to adjust the PR accordingly? You just need to add a new variable `label_smoothing` in the blip config class and set it to 0", "@younesbelkada sure! I'll do that.", "thank you @nileshkokane01 !", "@younesbelkada ,\r\n\r\nDo I have to change the nll loss to the following as well: \r\n\r\n`\r\nloss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=self.config.lable_smoothing)`", "hi @nileshkokane01 \r\nYes please, this sounds great", "@younesbelkada ,\r\ncan you have a look ? " ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? This PR fixes #28167 by making label_smoothing= 0 . <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28167 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younesbelkada Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
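A hedged sketch of the configurable variant discussed in the review comments above (exposing `label_smoothing` on the BLIP text config with a default of 0); the class and attribute names are illustrative, not the merged diff:

```python
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class BlipTextConfigSketch:
    """Illustrative stand-in for the BLIP text config with the new field."""

    vocab_size: int = 30524
    # 0.0 gives plain NLL (the fix); setting 0.1 would reproduce the original
    # label-smoothed behaviour for anyone who needs to revert.
    label_smoothing: float = 0.0


config = BlipTextConfigSketch()
loss_fct = nn.CrossEntropyLoss(reduction="mean", label_smoothing=config.label_smoothing)

logits = torch.randn(4, config.vocab_size)
labels = torch.randint(0, config.vocab_size, (4,))
loss = loss_fct(logits, labels)
```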
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28708/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28708", "html_url": "https://github.com/huggingface/transformers/pull/28708", "diff_url": "https://github.com/huggingface/transformers/pull/28708.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28708.patch", "merged_at": 1708390335000 }
https://api.github.com/repos/huggingface/transformers/issues/28707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28707/comments
https://api.github.com/repos/huggingface/transformers/issues/28707/events
https://github.com/huggingface/transformers/issues/28707
2,100,555,786
I_kwDOCUB6oc59M_AK
28,707
Mixtral 8x7B GPU memory usage keeps increasing during inference
{ "login": "oroojlooy", "id": 20797260, "node_id": "MDQ6VXNlcjIwNzk3MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/20797260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oroojlooy", "html_url": "https://github.com/oroojlooy", "followers_url": "https://api.github.com/users/oroojlooy/followers", "following_url": "https://api.github.com/users/oroojlooy/following{/other_user}", "gists_url": "https://api.github.com/users/oroojlooy/gists{/gist_id}", "starred_url": "https://api.github.com/users/oroojlooy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oroojlooy/subscriptions", "organizations_url": "https://api.github.com/users/oroojlooy/orgs", "repos_url": "https://api.github.com/users/oroojlooy/repos", "events_url": "https://api.github.com/users/oroojlooy/events{/privacy}", "received_events_url": "https://api.github.com/users/oroojlooy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @oroojlooy, thanks for raising this issue! \r\n\r\nI believe there's an accumulation of gradients happening due to the multiple forward passes on the model. \r\n\r\nPutting the pipeline calls in the `torch.no_grad` context should help:\r\n\r\n```py\r\nwith torch.no_grad():\r\n outputs = self.pipeline(prompt, max_new_tokens=self.max_new_tokens, do_sample=self.do_sample, temperature=self.temperature, top_k=self.top_k, top_p=self.top_p)\r\n```\r\n\r\nAs a side note - if do_sample=False, then parameters like top_p won't have any effect on the generation. ", "Hi @amyeroberts \r\nThanks for your reply. I tried adding `torch.no_grad`, it does not help and the memory keeps increasing. I also tried running the model via TGI which is supposed to manage the process efficiently, still the memory increases with that; although, no CUDA memory error when I use that. The CUDA memory usage with TGI increases up to around 39Gb on all cores and stays there. \r\n\r\nCould not be this related to the structure of MixTral 8*7B, where it has eight expert models? i.e., I observe a memory jump when one of the expert models gets loaded? \r\n\r\nAlso thanks for the tip about `do_sample`!", "Hi @oroojlooy, \r\n\r\nYou'll need to provide some more details about how the memory increases, in particular for the non-TGI case: is there a sudden spike? Does it go up and down? After how many calls do you see this increase? \r\n\r\nNote: in the case of generation, you're making autoregressive calls to the model i.e. the model is being repeatedly called with an increasing input length. If options such as `use_cache` aren't selected, then you would expect a memory increase, even after the model has been loaded. \r\n\r\nI'd suggest not using the pipeline, and using the modeling code directly. This will give you more control and enable you to monitor better what is causing increases in memory. \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\n\r\nclass GenerationModel:\r\n def __init__(self, model_id, temperature=0.0, max_new_tokens=356, do_sample=False, top_k=50, top_p=0.7):\r\n self.model = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"auto\", torch_dtype=torch.float16)\r\n self.tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n self.temperature = temperature\r\n self.max_new_tokens = max_new_tokens\r\n self.do_sample = do_sample\r\n self.top_k = top_k\r\n self.top_p = top_p\r\n\r\n if do_sample and temperature == 0.0:\r\n raise ValueError(\r\n \"`temperature` (=0.0) has to be a strictly positive float, otherwise your next token scores will be \"\r\n \"invalid. 
If you're looking for greedy decoding strategies, set `do_sample=False`\")\r\n\r\n def __call__(self, raw_messages: str) -> str:\r\n \"\"\"\r\n An example of message is:\r\n messages = [{\"role\": \"user\", \"content\": \"Explain what a Mixture of Experts is in less than 100 words.\"}]\r\n \"\"\"\r\n try:\r\n messages = [{\"role\": \"user\", \"content\": raw_messages}]\r\n prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\r\n inputs = self.tokenizer(prompt, return_tensors=\"pt\").to(self.model.device)\r\n with torch.no_grad():\r\n outputs = self.model.generate(**inputs, max_length=len(prompt[0]) + self.max_new_tokens, use_cache=True)\r\n\r\n generated_text = self.tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\n return generated_text\r\n except Exception as e:\r\n print(e)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n model = GenerationModel(\r\n model_id=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\r\n temperature=0.0,\r\n max_new_tokens=356,\r\n do_sample=False,\r\n )\r\n messages = \"Explain what a Mixture of Experts is in less than 100 words.\"\r\n out = model(messages)\r\n print(out)\r\n```\r\n\r\ncc @gante The generation wizard who will know more about getting this to run well\r\n\r\n> Could not be this related to the structure of MixTral 8*7B, where it has eight expert models? i.e., I observe a memory jump when one of the expert models gets loaded?\r\n\r\nYou can run an experiment with a non-MoE model and see :) \r\n\r\n\r\n", "Hi @oroojlooy :wave:\r\n\r\nAs @amyeroberts wrote, the memory consumption in `transformers` is expected to grow throughout generation (i.e. the pipeline call in your script), as the input/output grows longer. This is because we don't pre-allocate memory, contrarily to TGI (that's why you see a fixed memory footprint after the model gets loaded). It is also independent of being a MoE model, it's how text generation works.\r\n\r\nTo confirm that there is no memory leak, you can try a simple test: call your pipeline repeatedly with the same input and with `do_sample=False`. You should not see memory increases as you repeat the calls.", "Thanks @amyeroberts and @gante for the replies. \r\n@amyeroberts I tried non-MoE model as you suggested and I. can confirm what you pointed out! \r\n@gante I actually have tried what you suggested and did not see any memory jump. \r\n\r\nBut, I am still confused about how the inference works within language models. I understanding was, for a fixed batch-size, the memory usage of the network should be fixed. Because, regardless of how big is an input+output, the max-len of input+output is always capped by the context size of the LLM, which a memory is allocated for that we load the model. So, for batch-size `b`, the memory utilization of the model is equal to `dtype_size*b*num_params` + `dtype_size*num_operations`. Should not this memory-size utilization be fixed through the inference time? \r\n\r\n\r\n", "@oroojlooy to understand why the memory grows (and what you can do about it), have a look at [this guide](https://huggingface.co/docs/transformers/llm_tutorial_optimization) -- especially section 2, which covers the self-attention layer :)", "@gante So, if I understand correctly, the matrix `QK^T` with different size of `N^2` stays at cache, and that cause the surge of memory usage? \r\n", "@oroojlooy There are sources of memory requirements increase as the sequence length (`N`) increases when caches are used:\r\n1. 
The materialization of the `QK^T` multiplication, which may grow as quickly as `N^2` (flash attention decreases this), as you wrote;\r\n2. The cached key and values, which grow linearly with `N`.\r\n\r\n`transformers` is eager, it only allocates when needed. TGI checks the maximum possible memory usage at startup time. Both will have roughly the same peak memory usage, for a given model/maximum input length/attention implementation." ]
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.4.17-2136.323.8.2.el7uek.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.0 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> The machine include 8*A100-40Gb, ### Who can help? @Narsil @SunMarc ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am creating an instance of the `MixTralModel` class and call it in a loop with the prompts that I have. ``` import transformers import torch class MixTralModel: def __init__(self, temperature=0.0, max_new_tokens=356, do_sample=False, top_k=50, top_p=0.7): self.temperature = temperature self.max_new_tokens = max_new_tokens self.do_sample = do_sample self.top_k = top_k self.top_p = top_p if do_sample and temperature == 0.0: raise ValueError( "`temperature` (=0.0) has to be a strictly positive float, otherwise your next token scores will be " "invalid. If you're looking for greedy decoding strategies, set `do_sample=False`") self.pipeline = transformers.pipeline( "text-generation", model="mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto", # device="cuda:0", # model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, model_kwargs={"torch_dtype": torch.float16}, ) def __call__(self, raw_messages: str) -> str: """ An example of message is: messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] """ try: messages = [{"role": "user", "content": raw_messages}] prompt = self.pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = self.pipeline(prompt, max_new_tokens=self.max_new_tokens, do_sample=self.do_sample, temperature=self.temperature, top_k=self.top_k, top_p=self.top_p) return outputs[0]["generated_text"] except Exception as e: print(e) if __name__ == "__main__": model = MixTralModel(temperature=0.0, max_new_tokens=356, do_sample=False, top_k=50, top_p=0.7) messages = "Explain what a Mixture of Experts is in less than 100 words." out = model(messages) print(out) ``` ### Expected behavior When I call the instance of the above class with my data, the GPU memory keeps increasing over time until I get a CUDA memory error. It seems there is a memory leakage or it maybe keeps the gradient (?) on the memory. The memory jumps over by 3Gb on each core each time. 
For example, below shows the gpu memory usage before and after a jump: ``` +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 18234MiB | | 1 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB | | 2 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB | | 3 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB | | 4 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB | | 5 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 20764MiB | | 6 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 15430MiB | | 7 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 414MiB | +---------------------------------------------------------------------------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 22230MiB | | 1 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB | | 2 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24762MiB | | 3 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB | | 4 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB | | 5 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 24760MiB | | 6 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 19426MiB | | 7 N/A N/A 93020 C .../miniconda3/envs/mixtral/bin/python 414MiB | +---------------------------------------------------------------------------------------+ ``` Note that this does not happen in each call of the model, and overall it gets killed after about 120 calls.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28707/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28706/comments
https://api.github.com/repos/huggingface/transformers/issues/28706/events
https://github.com/huggingface/transformers/pull/28706
2,100,275,754
PR_kwDOCUB6oc5lDy-H
28,706
Add AutoFeatureExtractor support to Wav2Vec2ProcessorWithLM
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28706). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,707
null
COLLABORATOR
null
# What does this PR do? Using an N-gram-based language model on top of Wav2Vec2-based models is an easy way to get a performance boost. At the moment, Wav2Vec2ProcessorWithLM is only compatible with Wav2Vec2FeatureExtractor. W2V2-Bert could also benefit from this boost, but needs its feature extractor to be compatible with Wav2Vec2ProcessorWithLM as well. The easiest way to do this is to use AutoFeatureExtractor instead of Wav2Vec2FeatureExtractor in the code, since the processor only changes the tokenizer behaviour. cc @sanchit-gandhi @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28706/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28706", "html_url": "https://github.com/huggingface/transformers/pull/28706", "diff_url": "https://github.com/huggingface/transformers/pull/28706.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28706.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28705/comments
https://api.github.com/repos/huggingface/transformers/issues/28705/events
https://github.com/huggingface/transformers/pull/28705
2,100,266,371
PR_kwDOCUB6oc5lDw6c
28,705
[Docs] Add resources
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28705). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? This PR adds some more resources regarding various models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28705/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28705", "html_url": "https://github.com/huggingface/transformers/pull/28705", "diff_url": "https://github.com/huggingface/transformers/pull/28705.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28705.patch", "merged_at": 1708352549000 }
https://api.github.com/repos/huggingface/transformers/issues/28704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28704/comments
https://api.github.com/repos/huggingface/transformers/issues/28704/events
https://github.com/huggingface/transformers/issues/28704
2,100,171,431
I_kwDOCUB6oc59LhKn
28,704
FalconForCausalLM does not support Flash Attention 2.0 yet
{ "login": "menouarazib", "id": 99955425, "node_id": "U_kgDOBfUy4Q", "avatar_url": "https://avatars.githubusercontent.com/u/99955425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/menouarazib", "html_url": "https://github.com/menouarazib", "followers_url": "https://api.github.com/users/menouarazib/followers", "following_url": "https://api.github.com/users/menouarazib/following{/other_user}", "gists_url": "https://api.github.com/users/menouarazib/gists{/gist_id}", "starred_url": "https://api.github.com/users/menouarazib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/menouarazib/subscriptions", "organizations_url": "https://api.github.com/users/menouarazib/orgs", "repos_url": "https://api.github.com/users/menouarazib/repos", "events_url": "https://api.github.com/users/menouarazib/events{/privacy}", "received_events_url": "https://api.github.com/users/menouarazib/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @menouarazib, thanks for raising this issue! \r\n\r\nThis is because the model being loading with this checkpoint, is from [code on the hub](https://huggingface.co/tiiuae/falcon-7b/blob/main/modeling_falcon.py) -- [mapping here](https://huggingface.co/tiiuae/falcon-7b/blob/898df1396f35e447d5fe44e0a3ccaaaa69f30d36/config.json#L14). \r\n\r\nWhereas the code in the library [does support FA2](https://github.com/huggingface/transformers/blob/f40b87de0ca234df61f76928956c4a2118c0b548/src/transformers/models/falcon/modeling_falcon.py#L946). \r\n\r\nThis divergence is happening because the model was originally on the hub and then was ported into this library. The good news is, [FA has already been implemented for this model](https://github.com/huggingface/transformers/blob/f40b87de0ca234df61f76928956c4a2118c0b548/src/transformers/models/falcon/modeling_falcon.py#L540) and so it should be pretty easy to add to the hub model. I'd suggest opening a discussion there and requesting the addition with the model repo owners. \r\n\r\ncc @Rocketknight1 who know more about the intended hub vs transformers code usage and the recommended way to map between the two. \r\n\r\nIf you want to use FA2 directly with Falcon, you can import the model from transformers directly instead of using the auto model: \r\n\r\n```py\r\nimport torch\r\nfrom transformers import AutoTokenizer, BitsAndBytesConfig, FalconForCausalLM\r\n\r\n# Hugging Face Falcon-7B model ID\r\nmodel_id = \"tiiuae/falcon-7b\"\r\n\r\n# BitsAndBytesConfig for 4-bit integers\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\n# Load the model and tokenizer\r\nmodel = FalconForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n device_map=\"auto\",\r\n attn_implementation=\"flash_attention_2\",\r\n torch_dtype=torch.bfloat16,\r\n quantization_config=bnb_config\r\n)\r\n```", "Thanks, @amyeroberts, for this clarification. It works when using `FalconForCausalLM from transformers `. \r\nI have created a discussion to add Flash Attention to the model hosted on the hub:\r\nhttps://huggingface.co/tiiuae/falcon-7b/discussions/98\r\n\r\nThanks" ]
1,706
1,706
1,706
NONE
null
I attempted to use Flash Attention with the Falcon-7B model, but encountered the following error: **ValueError: FalconForCausalLM does not support Flash Attention 2.0 yet.** This error was raised by the transformers/modeling_utils.py: ``` if not cls._supports_flash_attn_2: raise ValueError( f"{cls.__name__} does not support Flash Attention 2.0 yet. Please request to add support where" f" the model is hosted, on its model hub page: https://huggingface.co/{config._name_or_path}/discussions/new" " or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new" ) ``` I installed the Transformers library from the GitHub repository using the following command: `pip install git+https://github.com/huggingface/transformers` Here is the code I used: ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig # Hugging Face Falcon-7B model ID model_id = "tiiuae/falcon-7b" # BitsAndBytesConfig for 4-bit integers bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) # Load the model and tokenizer model = AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, device_map="auto", attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, quantization_config=bnb_config ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28703/comments
https://api.github.com/repos/huggingface/transformers/issues/28703/events
https://github.com/huggingface/transformers/pull/28703
2,100,153,707
PR_kwDOCUB6oc5lDYOa
28,703
[DO NOT MERGE] Hf quantizer refactor
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you see possibility of people mixing and matching different quantization params or even different quantizers? Use cases:\r\n- less compression in MLP vs attention\r\n- wild mix of methods to get that extra 1/1000s of perplexity\r\n\r\nIf supported, what should be the way? \r\n1) from_pretrained() calling the right quntizer as it loads each module?\r\n1) custom HFQuantizer subclass that applies methods as they fit the modules", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28703). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @poedator for your comments and ideas - I think the way forward would be to extend the `xxxQuantizer` by adding new arguments in the corresponding quantization config object - e.g. `quantize_mlp_only` . I feel mixing different quantization approaches (your second point) might be a bit too much of an edge case but contributors can always create a new quantizer for it `MixedQuantizer` with a `MixedQuantizationConfig`. ", "I will merge the commits of this PR directly in https://github.com/huggingface/transformers/pull/26610 to properly credit @poedator from his great work ! closing this - thanks @ArthurZucker for the review and offline discussions! " ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Built on top of https://github.com/huggingface/transformers/pull/26610 - this PR is just to check that I don't get any surprising diff similar to the one in https://github.com/poedator/transformers/pull/4
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28703/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28703", "html_url": "https://github.com/huggingface/transformers/pull/28703", "diff_url": "https://github.com/huggingface/transformers/pull/28703.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28703.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28702/comments
https://api.github.com/repos/huggingface/transformers/issues/28702/events
https://github.com/huggingface/transformers/issues/28702
2,100,102,165
I_kwDOCUB6oc59LQQV
28,702
Numpy version check failures
{ "login": "Iron-Bound", "id": 7122848, "node_id": "MDQ6VXNlcjcxMjI4NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/7122848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iron-Bound", "html_url": "https://github.com/Iron-Bound", "followers_url": "https://api.github.com/users/Iron-Bound/followers", "following_url": "https://api.github.com/users/Iron-Bound/following{/other_user}", "gists_url": "https://api.github.com/users/Iron-Bound/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iron-Bound/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iron-Bound/subscriptions", "organizations_url": "https://api.github.com/users/Iron-Bound/orgs", "repos_url": "https://api.github.com/users/Iron-Bound/repos", "events_url": "https://api.github.com/users/Iron-Bound/events{/privacy}", "received_events_url": "https://api.github.com/users/Iron-Bound/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Iron-Bound, thanks for raising this issue! \r\n\r\nWhat do you get as output if you import numpy and check the version in a python session? \r\n\r\n```py\r\nimport numpy as np\r\nprint(np.__version__)\r\n```", "let me try to find the container taht I had the issue on, side note I think this may be due to conda changing package paths" ]
1,706
1,707
1,707
NONE
null
### System Info latest docker container from `rocm/pytorch` ### Packages pip/conda numpy 1.26.3 transformers 4.37.1 peft 0.7.1 accelerate 0.26.1 ### Error Python 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module> from . import dependency_versions_check File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 57, in <module> require_version_core(deps[pkg]) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 117, in require_version_core return require_version(requirement, hint) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 111, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/versions.py", line 39, in _compare_versions raise ValueError( ValueError: Unable to compare versions for numpy>=1.17: need=1.17 found=None. This is unusual. Consider reinstalling numpy. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps: 1. $ pip install transformers 2. $ python3 3. import transformers Hacky fix: disabled the check in `transformers/utils/versions.py` to get past the error. ### Expected behavior Loads without issue; is this too simple an answer?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28702/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28701/comments
https://api.github.com/repos/huggingface/transformers/issues/28701/events
https://github.com/huggingface/transformers/issues/28701
2,100,041,061
I_kwDOCUB6oc59LBVl
28,701
HfArgumentParser does not match exact arguments
{ "login": "ahmedkooli", "id": 56259512, "node_id": "MDQ6VXNlcjU2MjU5NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/56259512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahmedkooli", "html_url": "https://github.com/ahmedkooli", "followers_url": "https://api.github.com/users/ahmedkooli/followers", "following_url": "https://api.github.com/users/ahmedkooli/following{/other_user}", "gists_url": "https://api.github.com/users/ahmedkooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahmedkooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahmedkooli/subscriptions", "organizations_url": "https://api.github.com/users/ahmedkooli/orgs", "repos_url": "https://api.github.com/users/ahmedkooli/repos", "events_url": "https://api.github.com/users/ahmedkooli/events{/privacy}", "received_events_url": "https://api.github.com/users/ahmedkooli/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @ahmedkooli, thanks for raising this issue! \r\n\r\nDigging into this, it seems this is coming from argparser itself, and is due to [this line](https://github.com/python/cpython/blob/d2da4e417ed9e6217e925e1df2820a2bf090efb3/Lib/argparse.py#L2325). As the option `text_column_names` starts with `text_column_na` and it's selected. \r\n\r\nFor example - just using argparser: \r\n\r\n```py\r\n>>> import argparse\r\n>>> parser = argparse.ArgumentParser()\r\n>>> parser.add_argument('--foo', default=None)\r\n>>> parser.parse_args(['--fo', 'a'])\r\nNamespace(foo='a')\r\n```", "Thanks for the answer :) Would you suggest trying to enforce it in the transformers library or should I take the discussion up in the python repo directly?", "@ahmedkooli As it's not something that we've encountered causing issues for our users, enforcing this isn't a feature we'd add at the moment. If it becomes something that is requested by many people (I'll measure as πŸ‘ on this comment) or is a common pain point then we can revisit. " ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: macOS-14.1.1-arm64-arm-64bit - Python version: 3.11.0 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (False) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction While running `examples/pytorch/text-classification/run_classification.py`, I noticed that the argument parser does not match the exact keywords, but rather a substring of the expected keyword. For example, running: ```bash python run_classification.py \ --model_name_or_path bert-base-uncased \ --dataset_name glue \ --dataset_config_name mrpc \ --shuffle_train_dataset \ --max_train_samples 20 \ --max_eval_samples 20 \ --metric_name accuracy \ --text_column_na "sentence1,sentence2" \ --do_train \ --do_eval \ --do_predict \ --max_seq_length 512 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --output_dir /tmp/glue_mrpc/ \ --overwrite_output_dir \ ``` works, whereas the argument `text_column_na` doesn't exist, and it replaces `text_column_names`. Is this meant to be? I think this can lead to unexpected behaviours. Thanks in advance. ### Expected behavior I expected an error due to a non existing keyword, such as: ```bash raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['--text_column_na', 'sentence1,sentence2'] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28701/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28700/comments
https://api.github.com/repos/huggingface/transformers/issues/28700/events
https://github.com/huggingface/transformers/pull/28700
2,100,030,420
PR_kwDOCUB6oc5lC9Ek
28,700
Fixed interpolation for ViT to BICUBIC as the original implementation…
{ "login": "nileshkokane01", "id": 8201108, "node_id": "MDQ6VXNlcjgyMDExMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nileshkokane01", "html_url": "https://github.com/nileshkokane01", "followers_url": "https://api.github.com/users/nileshkokane01/followers", "following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}", "gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}", "starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions", "organizations_url": "https://api.github.com/users/nileshkokane01/orgs", "repos_url": "https://api.github.com/users/nileshkokane01/repos", "events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}", "received_events_url": "https://api.github.com/users/nileshkokane01/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I do not know why the repo-consistency and setup_and_quality is failing. I think I broke compatibility for ViT I guess. @NielsRogge any clue? " ]
1,706
1,706
null
CONTRIBUTOR
null
# What does this PR do? This PR fixes the default interpolation mismatch between the hugging face library and the original implementation - as the original implementation uses BICUBIC by default but the hugging face default was BILINEAR <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28180 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28700/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28700", "html_url": "https://github.com/huggingface/transformers/pull/28700", "diff_url": "https://github.com/huggingface/transformers/pull/28700.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28700.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28699/comments
https://api.github.com/repos/huggingface/transformers/issues/28699/events
https://github.com/huggingface/transformers/pull/28699
2,099,992,832
PR_kwDOCUB6oc5lC02h
28,699
fix: corrected misleading log message in save_pretrained function
{ "login": "mturetskii", "id": 96064903, "node_id": "U_kgDOBbnVhw", "avatar_url": "https://avatars.githubusercontent.com/u/96064903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mturetskii", "html_url": "https://github.com/mturetskii", "followers_url": "https://api.github.com/users/mturetskii/followers", "following_url": "https://api.github.com/users/mturetskii/following{/other_user}", "gists_url": "https://api.github.com/users/mturetskii/gists{/gist_id}", "starred_url": "https://api.github.com/users/mturetskii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mturetskii/subscriptions", "organizations_url": "https://api.github.com/users/mturetskii/orgs", "repos_url": "https://api.github.com/users/mturetskii/repos", "events_url": "https://api.github.com/users/mturetskii/events{/privacy}", "received_events_url": "https://api.github.com/users/mturetskii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28699). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? This PR extends the fix implemented in a previous PR ([#28181](https://github.com/huggingface/transformers/pull/28181)), covering all cases where the saved file name might differ from the expected `WEIGHTS_NAME`. The earlier fix did not account for scenarios where the saved file could be named `ADAPTER_WEIGHTS_NAME` or `ADAPTER_SAFE_WEIGHTS_NAME`, leaving a potential for misleading log messages. This update ensures that all such cases are covered, and the log message accurately reflects the name of the file being saved in the `save_pretrained` function. Fixes # https://github.com/huggingface/transformers/issues/28076 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28699/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28699", "html_url": "https://github.com/huggingface/transformers/pull/28699", "diff_url": "https://github.com/huggingface/transformers/pull/28699.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28699.patch", "merged_at": 1706269974000 }
https://api.github.com/repos/huggingface/transformers/issues/28698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28698/comments
https://api.github.com/repos/huggingface/transformers/issues/28698/events
https://github.com/huggingface/transformers/issues/28698
2,099,901,347
I_kwDOCUB6oc59KfOj
28,698
WhitespaceSplit not working
{ "login": "pradeepdev-1995", "id": 41164884, "node_id": "MDQ6VXNlcjQxMTY0ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pradeepdev-1995", "html_url": "https://github.com/pradeepdev-1995", "followers_url": "https://api.github.com/users/pradeepdev-1995/followers", "following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}", "gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions", "organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs", "repos_url": "https://api.github.com/users/pradeepdev-1995/repos", "events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}", "received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @ArthurZucker - possibly an issue for the tokenizers library?", "I am not sure I have ever seen the `pretokenizer` argument πŸ˜… \r\nYou should do `tokenizer._tokenizer.pre_tokenizer = Whitespace()`", "@ArthurZucker \r\ncan you show me the full code, please?\r\n\r\nI am getting this error\r\n```\r\nModuleNotFoundError: No module named 'tokenizer._tokenizer'\r\n```", "```diff\r\n from transformers import AutoTokenizer\r\n tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.2\", trust_remote_code=True,\r\n use_fast=False)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = \"right\"\r\n sentence = \"Transformers tokenization testing\"\r\n tokenized_sentence = tokenizer.tokenize(sentence)\r\n print(\"without WhitespaceSplit\")\r\n print(tokenized_sentence)\r\n \r\n from tokenizers.pre_tokenizers import WhitespaceSplit\r\n- tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.2\", trust_remote_code=True,pretokenizer=WhitespaceSplit(), use_fast=False)\r\n+ tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.2\", trust_remote_code=True,use_fast=True)\r\n+ tokenizer._tokenizer.pre_tokenizer = WhitespaceSplit()\r\n tokenizer.pad_token = tokenizer.eos_token\r\n tokenizer.padding_side = \"right\"\r\n tokenized_sentence = tokenizer.tokenize(sentence)\r\n print(\"with WhitespaceSplit \")\r\n print(tokenized_sentence)\r\n``` \r\nπŸ€— ", "@ArthurZucker \r\n\r\n```\r\nAttributeError: 'LlamaTokenizer' object has no attribute '_tokenizer'\r\n```\r\nSince I am using mistral model which is Llama architecture based i think.\r\nany way to solve this?", "If you are using a slow tokenizer this cannot work. Updated the script to use fast", "@ArthurZucker \r\nNot working\r\n![Screenshot from 2024-01-30 16-13-39](https://github.com/huggingface/transformers/assets/41164884/e043c52d-fd46-4195-b7e6-f43dd8992b71)\r\n", "That is because you still have a normalizer:\r\n```python \r\nfrom tokenizers import normalizers\r\ntokenizer._tokenizer.normalizer = normalizers.Sequence([])\r\n```\r\n", "Not worked as expected\r\n@ArthurZucker \r\n![Screenshot from 2024-01-30 19-09-53](https://github.com/huggingface/transformers/assets/41164884/31266311-06e3-4318-a36f-779211c93375)\r\n", "I am not sure I understand, if you follow the logic of the tokenization [see the doc here](https://huggingface.co/docs/tokenizers/pipeline#pretokenization) the pretokenization will split, but the final tokens are not given by the pre tokenized text:\r\n\r\n> The role of the model is to split your β€œwords” into tokens, using the rules it has learned. It’s also responsible for mapping those tokens to their corresponding IDs in the vocabulary of the model.\r\n\r\nthus `Transformers` is split into it's known tokens", "@ArthurZucker \r\n\r\nSo splitting the sentence **\"Transformers tokenization testing\"** into **[\"Transformers\", \"tokenization\", \"testing\"]** is not possible using **mistralai/Mistral-7B-Instruct-v0.2** tokenizer?", "You can just use `sentence.split(\" \")` but no, the token `Transformers` is not part of the vocab so it's not a token " ]
1,706
1,706
null
NONE
null
### System Info torch==2.0.1 transformers==4.37.1 tokenizers==0.15.1 Python 3.8.16 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to split a sentence in a subword manner and in a word-by-word manner using WhitespaceSplit ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", trust_remote_code=True, use_fast=False) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "right" sentence = "Transformers tokenization testing" tokenized_sentence = tokenizer.tokenize(sentence) print("without WhitespaceSplit") print(tokenized_sentence) from tokenizers.pre_tokenizers import WhitespaceSplit tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", trust_remote_code=True, pretokenizer=WhitespaceSplit(), use_fast=False) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "right" tokenized_sentence = tokenizer.tokenize(sentence) print("with WhitespaceSplit ") print(tokenized_sentence) ``` in both cases I am getting the same split data as below ``` without WhitespaceSplit ['▁Trans', 'form', 'ers', '▁token', 'ization', '▁testing'] with WhitespaceSplit ['▁Trans', 'form', 'ers', '▁token', 'ization', '▁testing'] ``` ### Expected behavior With WhitespaceSplit, the sentence should be split word by word, such as ``` ["Transformers", "tokenization", "testing"] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28698/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28697/comments
https://api.github.com/repos/huggingface/transformers/issues/28697/events
https://github.com/huggingface/transformers/issues/28697
2,099,883,324
I_kwDOCUB6oc59Ka08
28,697
Bug when the processor uses a model cached locally
{ "login": "wwx007121", "id": 13541369, "node_id": "MDQ6VXNlcjEzNTQxMzY5", "avatar_url": "https://avatars.githubusercontent.com/u/13541369?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwx007121", "html_url": "https://github.com/wwx007121", "followers_url": "https://api.github.com/users/wwx007121/followers", "following_url": "https://api.github.com/users/wwx007121/following{/other_user}", "gists_url": "https://api.github.com/users/wwx007121/gists{/gist_id}", "starred_url": "https://api.github.com/users/wwx007121/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wwx007121/subscriptions", "organizations_url": "https://api.github.com/users/wwx007121/orgs", "repos_url": "https://api.github.com/users/wwx007121/repos", "events_url": "https://api.github.com/users/wwx007121/events{/privacy}", "received_events_url": "https://api.github.com/users/wwx007121/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @wwx007121, thanks for raising an issue! \r\n\r\nCould you give some more details about exactly the bug that is occurring i.e. the error being encountered (including full traceback) and a minimal code snippet to reproduce the issue? \r\n\r\ncc @ydshieh ", "> Hi @wwx007121, thanks for raising an issue!\r\n> \r\n> Could you give some more details about exactly the bug that is occurring i.e. the error being encountered (including full traceback) and a minimal code snippet to reproduce the issue?\r\n> \r\n> cc @ydshieh\r\n\r\n```\r\n model_id = \"openai/whisper-large-v3\"\r\n pretrain_model = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, cache_dir=model_cache\r\n )\r\n pretrain_model.to(device)\r\n print(\"load model done\")\r\n\r\n processor = AutoProcessor.from_pretrained(model_id, cache_dir=model_cache)\r\n```\r\nmodel in cache was downloaded in others process which shared same docker environments.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"script/get_whisper_result.py\", line 28, in <module>\r\n processor = AutoProcessor.from_pretrained(model_id, cache_dir=model_cache)\r\n File \"/opt/miniconda/lib/python3.8/site-packages/transformers/models/auto/processing_auto.py\", line 313, in from_pretrained\r\n return processor_class.from_pretrained(\r\n File \"/opt/miniconda/lib/python3.8/site-packages/transformers/processing_utils.py\", line 464, in from_pretrained\r\n processor_dict, kwargs = cls.get_processor_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/opt/miniconda/lib/python3.8/site-packages/transformers/processing_utils.py\", line 308, in get_processor_dict\r\n resolved_processor_file = cached_file(\r\n File \"/opt/miniconda/lib/python3.8/site-packages/transformers/utils/hub.py\", line 425, in cached_file\r\n raise EnvironmentError(\r\nOSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like distil-whisper/distil-large-v2 is not the path to a directory containing a file named processor_config.json.\r\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.\r\n```", "Hi @wwx007121 , it looks like I should modify the condition indeed, thank you for reporting this.\r\n\r\nHowever, to make sure, I would really like to be able to reproduce the issue. So far, I am doing the following, which should be the situation you described, but this code snippet works without any error.\r\n\r\nCould you describe in more detail how to reproduce it, please?\r\n\r\nYou mentioned that `cache was downloaded in others process`. When running the provided code example, **is the connection cut/disabled?**\r\n\r\n```python\r\nfrom transformers import AutoProcessor, AutoModelForSpeechSeq2Seq\r\n\r\nmodel_id = \"openai/whisper-large-v3\"\r\n\r\nmodel_cache = \"my_cache\"\r\n\r\npretrain_model = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, use_safetensors=True, cache_dir=model_cache\r\n)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id, cache_dir=model_cache)\r\n\r\n```", "Well, I tried to disable the internet connection and I can reproduce the issue. I will open a PR to fix it, thanks again for reporting", "@wwx007121\r\n\r\nThe fix is merged into `main`. Thanks again!" ]
1,706
1,706
1,706
NONE
null
### System Info version: transformers>=4.37.0 The bug occurs in https://github.com/huggingface/transformers/blob/main/src/transformers/processing_utils.py, line 466. I understand the purpose of this code, but it conflicts with the code in 'utils/hub.py' line 426, because the error message details may have been changed. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction As a simple workaround I changed my local code from `if "does not appear to have a file named processor_config.json." in str(e):` to `if "processor_config.json." in str(e):`. Otherwise, downgrading to version 4.36.2 also works. ### Expected behavior I think there may be a better solution.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28697/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28697/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28696/comments
https://api.github.com/repos/huggingface/transformers/issues/28696/events
https://github.com/huggingface/transformers/pull/28696
2,099,856,257
PR_kwDOCUB6oc5lCXll
28,696
Add French translation: french README.md
{ "login": "ThibaultLengagne", "id": 11950126, "node_id": "MDQ6VXNlcjExOTUwMTI2", "avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThibaultLengagne", "html_url": "https://github.com/ThibaultLengagne", "followers_url": "https://api.github.com/users/ThibaultLengagne/followers", "following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}", "gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions", "organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs", "repos_url": "https://api.github.com/users/ThibaultLengagne/repos", "events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}", "received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello,\r\n\r\nThanks @stevhliu and @Sarapuce for the review. I have fixed every remark. \r\n\r\nSeems good to go :+1: ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28696). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Add the French version of README.md ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28696/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28696", "html_url": "https://github.com/huggingface/transformers/pull/28696", "diff_url": "https://github.com/huggingface/transformers/pull/28696.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28696.patch", "merged_at": 1706551669000 }
https://api.github.com/repos/huggingface/transformers/issues/28695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28695/comments
https://api.github.com/repos/huggingface/transformers/issues/28695/events
https://github.com/huggingface/transformers/pull/28695
2,099,803,282
PR_kwDOCUB6oc5lCMJK
28,695
[`chore`] Add missing space in warning
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
MEMBER
null
# What does this PR do? Adds a missing space in a warning message. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? @amyeroberts - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28695/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28695", "html_url": "https://github.com/huggingface/transformers/pull/28695", "diff_url": "https://github.com/huggingface/transformers/pull/28695.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28695.patch", "merged_at": 1706175293000 }
https://api.github.com/repos/huggingface/transformers/issues/28694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28694/comments
https://api.github.com/repos/huggingface/transformers/issues/28694/events
https://github.com/huggingface/transformers/pull/28694
2,099,760,344
PR_kwDOCUB6oc5lCCwB
28,694
Update question_answering.md
{ "login": "yusyel", "id": 25446622, "node_id": "MDQ6VXNlcjI1NDQ2NjIy", "avatar_url": "https://avatars.githubusercontent.com/u/25446622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yusyel", "html_url": "https://github.com/yusyel", "followers_url": "https://api.github.com/users/yusyel/followers", "following_url": "https://api.github.com/users/yusyel/following{/other_user}", "gists_url": "https://api.github.com/users/yusyel/gists{/gist_id}", "starred_url": "https://api.github.com/users/yusyel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yusyel/subscriptions", "organizations_url": "https://api.github.com/users/yusyel/orgs", "repos_url": "https://api.github.com/users/yusyel/repos", "events_url": "https://api.github.com/users/yusyel/events{/privacy}", "received_events_url": "https://api.github.com/users/yusyel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28694). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? fix typo: from: "model = TFAutoModelForQuestionAnswering("distilbert-base-uncased")" to: model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased") <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28694/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28694", "html_url": "https://github.com/huggingface/transformers/pull/28694", "diff_url": "https://github.com/huggingface/transformers/pull/28694.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28694.patch", "merged_at": 1706191598000 }
https://api.github.com/repos/huggingface/transformers/issues/28693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28693/comments
https://api.github.com/repos/huggingface/transformers/issues/28693/events
https://github.com/huggingface/transformers/pull/28693
2,099,623,383
PR_kwDOCUB6oc5lBkyx
28,693
Added code to match the default interpolation for convnext
{ "login": "nileshkokane01", "id": 8201108, "node_id": "MDQ6VXNlcjgyMDExMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8201108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nileshkokane01", "html_url": "https://github.com/nileshkokane01", "followers_url": "https://api.github.com/users/nileshkokane01/followers", "following_url": "https://api.github.com/users/nileshkokane01/following{/other_user}", "gists_url": "https://api.github.com/users/nileshkokane01/gists{/gist_id}", "starred_url": "https://api.github.com/users/nileshkokane01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nileshkokane01/subscriptions", "organizations_url": "https://api.github.com/users/nileshkokane01/orgs", "repos_url": "https://api.github.com/users/nileshkokane01/repos", "events_url": "https://api.github.com/users/nileshkokane01/events{/privacy}", "received_events_url": "https://api.github.com/users/nileshkokane01/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,706
null
CONTRIBUTOR
null
# What does this PR do? This PR fixes the default interpolation type for Convnext to bicubic based on the original implementation . Also it adds assert in the image_processing_convnext_pytorch.py <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28180 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28693/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28693", "html_url": "https://github.com/huggingface/transformers/pull/28693", "diff_url": "https://github.com/huggingface/transformers/pull/28693.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28693.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28692/comments
https://api.github.com/repos/huggingface/transformers/issues/28692/events
https://github.com/huggingface/transformers/pull/28692
2,099,512,869
PR_kwDOCUB6oc5lBNct
28,692
Verify if output has logits or prediction logits in fill-mask pipeline
{ "login": "pedrogengo", "id": 27240528, "node_id": "MDQ6VXNlcjI3MjQwNTI4", "avatar_url": "https://avatars.githubusercontent.com/u/27240528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pedrogengo", "html_url": "https://github.com/pedrogengo", "followers_url": "https://api.github.com/users/pedrogengo/followers", "following_url": "https://api.github.com/users/pedrogengo/following{/other_user}", "gists_url": "https://api.github.com/users/pedrogengo/gists{/gist_id}", "starred_url": "https://api.github.com/users/pedrogengo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pedrogengo/subscriptions", "organizations_url": "https://api.github.com/users/pedrogengo/orgs", "repos_url": "https://api.github.com/users/pedrogengo/repos", "events_url": "https://api.github.com/users/pedrogengo/events{/privacy}", "received_events_url": "https://api.github.com/users/pedrogengo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Maybe we should have a different error message for these models?" ]
1,706
1,706
null
CONTRIBUTOR
null
# What does this PR do? It checks if the output has the key "logits" or "prediction_logits" to avoid breaking the fill-mask pipeline for some models like BertForPreTraining that returns: ``` return BertForPreTrainingOutput( loss=total_loss, prediction_logits=prediction_scores, seq_relationship_logits=seq_relationship_score, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ``` Error without this change: <img width="1122" alt="image" src="https://github.com/huggingface/transformers/assets/27240528/a7491281-db09-4b23-a795-5f102ec9d911"> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28692", "html_url": "https://github.com/huggingface/transformers/pull/28692", "diff_url": "https://github.com/huggingface/transformers/pull/28692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28692.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28691/comments
https://api.github.com/repos/huggingface/transformers/issues/28691/events
https://github.com/huggingface/transformers/issues/28691
2,099,511,128
I_kwDOCUB6oc59I_9Y
28,691
error with DataCollatorForLanguageModeling
{ "login": "minmie", "id": 40080081, "node_id": "MDQ6VXNlcjQwMDgwMDgx", "avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minmie", "html_url": "https://github.com/minmie", "followers_url": "https://api.github.com/users/minmie/followers", "following_url": "https://api.github.com/users/minmie/following{/other_user}", "gists_url": "https://api.github.com/users/minmie/gists{/gist_id}", "starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minmie/subscriptions", "organizations_url": "https://api.github.com/users/minmie/orgs", "repos_url": "https://api.github.com/users/minmie/repos", "events_url": "https://api.github.com/users/minmie/events{/privacy}", "received_events_url": "https://api.github.com/users/minmie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey πŸ€— thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,706
1,706
1,706
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-3.10.0-1160.99.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I want to fine-tune gpt2 for text summarization, but an error occurs when I create the batch input with DataCollatorForLanguageModeling. My code is as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForTokenClassification, \ DataCollatorWithPadding, DataCollatorForLanguageModeling, DataCollatorForSeq2Seq model_path = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_path) # model = AutoModelForCausalLM.from_pretrained(model_path) tokenizer.pad_token = tokenizer.eos_token content = [ "x x", "x x x" ] summary = [ "y y", "y y y" ] batch = [] for c, s in zip(content, summary): sample_input_ids = tokenizer.encode('<content>' + c + '<summary>') label_input_ids = tokenizer.encode(s) + [tokenizer.eos_token_id] input_ids = sample_input_ids + label_input_ids labels = [-100] * len(sample_input_ids) + label_input_ids batch.append({"input_ids": input_ids, "labels": labels}) data_collator1 = DataCollatorForTokenClassification(tokenizer) # data_collator2 = DataCollatorWithPadding(tokenizer) data_collator3 = DataCollatorForLanguageModeling(tokenizer, mlm=False) data_collator4 = DataCollatorForSeq2Seq(tokenizer) # i used this collator to create batch input and an error occured. print(data_collator3(batch)) """ ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected). """ # I aslo try this two collator and there's an extra attention_mask returned,but gpt2 dose't need it. 
# print(data_collator1(batch)) # print(data_collator4(batch)) """ {'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331, 50256, 50256, 50256], [ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88, 331, 331, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331, 50256, -100, -100], [ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88, 331, 331, 50256]])} {'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331, 50256, 50256, 50256], [ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88, 331, 331, 50256]]), 'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331, 50256, -100, -100], [ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88, 331, 331, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} """ print(1) ``` ### Expected behavior Considering I am fine-tuning gpt2 for text summarization, I expect to get a batch input like this: ``` { 'input_ids': tensor([[ 27, 11299, 29, 87, 2124, 27, 49736, 29, 88, 331, 50256, 50256, 50256], [ 27, 11299, 29, 87, 2124, 2124, 27, 49736, 29, 88, 331, 331, 50256]]), 'labels': tensor([[ -100, -100, -100, -100, -100, -100, -100, -100, 88, 331,50256, -100, -100], [ -100, -100, -100, -100, -100, -100, -100, -100, -100, 88,331, 331, 50256]]) } ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28690/comments
https://api.github.com/repos/huggingface/transformers/issues/28690/events
https://github.com/huggingface/transformers/issues/28690
2,099,443,948
I_kwDOCUB6oc59Ivjs
28,690
Running into AttributeErrorAttributeError from 4.37.0
{ "login": "ningziwen", "id": 8747309, "node_id": "MDQ6VXNlcjg3NDczMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/8747309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ningziwen", "html_url": "https://github.com/ningziwen", "followers_url": "https://api.github.com/users/ningziwen/followers", "following_url": "https://api.github.com/users/ningziwen/following{/other_user}", "gists_url": "https://api.github.com/users/ningziwen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ningziwen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ningziwen/subscriptions", "organizations_url": "https://api.github.com/users/ningziwen/orgs", "repos_url": "https://api.github.com/users/ningziwen/repos", "events_url": "https://api.github.com/users/ningziwen/events{/privacy}", "received_events_url": "https://api.github.com/users/ningziwen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Seems related to this recent change. https://github.com/huggingface/transformers/pull/28447", "Found the PT commit adding this interface. https://github.com/pytorch/pytorch/commit/df14650f0b14b80db132b0c1797dc595fbee1054\r\n\r\nThis is only added in PT 2.0. PT 1.13.1 does not have it. https://github.com/pytorch/pytorch/blob/49444c3e546bf240bed24a101e747422d1f8a0ee/torch/nn/functional.py#L4808\r\n\r\nEven if PT 1.10 is not supported from 4.37.0, PT 1.13.1 is still supported right?", "Hi @ningziwen thank you, indeed I did not add a guard on torch>=2.0. Will fix, thank you.", "Hi @ningziwen, this is fixed on main with https://github.com/huggingface/transformers/pull/28774. Thank you!" ]
1,706
1,706
1,706
NONE
null
### System Info Only happens from 4.37.0 ``` - `transformers` version: 4.37.0 - Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` Switching back to 4.36.2 and it works well. ``` - `transformers` version: 4.36.2 - Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @fxmarty, @michaelbenayoun, @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run `torchrun` against one the simple file. ``` if __name__ == '__main__': from transformers.utils.fx import HFTracer ``` Running into ``` Traceback (most recent call last): Traceback (most recent call last): File "/test/bin/pytorch_tests/testCustom", line 2, in <module> File "/test/bin/pytorch_tests/testCustom", line 2, in <module> from transformers.utils.fx import HFTracerfrom transformers.utils.fx import HFTracer File "/usr/local/lib/python3.10/site-packages/transformers/utils/fx.py", line 611, in <module> File "/usr/local/lib/python3.10/site-packages/transformers/utils/fx.py", line 611, in <module> torch.nn.functional.scaled_dot_product_attention: torch_nn_functional_scaled_dot_product_attention,torch.nn.functional.scaled_dot_product_attention: torch_nn_functional_scaled_dot_product_attention, AttributeErrorAttributeError: : module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'. Did you mean: '. Did you mean: '_scaled_dot_product_attention_scaled_dot_product_attention'?'? 
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31) of binary: /usr/local/bin/python3.10 Traceback (most recent call last): File "/usr/local/bin/torchrun", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main run(args) File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ /test/bin/pytorch_tests/testCustom FAILED ``` ### Expected behavior Should succeed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28690/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28689/comments
https://api.github.com/repos/huggingface/transformers/issues/28689/events
https://github.com/huggingface/transformers/issues/28689
2,099,408,215
I_kwDOCUB6oc59Im1X
28,689
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
{ "login": "tamanna-mostafa", "id": 156403336, "node_id": "U_kgDOCVKGiA", "avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamanna-mostafa", "html_url": "https://github.com/tamanna-mostafa", "followers_url": "https://api.github.com/users/tamanna-mostafa/followers", "following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}", "gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions", "organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs", "repos_url": "https://api.github.com/users/tamanna-mostafa/repos", "events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}", "received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @tamanna-mostafa, thanks for raising an issue! \r\n\r\nBased on the error message, it looks as though the weights for the peft file are corrupted or possibly empty. Outside of the script, are you able to run the following: \r\n\r\n```py\r\nfrom peft import PeftModel \r\n\r\nmodel = PeftModel.from_pretrained(\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k\")\r\n```\r\n\r\nIf you look at the size of the files and shards for `/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k`, what do you see? \r\n\r\nNote: your script won't run because `get_args` doesn't return anything. Also, you don't need to pass sharing or safe serialization paramters when saving the tokenizer. ", "@amyeroberts \r\nThanks for your comments. When I run the code you suggested, I get this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/files_t/debig_amy.py\", line 4, in <module>\r\n model = PeftModel.from_pretrained(model_id)\r\nTypeError: PeftModel.from_pretrained() missing 1 required positional argument: 'model_id'\r\n```\r\n>If you look at the size of the files and shards for /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k, what do you see?\r\n\r\n```\r\n(ml_v4) ubuntu@ip-172-31-8-218:/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k$ ls -lh *\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 18:13 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 18:13 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 29 19:19 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 18:13 added_tokens.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 18:14 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 18:13 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 18:13 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 18:13 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 18:13 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 18:13 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 18:14 zero_to_fp32.py\r\n\r\ncheckpoint-100:\r\ntotal 2.4M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 23 21:32 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 23 21:32 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 23 21:32 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 23 21:32 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 23 21:33 global_step100\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 23 21:34 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 23 21:34 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 23 21:34 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 23 21:32 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 23 21:32 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 23 21:32 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 23 21:32 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.4K Jan 23 21:34 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 23 21:32 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 23 21:34 zero_to_fp32.py\r\n\r\ncheckpoint-200:\r\ntotal 2.4M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 00:47 
README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 00:47 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 00:47 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 00:47 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 00:48 global_step200\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 00:49 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 00:49 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 00:49 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 00:47 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 00:47 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 00:47 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 00:47 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 7.9K Jan 24 00:49 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 00:47 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 00:49 zero_to_fp32.py\r\n\r\ncheckpoint-300:\r\ntotal 2.4M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 04:02 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 04:02 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 04:02 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 04:02 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 04:03 global_step300\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 04:04 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 04:04 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 04:04 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 04:02 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 04:02 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 04:02 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 04:02 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 12K Jan 24 04:04 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 04:02 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 04:04 zero_to_fp32.py\r\n\r\ncheckpoint-400:\r\ntotal 2.4M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 07:17 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 07:17 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 07:17 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 07:17 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 07:18 global_step400\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 07:19 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu 
ubuntu 16K Jan 24 07:19 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 07:19 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 07:19 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 07:17 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 07:17 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 07:17 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 07:17 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 15K Jan 24 07:19 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 07:17 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 07:19 zero_to_fp32.py\r\n\r\ncheckpoint-500:\r\ntotal 2.5M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 10:32 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 10:32 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 10:32 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 10:32 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 10:33 global_step500\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 10:34 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 10:34 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 10:34 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 10:32 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 10:32 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 10:32 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 10:32 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 19K Jan 24 10:34 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 10:32 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 10:34 zero_to_fp32.py\r\n\r\ncheckpoint-600:\r\ntotal 2.5M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 13:47 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 13:47 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 13:47 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 13:47 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 13:48 global_step600\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 13:50 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 13:50 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 13:50 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 13:47 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 13:47 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 13:47 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 13:47 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 22K Jan 24 13:50 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 13:47 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 
13:50 zero_to_fp32.py\r\n\r\ncheckpoint-700:\r\ntotal 2.5M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 17:03 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 17:03 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 48 Jan 24 17:03 adapter_model.safetensors\r\n-rw-rw-r-- 1 ubuntu ubuntu 133 Jan 24 17:03 added_tokens.json\r\ndrwxrwxr-x 2 ubuntu ubuntu 6.0K Jan 24 17:04 global_step700\r\n-rw-rw-r-- 1 ubuntu ubuntu 14 Jan 24 17:05 latest\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_0.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_1.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_2.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_3.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_4.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_5.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_6.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 16K Jan 24 17:05 rng_state_7.pth\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.1K Jan 24 17:05 scheduler.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 829 Jan 24 17:03 special_tokens_map.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.8M Jan 24 17:03 tokenizer.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 482K Jan 24 17:03 tokenizer.model\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.9K Jan 24 17:03 tokenizer_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 26K Jan 24 17:05 trainer_state.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.7K Jan 24 17:03 training_args.bin\r\n-rwxrw-r-- 1 ubuntu ubuntu 24K Jan 24 17:05 zero_to_fp32.py\r\n\r\nfinal_checkpoint:\r\ntotal 20M\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.1K Jan 24 18:14 README.md\r\n-rw-rw-r-- 1 ubuntu ubuntu 676 Jan 24 18:14 adapter_config.json\r\n-rw-rw-r-- 1 ubuntu ubuntu 20M Jan 24 18:14 adapter_model.safetensors\r\n\r\nglobal_step736:\r\ntotal 14G\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 31M Jan 24 18:14 bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_0_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_1_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_2_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_3_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_4_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_5_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_6_mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 1.7G Jan 24 18:14 zero_pp_rank_7_mp_rank_00_model_states.pt\r\n```\r\nAlso, if I take `LLAMA2 7b ` as the base model, then the `merge_peft_adaptors_gpu.py` script works fine if I put `final_checkpoint` as the PEFT model path. Hence, in this case too (mistral 7b), I tried running the `merge_peft_adaptors_gpu.py` script with `final_checkpoint` as the PEFT model path. 
Then I get this error:\r\n\r\n```\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:04<00:00, 1.53s/it]\r\nLoading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k/final_checkpoint\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py\", line 51, in <module>\r\n main()\r\n File \"/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py\", line 38, in main\r\n model = PeftModel.from_pretrained(base_model, args.peft_model_path)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 354, in from_pretrained\r\n model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 698, in load_adapter\r\n load_result = set_peft_model_state_dict(self, adapters_weights, adapter_name=adapter_name)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py\", line 241, in set_peft_model_state_dict\r\n load_result = model.load_state_dict(peft_model_state_dict, strict=False)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 2152, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\nRuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:\r\n size mismatch for base_model.model.model.layers.0.mlp.gate_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 8]).\r\n size mismatch for base_model.model.model.layers.0.mlp.up_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 8]).\r\n. . .\r\n```\r\n\r\nThis issue is the same as reported in this ticket: https://github.com/huggingface/transformers/issues/28688 . Hence, you can close this issue if you don't want to keep duplicate tickets open.\r\nI'm trying to understand what's wrong with using fine tuned Mistral 7b as the base model. " ]
1,706
1,706
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. I fined tuned mistral 7b model with preference data 2. I ran DPO on the SFT model. 3. To merge my lora adaptors, I ran tthe following command: `python merge_peft_adaptors_gpu.py --base_model_name_or_path <> --peft_model_path <> --output_dir <> --safe_serialization` This is the `merge_peft_adaptors_gpu.py` script: ``` from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel import torch import os import argparse def get_args(): parser = argparse.ArgumentParser() parser.add_argument("--base_model_name_or_path", type=str) parser.add_argument("--peft_model_path", type=str) parser.add_argument("--output_dir", type=str) parser.add_argument("--device", type=str, default="auto") parser.add_argument("--safe_serialization", action="store_true") return parser.parse_args() #### def main(): args = get_args() if args.device == 'auto': device_arg = { 'device_map': 'auto' } else: device_arg = { 'device_map': { "": args.device} } print(f"Loading base model: {args.base_model_name_or_path}") base_model = AutoModelForCausalLM.from_pretrained( args.base_model_name_or_path, return_dict=True, torch_dtype=torch.float16, trust_remote_code=True, **device_arg ) #device = torch.device('cpu') #base_model.to(device) print(f"Loading PEFT: {args.peft_model_path}") model = PeftModel.from_pretrained(base_model, args.peft_model_path) print("Peft Model : ", model.device) print(f"Running merge_and_unload") model = model.merge_and_unload() tokenizer = AutoTokenizer.from_pretrained(args.base_model_name_or_path) model.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization) tokenizer.save_pretrained(f"{args.output_dir}",max_shard_size='9GB',safe_serialization=args.safe_serialization) print(f"Model saved to {args.output_dir}") #### if __name__ == "__main__" : main() ``` 4. 
I get the below error: ``` Loading base model: /mnt/efs/data/tammosta/files_t/output_sft_32k Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:04<00:00, 1.40s/it] Loading PEFT: /mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k Traceback (most recent call last): File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 51, in <module> main() File "/mnt/efs/data/tammosta/scripts_hb/merge_peft_adaptors_gpu.py", line 38, in main model = PeftModel.from_pretrained(base_model, args.peft_model_path) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 352, in from_pretrained model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py", line 689, in load_adapter adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 270, in load_peft_weights adapters_weights = safe_load_file(filename, device=device) File "/opt/conda/envs/ml_v4/lib/python3.10/site-packages/safetensors/torch.py", line 308, in load_file with safe_open(filename, framework="pt", device=device) as f: safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization ``` Any idea how to solve this? ### Expected behavior base model and peft model will be successfully merged.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28689/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/28688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28688/comments
https://api.github.com/repos/huggingface/transformers/issues/28688/events
https://github.com/huggingface/transformers/issues/28688
2,099,400,046
I_kwDOCUB6oc59Ik1u
28,688
OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json.
{ "login": "tamanna-mostafa", "id": 156403336, "node_id": "U_kgDOCVKGiA", "avatar_url": "https://avatars.githubusercontent.com/u/156403336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamanna-mostafa", "html_url": "https://github.com/tamanna-mostafa", "followers_url": "https://api.github.com/users/tamanna-mostafa/followers", "following_url": "https://api.github.com/users/tamanna-mostafa/following{/other_user}", "gists_url": "https://api.github.com/users/tamanna-mostafa/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamanna-mostafa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamanna-mostafa/subscriptions", "organizations_url": "https://api.github.com/users/tamanna-mostafa/orgs", "repos_url": "https://api.github.com/users/tamanna-mostafa/repos", "events_url": "https://api.github.com/users/tamanna-mostafa/events{/privacy}", "received_events_url": "https://api.github.com/users/tamanna-mostafa/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @tamanna-mostafa, thanks for raising this issue! \r\n\r\nCould you list the files saved under `/data/DPO_output_mistral_32k`? ", "@amyeroberts \r\nHi, thanks for your comment. Below is what I see when I run `ls `in the folder, `DPO_output_mistral_32k` :\r\n```\r\nubuntu@ip-172-31-8-218:/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k$ ls \r\nREADME.md adapter_model.safetensors checkpoint-100 checkpoint-300 checkpoint-500 checkpoint-700 global_step736 special_tokens_map.json tokenizer.model training_args.bin\r\nadapter_config.json added_tokens.json checkpoint-200 checkpoint-400 checkpoint-600 final_checkpoint latest tokenizer.json tokenizer_config.json zero_to_fp32.py\r\n```", "Could you share how you're loading the model? If you're using adapters, then I'd expect this pattern: \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel_id = \"{MISTRAL_CHECKPOINT}\"\r\ndpo_model_id = \"/data/DPO_output_mistral_32k\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id)\r\nmodel.load_adapter(dpo_model_id)\r\n```", "You mean how I'm loading the DPO model on docker? Here are the steps:\r\n```\r\nmodel=/data/DPO_output_mistral_32k\r\nvolume=/mnt/efs/data/tammosta/files_t:/data\r\nnum_shard=8\r\ndocker run --gpus all --shm-size 1g -p 172.31.8.218:80:80 -v $volume ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model --num-shard $num_shard --max-input-length 4095 --max-total-tokens 12000\r\n\r\n```\r\n\r\n\r\nJust in case, this is the command I ran for the DPO training:\r\n```\r\naccelerate launch --config_file ./accelerate_configs/ds_zero3.yaml rlhf_dpo.py \\\r\n--model_name_or_path=\"/mnt/efs/data/tammosta/files_t/output_sft_32k\" \\\r\n--output_dir=\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k\" \\\r\n--data_path=\"/mnt/efs/data/tammosta/files_t/DPO_data_rbs_clean_AIF.json\" \\\r\n--use_lamma2_peft_config False \\\r\n--beta 0.1 \\\r\n--optimizer_type adamw_hf \\\r\n--learning_rate 1e-6 \\\r\n--warmup_steps 50 \\\r\n--per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 \\\r\n--gradient_accumulation_steps 8 \\\r\n--lora_alpha 16 \\\r\n--lora_dropout 0.05 \\\r\n--lora_r 8 \\\r\n--max_prompt_length 2048 \\\r\n--max_length 4096 \\\r\n--num_train_epochs 4 \\\r\n--logging_steps 20 \\\r\n--save_steps 100 \\\r\n--save_total_limit 8 \\\r\n--eval_steps 50 \\\r\n--gradient_checkpointing True \\\r\n--report_to \"wandb\"\r\n```", "It uses adapters in DPO training.", "Is there anything wrong in the way I'm loading the model on docker?", "@tamanna-mostafa No, I don't think so. From the current error and the files in the model repo I currently think there's two possible causes: \r\n* How the model is being loaded in the `text_generation_server` package\r\n* How the model is being saved out in the `rlhf_dpo.py` script.\r\n\r\nFor a model with adapter weights, I'd expect the adapter weights repo to look something like this: https://huggingface.co/ybelkada/opt-350m-lora/tree/main\r\n\r\nCould you share the contents of the `adapter_config.json`? 
", "@amyeroberts \r\n>How the model is being loaded in the text_generation_server package\r\n\r\nIn my understanding, I used the below steps to load the model (prior to running docker):\r\n\r\n```\r\nmodel=/data/DPO_output_mistral_32k\r\nvolume=/mnt/efs/data/tammosta/files_t:/data\r\n```\r\n\r\nHere is the contents of the `adapter_config.json`:\r\n\r\n```\r\n{\r\n \"alpha_pattern\": {},\r\n \"auto_mapping\": null,\r\n \"base_model_name_or_path\": \"/mnt/efs/data/tammosta/files_t/output_sft_32k\",\r\n \"bias\": \"none\",\r\n \"fan_in_fan_out\": false,\r\n \"inference_mode\": true,\r\n \"init_lora_weights\": true,\r\n \"layers_pattern\": null,\r\n \"layers_to_transform\": null,\r\n \"loftq_config\": {},\r\n \"lora_alpha\": 16.0,\r\n \"lora_dropout\": 0.05,\r\n \"megatron_config\": null,\r\n \"megatron_core\": \"megatron.core\",\r\n \"modules_to_save\": null,\r\n \"peft_type\": \"LORA\",\r\n \"r\": 8,\r\n \"rank_pattern\": {},\r\n \"revision\": null,\r\n \"target_modules\": [\r\n \"v_proj\",\r\n \"q_proj\",\r\n \"up_proj\",\r\n \"down_proj\",\r\n \"gate_proj\",\r\n \"o_proj\",\r\n \"k_proj\"\r\n ],\r\n \"task_type\": \"CAUSAL_LM\"\r\n```", "I'm also pasting the last 3 sections from the `rlhf_dpo.py` script. \r\n\r\n```\r\n # 5. initialize the DPO trainer\r\n dpo_trainer = DPOTrainer(\r\n model,\r\n model_ref,\r\n args=training_args,\r\n beta=script_args.beta,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n tokenizer=tokenizer,\r\n peft_config=peft_config,\r\n max_prompt_length=script_args.max_prompt_length,\r\n max_length=script_args.max_length,\r\n )\r\n\r\n # 6. train\r\n dpo_trainer.train()\r\n dpo_trainer.save_model(script_args.output_dir)\r\n\r\n # 7. save\r\n output_dir = os.path.join(script_args.output_dir, \"final_checkpoint\")\r\n dpo_trainer.model.save_pretrained(output_dir)\r\n```", "@amyeroberts \r\nHi, did you have a chance to take a look? thanks", "I'm going cc in @younesbelkada here, who knows more about the DPO trainer and expected values in the configs :) ", "Hi @tamanna-mostafa \r\nThanks for the issue! \r\nin order to run the trained adapter with TGI using Docker, you need to first merge the adapter weights into the base model, and push / save the merged weights somewhere either on the Hub or locally.\r\n\r\nBy merging the adapter weights you make sure to convert the trained model into a standalone transformers model so that it becomes compatible with TGI. Please see: https://huggingface.co/docs/peft/main/en/conceptual_guides/lora#merge-lora-weights-into-the-base-model to understand what merging means.\r\n\r\nTo merge the model, run:\r\n\r\n```python\r\nfrom peft import AutoPeftModelForCausalLM\r\n\r\nmodel = AutoPeftModelForCausalLM.from_pretrained(model_id)\r\nmodel = model.merge_and_unload()\r\n# at this point the model is a standalone transformers model\r\nmodel.push_to_hub(xxx)\r\n```", "Hi @younesbelkada \r\n\r\nThanks a lot for your comment. As the base model, I used mistral 7b that I fine-tuned with my own preference data. 
I ran the following code to merge:\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import PeftModel\r\nimport torch\r\n\r\n#base_model = \"/mnt/efs/data/tammosta/files_t/output_sft_32k\"\r\nbase_model = AutoModelForCausalLM.from_pretrained(\r\n \"/mnt/efs/data/tammosta/files_t/output_sft_32k\",\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n trust_remote_code=True,\r\n #**device_arg\r\n )\r\npeft_model_id = \"/mnt/efs/data/tammosta/files_t/DPO_output_32k_Test/final_checkpoint\"\r\nmodel = PeftModel.from_pretrained(base_model, peft_model_id)\r\nmerged_model = model.merge_and_unload()\r\nmerged_model.save_pretrained(\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k_merged\")\r\n```\r\nHowever, I'm getting the following error:\r\n```\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:01<00:00, 2.77it/s]\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/files_t/merge_peft_tammosta.py\", line 14, in <module>\r\n model = PeftModel.from_pretrained(base_model, peft_model_id)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 354, in from_pretrained\r\n model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/peft_model.py\", line 698, in load_adapter\r\n load_result = set_peft_model_state_dict(self, adapters_weights, adapter_name=adapter_name)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/utils/save_and_load.py\", line 241, in set_peft_model_state_dict\r\n load_result = model.load_state_dict(peft_model_state_dict, strict=False)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 2152, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\nRuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:\r\n size mismatch for base_model.model.model.layers.0.mlp.gate_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 16]).\r\n size mismatch for base_model.model.model.layers.0.mlp.up_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 16]).\r\n size mismatch for base_model.model.model.layers.0.mlp.down_proj.lora_A.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([16, 14336]).\r\n size mismatch for base_model.model.model.layers.1.mlp.gate_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 16]).\r\n . . .\r\n```\r\n\r\nIt looks the weights in the base model doesn't match with that in the PEFT model. 
Could you please suggest a possible way of debugging this?", "@tamanna-mostafa have you used DeepSpeed to train your adapters by any chance?", "Can you also try:\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import AutoPeftModelForCausalLM\r\nimport torch\r\n\r\npeft_model_id = \"/mnt/efs/data/tammosta/files_t/DPO_output_32k_Test/final_checkpoint\"\r\nmodel = AutoPeftModelForCausalLM.from_pretrained(peft_model_id, torch_dtype=torch.float16,)\r\nmodel = model.merge_and_unload()\r\nmodel.save_pretrained(\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k_merged\")\r\n```", "Hi @younesbelkada ,\r\nI used `DeepSpeed `to fine tune the mistral 7b (the base model).\r\nI used `accelerate launch` to train the DPO model (PEFT model).\r\nWhen I run the suggested code, I get:\r\n\r\n```\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:01<00:00, 2.64it/s]\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/files_t/merge_peft_tammosta_2.py\", line 6, in <module>\r\n model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id, torch_dtype=torch.float16,)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/peft/auto.py\", line 115, in from_pretrained\r\n tokenizer_exists = file_exists(\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 110, in _inner_fn\r\n validate_repo_id(arg_value)\r\n File \"/opt/conda/envs/ml_v4/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 158, in validate_repo_id\r\n raise HFValidationError(\r\nhuggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/mnt/efs/data/tammosta/files_t/DPO_output_32k_Test/final_checkpoint'. Use `repo_type` argument if needed.\r\n```", "@tamanna-mostafa it seems that behavior is a duplicate of https://github.com/huggingface/peft/issues/1430 - can you try to pass a relative path instead and run the script from the final checkpoint folder ? 
I'll submit a fix on PEFT", "Using the relative path, I've the issue of `size mismatch`:\r\n```\r\n(ml_v2) ubuntu@ip-172-31-32-104:/mnt/efs/data/tammosta/files_t$ python hf_test_2.py\r\nLoading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:01<00:00, 2.18it/s]\r\nTraceback (most recent call last):\r\n File \"/mnt/efs/data/tammosta/files_t/hf_test_2.py\", line 6, in <module>\r\n model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id, torch_dtype=torch.float16,)\r\n File \"/opt/conda/envs/ml_v2/lib/python3.10/site-packages/peft/auto.py\", line 127, in from_pretrained\r\n return cls._target_peft_class.from_pretrained(\r\n File \"/opt/conda/envs/ml_v2/lib/python3.10/site-packages/peft/peft_model.py\", line 354, in from_pretrained\r\n model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)\r\n File \"/opt/conda/envs/ml_v2/lib/python3.10/site-packages/peft/peft_model.py\", line 698, in load_adapter\r\n load_result = set_peft_model_state_dict(self, adapters_weights, adapter_name=adapter_name)\r\n File \"/opt/conda/envs/ml_v2/lib/python3.10/site-packages/peft/utils/save_and_load.py\", line 241, in set_peft_model_state_dict\r\n load_result = model.load_state_dict(peft_model_state_dict, strict=False)\r\n File \"/opt/conda/envs/ml_v2/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 2153, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\nRuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:\r\n size mismatch for base_model.model.model.layers.0.mlp.gate_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 16]).\r\n size mismatch for base_model.model.model.layers.0.mlp.up_proj.lora_B.default.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([14336, 16]).\r\n```\r\nI suspect it might be the PEFT config I'm using during DPO training. In the DPO training command of the mistral 7b SFT model, if I use the same PEFT config as that used for DPO-training LLAMA2 7b SFT model, then I don't have this `size mismatch` issue in the adaptor merge. \r\n\r\nHere's the PEFT config used for DOP-training LLAMA 2 7b SFT model:\r\n\r\n \r\n```\r\npeft_config = LoraConfig(\r\n r=script_args.lora_r,\r\n lora_alpha=script_args.lora_alpha,\r\n lora_dropout=script_args.lora_dropout,\r\n target_modules=[\r\n \"q_proj\",\r\n \"v_proj\",\r\n \"k_proj\",\r\n \"out_proj\",\r\n \"fc_in\",\r\n \"fc_out\",\r\n \"wte\",\r\n ],\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n )\r\n print(f\"peft_config: {peft_config}\")\r\n\r\n\r\n```\r\n\r\nWhat peft_config should I use to DPO-train a Mistral 7b SFT model? \r\nCan I use the same PEFT config as is used for DPO-training a LLAMA2 7b model (as pasted above)?\r\n", "@younesbelkada \r\nIt would be very helpful if you kindly share your thoughts on this.", "Hi @tamanna-mostafa \r\nI am going to cc @pacman100 as he is more familiar than I am with respect to interactions between DeepSpeed and PEFT" ]
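A note for readers with the same `torch.Size([0])` mismatches: zero-sized LoRA tensors usually mean the adapter was saved while its parameters were still partitioned by DeepSpeed ZeRO-3. One possible recovery path — sketched here under the assumption that the `zero_to_fp32.py` helper and `global_step*` folder listed earlier in this thread form a standard DeepSpeed checkpoint, and that a recent DeepSpeed release is installed — is to reconstruct full fp32 weights from the shards before merging:

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Directory containing "latest" and "global_step736" from the listing above (illustrative path).
checkpoint_dir = "/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k"

# Gathers the ZeRO-3 partitions back into full-shape fp32 tensors on CPU.
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)

# The LoRA matrices should now have their real shapes instead of torch.Size([0]).
for name, tensor in state_dict.items():
    if "lora_" in name:
        print(name, tuple(tensor.shape))
        break
```

Whether the reconstructed dict contains exactly the adapter weights PEFT expects depends on how the Trainer saved the checkpoint, so treat this as a starting point rather than a guaranteed fix.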
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @SunMarc @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Fine-tuned the mistral 7b model with 32k preference data. 2. Ran DPO on the SFT output. 3. Ran the `docker run` command on the DPO output to host the model on docker so I can run inferences. ### Expected behavior Expected behavior was that docker will start running. However, I got this error instead: ``` 2024-01-24T20:31:06.334853Z ERROR text_generation_launcher: Error when initializing model Traceback (most recent call last): File "/opt/conda/bin/text-generation-server", line 8, in <module> sys.exit(app()) File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 311, in __call__ return get_command(self)(*args, **kwargs) File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1157, in __call__ return self.main(*args, **kwargs) File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 778, in main return _main( File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 216, in _main rv = self.invoke(ctx) File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1688, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 783, in invoke return __callback(*args, **kwargs) File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 683, in wrapper return callback(**use_params) # type: ignore File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 83, in serve server.serve( File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 207, in serve asyncio.run( File "/opt/conda/lib/python3.9/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete self.run_forever() File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 601, in run_forever self._run_once() File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once handle._run() File "/opt/conda/lib/python3.9/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) > File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 159, in serve_inner model = get_model( File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 129, in get_model config_dict, _ = PretrainedConfig.get_config_dict( File "/opt/conda/lib/python3.9/site-packages/transformers/configuration_utils.py", line 620, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "/opt/conda/lib/python3.9/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict 
resolved_config_file = cached_file( File "/opt/conda/lib/python3.9/site-packages/transformers/utils/hub.py", line 400, in cached_file raise EnvironmentError( OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/DPO_output_mistral_32k/None' for available files. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28688/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28687/comments
https://api.github.com/repos/huggingface/transformers/issues/28687/events
https://github.com/huggingface/transformers/pull/28687
2,099,153,324
PR_kwDOCUB6oc5lAAa8
28,687
[Whisper] Refactor forced_decoder_ids & prompt ids
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28687). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Still need to make sure that \"Whisper default language detection behavior\" stays the same. See: https://huggingface.co/openai/whisper-large-v3/discussions/71#65b78d5a34297095bcecfe78", "**Update**:\r\n\r\nThis PR now does two additonal things:\r\n- `generation_config` is always preferred over `config.json`\r\n- By default multilingual Whisper does language detection followed by transcription\r\n\r\ncc @sanchit-gandhi ", "- [x] TODO(Patrick) Make sure to run all slow tests" ]
1,706
1,706
1,706
MEMBER
null
# What does this PR do? This PR refactors `forced_decoder_ids`, making sure that we now always pass prompted ids as `decoder_input_ids` into generate for Whisper. The whole idea of forcing ids instead of just passing them as initial tokens was a bad design choice and we should try to move away from it. In addition, Whisper prompting is improved by: - No longer allowing `prompt_ids` to be passed as a numpy array - Enabling `prompt_ids` for long-form generation with two modes: - a) prompt only the first segment - b) prompt every segment While a) is the only case supported in the original Whisper repo, b) can be very useful, as can be seen in the added slow test [here](https://github.com/huggingface/transformers/pull/28687/files#r1467376608). This is the final code PR regarding Whisper for Transformers. In the coming weeks, focus will be put on writing nice docs, tutorials and blog posts.
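As a rough illustration of the prompting interface discussed above (a sketch, not code from this PR — the checkpoint is a placeholder and the audio is one second of silence just to keep the snippet self-contained):

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

raw_audio = np.zeros(16_000, dtype=np.float32)  # placeholder waveform (1 s of silence)
input_features = processor(raw_audio, sampling_rate=16_000, return_tensors="pt").input_features

# prompt_ids nudge the decoder toward domain spellings/terminology in the transcription
prompt_ids = processor.get_prompt_ids("Hugging Face Transformers", return_tensors="pt")
generated = model.generate(input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(generated, skip_special_tokens=True))
```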
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28687/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28687", "html_url": "https://github.com/huggingface/transformers/pull/28687", "diff_url": "https://github.com/huggingface/transformers/pull/28687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28687.patch", "merged_at": 1706702527000 }
https://api.github.com/repos/huggingface/transformers/issues/28686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28686/comments
https://api.github.com/repos/huggingface/transformers/issues/28686/events
https://github.com/huggingface/transformers/pull/28686
2,099,095,224
PR_kwDOCUB6oc5k_zd4
28,686
Enable Gradient Checkpointing in Deformable DETR
{ "login": "FoamoftheSea", "id": 50897218, "node_id": "MDQ6VXNlcjUwODk3MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/50897218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FoamoftheSea", "html_url": "https://github.com/FoamoftheSea", "followers_url": "https://api.github.com/users/FoamoftheSea/followers", "following_url": "https://api.github.com/users/FoamoftheSea/following{/other_user}", "gists_url": "https://api.github.com/users/FoamoftheSea/gists{/gist_id}", "starred_url": "https://api.github.com/users/FoamoftheSea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FoamoftheSea/subscriptions", "organizations_url": "https://api.github.com/users/FoamoftheSea/orgs", "repos_url": "https://api.github.com/users/FoamoftheSea/repos", "events_url": "https://api.github.com/users/FoamoftheSea/events{/privacy}", "received_events_url": "https://api.github.com/users/FoamoftheSea/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that fix-copies made changes to DETA model because it has the \"# copied from\" comments over methods taken from Deformable DETR. If we don't want to change that model, we could instead remove those comments and revert the changes to that file.", "Hi @FoamoftheSea, thanks for working on this!\r\n\r\nThere's another open PR which addresses enabling gradient checkpointing for DETA -- #28615. What I'd suggest is removing the relevant `# Copied from` headers in DETA and the changes in its modeling file. \r\n\r\nThis way we can include both of your respective contributions without clashes.", "@amyeroberts All done!", "No problem! πŸ˜€", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28686). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Gradient Checkpointing is not currently supported by Deformable DETR, but with slight modifications I was able to get it working in both the encoder and decoder stages, which both independently led to noticeable reductions in VRAM usage during training. This makes a default Deformable DETR configuration trainable on a 4GB GPU. @amyeroberts
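For readers wanting to use the feature, gradient checkpointing is toggled through the standard `PreTrainedModel` switch; the sketch below uses an illustrative checkpoint and a dummy annotation just to show the training-side call pattern:

```python
import torch
from transformers import DeformableDetrForObjectDetection

model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
model.gradient_checkpointing_enable()  # recompute activations in backward to save VRAM
model.train()

pixel_values = torch.randn(1, 3, 800, 800)                       # dummy image batch
labels = [{"class_labels": torch.tensor([1]),                    # dummy annotation
           "boxes": torch.tensor([[0.5, 0.5, 0.2, 0.2]])}]

outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()                                          # checkpointed segments are recomputed here
```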
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28686", "html_url": "https://github.com/huggingface/transformers/pull/28686", "diff_url": "https://github.com/huggingface/transformers/pull/28686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28686.patch", "merged_at": 1706523041000 }
https://api.github.com/repos/huggingface/transformers/issues/28685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28685/comments
https://api.github.com/repos/huggingface/transformers/issues/28685/events
https://github.com/huggingface/transformers/issues/28685
2,099,080,596
I_kwDOCUB6oc59HW2U
28,685
torch.arange should not use dtype=float for integer ranges; conflicts w/ DS `zero.Init()`
{ "login": "rwightman", "id": 5702664, "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rwightman", "html_url": "https://github.com/rwightman", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "organizations_url": "https://api.github.com/users/rwightman/orgs", "repos_url": "https://api.github.com/users/rwightman/repos", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "received_events_url": "https://api.github.com/users/rwightman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Code instances where this either definitely a concern, or likely (depending on ranges involved).\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/llama/modeling_llama.py#L130-L131\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/llama/modeling_llama.py#L140\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/llama/modeling_llama.py#L168\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/llama/modeling_llama.py#L195\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/examples/research_projects/bertabs/modeling_bertabs.py#L265-L266\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/codegen/modeling_codegen.py#L55-L59\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/conditional_detr/modeling_conditional_detr.py#L437-L455\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/conditional_detr/modeling_conditional_detr.py#L496-L509\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/ctrl/modeling_ctrl.py#L47-L60\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L494-L495\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L620-L621\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1542-L1543\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deprecated/transfo_xl/modeling_transfo_xl.py#L945-L946\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deta/modeling_deta.py#L404-L405\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deta/modeling_deta.py#L529-L530\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/deta/modeling_deta.py#L1453-L1454\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/detr/modeling_detr.py#L438-L439\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/falcon/modeling_falcon.py#L151-L152\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/falcon/modeling_falcon.py#L180\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/falcon/modeling_falcon.py#L208\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/fastspeech2_conformer/modeling_fastspeech2_conformer.py#L823-L826\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/fsmt/modeling_fsmt.py#L1349-
L1351\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/funnel/modeling_funnel.py#L235-L267\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/fuyu/image_processing_fuyu.py#L687-L690\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L547\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L576\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L604\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py#L255\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/gptj/modeling_gptj.py#L60\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/idefics/modeling_idefics.py#L480\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/kosmos2/modeling_kosmos2.py#L776-L780\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/m2m_100/modeling_m2m_100.py#L114-\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mask2former/modeling_mask2former.py#L863-L864\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mask2former/modeling_mask2former.py#L2132-L2133\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/maskformer/modeling_maskformer.py#L1354-L1355\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mega/modeling_mega.py#L172-L174\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mistral/modeling_mistral.py#L109-L110\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mistral/modeling_mistral.py#L109-L110\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mixtral/modeling_mixtral.py#L202-L203\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/mpt/modeling_mpt.py#L69-L70\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/musicgen/modeling_musicgen.py#L129-L131\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/nezha/modeling_nezha.py#L153-L155\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/nllb_moe/modeling_nllb_moe.py#L167-L169\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/oneformer/modeling_oneformer.py#L2803-L2804\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/pegasus_x/modeling_pe
gasus_x.py#L112-L113\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/persimmon/modeling_persimmon.py#L61-L62\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/persimmon/modeling_persimmon.py#L90-L91\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/persimmon/modeling_persimmon.py#L118-L119\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/phi/modeling_phi.py#L99\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/phi/modeling_phi.py#L128\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/phi/modeling_phi.py#L156\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/qwen2/modeling_qwen2.py#L116\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py#L417\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/seamless_m4t/modeling_seamless_m4t.py#L1024-L1026\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py#L980-L982\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L133-L136\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/speecht5/modeling_speecht5.py#L316-L318\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/speecht5/modeling_speecht5.py#L406-L407\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/swin2sr/modeling_swin2sr.py#L293-L296\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/swinv2/modeling_swinv2.py#L449-L451\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/table_transformer/modeling_table_transformer.py#L374-L375\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/trocr/modeling_trocr.py#L88-L90\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py#L315\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L445-L448\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/xglm/modeling_xglm.py#L160-L162\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/xlnet/modeling_xlnet.py#L1023-L1038\r\nhttps://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/examples/research_projects/visual_bert/modeling_frcnn.py#L171-L192", "If we look at the original Llama code, this issue is 
avoided.\r\n\r\nhttps://github.com/facebookresearch/llama/blob/ef351e9cd9496c579bf9f2bb036ef11bdc5ca3d2/llama/model.py#L100-L104\r\n```python\r\n freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim))\r\n t = torch.arange(end, device=freqs.device) # type: ignore\r\n freqs = torch.outer(t, freqs).float() # type: ignore\r\n freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64\r\n```\r\n\r\nFor GPT-NeoX, it's also avoided, but problematic in transformers (see above).\r\n\r\nhttps://github.com/EleutherAI/gpt-neox/blob/63991555ec082c8f80c475f851d008193b10008c/megatron/model/positional_embeddings.py#L27-L34\r\n```\r\n t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq)\r\n sinusoid_inp = torch.einsum(\"i,j->ij\", t, self.inv_freq)\r\n if self.precision == torch.bfloat16:\r\n sinusoid_inp = sinusoid_inp.float()\r\n sin, cos = sinusoid_inp.sin(), sinusoid_inp.cos()\r\n if self.precision == torch.bfloat16:\r\n sin, cos = sin.bfloat16(), cos.bfloat16()\r\n emb = torch.cat((sin, cos), dim=-1)\r\n```", "I believe this is the problem being seen in this issue https://github.com/microsoft/DeepSpeed/issues/4932 and also seeing now this may be a dupe of https://github.com/huggingface/transformers/issues/28596\r\n", "Related to this possible concern with the zero.Init() overriding dtype for arange (and I did confirm this is a problem with a test bench), there's also an overlapping issue that's been brought up before in e.g. #25681 but I don't think fully addressed as that improvement focused on rescaling at runtime for larger seq len, this one is due to zero.Init() overriding the device arg for tensor creation fns and having the init done on a non-CPU device.\r\n\r\nWhen a library like DeepSpeed forces the calculation of the cached RoPe sin/cos/freq values onto the GPU it is wrong compared to the CPU calcs due to a rather nasty combo of floating point ops that differ enough to have a significant impact (div, pow, outer product, convert to low precision), ~5e-4 in float16 and 2e-3 eps for Llama. This results in model logit values differing by close to 1.0. This is with the calcs forced to float32 (so explicitly avoiding doing them in low precision), even doing the calculations in double precision is not enough to avoid problematic differences between GPU and CPU. \r\n\r\nThe only approach that seems viable is ensuring the init of those constants are always done on CPU (requires extra workarounds to prevent DeepSpeed from forcing onto GPU) and then at the very last step before they're used, do the cast to computation dtype. I trialed an approach that's related to an Eleuther workaround in their lib, but it likely has some breaking concerns with other use cases like tracing, etc.\r\nhttps://github.com/microsoft/DeepSpeed/issues/4932#issuecomment-1911277956\r\n\r\nEDIT: also think we should be forcing RoPE embeddings to be applied in float32 instead of default computation dtype. I think the original Llama is doing this but transformers is not.", "First things off: should we run RoPE in FP32? Are buffers a problem?\r\n\r\nI've done a quick perplexity benchmark of `meta-llama/Llama-2-7b-hf` with BF16 (from `main`) vs BF16 with RoPE being computed in FP32 without buffers (from [this commit](https://github.com/gante/transformers/commit/ade030977df3e2939556b4f9db317151bed7bbe4)), adapting the ppl benchmark scripts from [this comment](https://github.com/huggingface/transformers/pull/26933#issuecomment-1789646553). 
\r\n\r\nHere are the results:\r\n![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/f026d235-d074-445e-b088-110bec054f0b)\r\n\r\nWe can see a very tiny PPL upgrade, almost negligible. This comes at the expense of an equally small increase in GPU memory requirements.\r\n\r\n![plot_latency](https://github.com/huggingface/transformers/assets/12240844/bc9d87f8-c512-4a8b-8139-6eebf00fec87)\r\n\r\nOn the latency side, we see that going beyond the original context length is much more expensive -- the new `sin`/`cos` must to be computed in FP32, which is more expensive.\r\n\r\nπŸ‘‰ To me, this indicates that changing RoPE computations to FP32 is not worth it. Happy to do more experiments, if you suspect this logic may be flawed in some settings/other models πŸ€— (cc @ArthurZucker, we've chatted about this a few days ago)\r\n\r\nπŸ‘‰ @rwightman could this difference be more pronounced in DeepSpeed? I have no DeepSpeed experience. \r\n\r\n\r\n", "Regarding the Deepspeed init issues, going to open a PR to fix it πŸ’ͺ ", "@gante are you sure the perplexity test is representative of a wide enough range of use? In the particular users case, I was testing with their input vectors and the logits are significantly different with embeddings calc in bfloat16. Computing pos embeds in float32 on CPU and applying just the calculated embedding in lower precision wasn't too bad... \r\n\r\nSimilarly comparing the embedding values themselves, fully calculated on a different device, or calculated in low precision, the differences in the embedding floats is well beyond a range I'd be comfortable with... might be something we'd want to consider allowing the user to make their own tradeoffs via config... ", "@rwightman Absolutely not, the data/model space to test on is too large to measure! \r\n\r\nHowever, since the latency penalty of applying the change in all cases is non-negligible and exposing the ability to recompute the buffer in fp32 adds yet another flag, I'd like to have a reproducible example of a failure case in `transformers` -- at least to fully understand what's going on. That's why I added the note that I'm fully open to run more experiments, as long as I have some pointers.\r\n\r\nThere are many competing requests. Sadly, it's hard to find the time to do a proper deep dive to find a failure mode πŸ€— ", "@gante understood, with the cached values the runtime latency for the 'okay' case I described should be non-existent though... namely,\r\n1. calculate the values on the cpu, in float32 as a rule\r\n2. cast to the usage dtype, eg store sin/cos embeds in bfloat16 on the target device)\r\n\r\nThere would be no memory or runtime overhead (other than when a new seq length is switched to), but the pos embed values would be significantly closer to their intended values.", "@gante from my personal test, changing the inv_freq to float32 can increase performances on MMLU of about 20points.\r\nThere are a few things to test:\r\n- `inv_freq` is a buffer, it's saved, then casted to the `dtype` you ask. But if you load with `torch_dytpe` then it's going to be a float32 casted to the `torch_dytpe`. \r\n- cos and sin can be computed in float32 or query type. 
Computing in float prevents overflow (known) of rope, and mostly does not slow down\r\n- q and k can be upcasted to float32, sin and cos computed in float32 then cast back everything.\r\n- vs mixed precision where q and k are not upcasted, but sin and cos are in float32.\r\n\r\n\r\nI don't think perplexity is something we should ever use for these kind of tests, but rather proper generations / bench: \r\n- perplexity will be similar as this only affect the logits a tad bit \r\n- both generations will make sense. But the distribution are little by little shifted, and even small answers can be affected. ", "Also, perplexity is an average score, I'm not overly familiar with the typical test data, but I assume it's probably not pushing corner cases? well formed? \r\n\r\nWhat I was looking at comparing some forward pass outputs with different cpu vs gpu, bfloat16 vs float16 vs float32 precisions for computing those sin/cos embeds, the differences were significant. The logit (output of the model) differences could also be quite significant, but I was looking at worst case, not average logit diffs, the mean is pretty unintersting, most logits were close, but the worst ones were well outside the range I'd consider reasonable... and it's the worst case that cause networks to blow up...", "Perhaps my message above came across incorrectly πŸ˜… \r\n\r\nI trust what you wrote, that it should be computed in FP32. I meant that we should have a few concrete failure examples to a) test against and prevent regressions and b) document why we made modeling changes (especially ones that increase HW requirements). Since I had yet to come across this particular issue and wasn't aware of the type of numerical issue (systemic drift vs infrequent failure), a few extra pointers were needed to speed things up. A reproducible script would be even better. We do request this on our contributors' issues πŸ€— \r\n\r\nNow I have a more precise target, which facilitates search: infrequent large differences. I'm going to dig up a clear example and open a PR to add the corresponding RoPE fix and tests.", "An example of a failure case is below. \r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\nimport torch\r\n\r\nmodel_1 = AutoModelForCausalLM.from_pretrained(\r\n \"HuggingFaceM4/tiny-random-LlamaForCausalLM\",\r\n device_map=\"auto\",\r\n torch_dtype=torch.bfloat16,\r\n)\r\nmodel_2 = AutoModelForCausalLM.from_pretrained(\r\n \"HuggingFaceM4/tiny-random-LlamaForCausalLM\",\r\n device_map=\"auto\",\r\n).to(torch.bfloat16)\r\n\r\n# `torch_dtype=...` doesn't cast explicitly set types, `.to(...)` does\r\nassert model_1.model.layers[0].self_attn.rotary_emb.inv_freq.dtype == torch.float32\r\nassert model_2.model.layers[0].self_attn.rotary_emb.inv_freq.dtype == torch.bfloat16\r\n\r\n# sequence length smaller than the initialized length (2048) -> no problem\r\ninput_ids = torch.randint(0, 32000, (1, 1024)).to(\"cuda\")\r\nmodel_1_out = model_1(input_ids)\r\nmodel_2_out = model_2(input_ids)\r\nassert torch.allclose(model_1_out.logits, model_2_out.logits)\r\n\r\n# sequence length larger than the initialized length (2048) -> problem\r\n# why? 
larger than initialized length -> sin/cos have to be recomputed -> the different type of non-permanent buffers\r\n# will have an impact\r\ninput_ids = torch.randint(0, 32000, (1, 2049)).to(\"cuda\")\r\nmodel_1_out = model_1(input_ids)\r\nmodel_2_out = model_2(input_ids)\r\nassert torch.allclose(model_1_out.logits, model_2_out.logits)\r\n```\r\n\r\nIt is extremely easy to find the bug when `.to()` is used instead of `torch_dtype` -- but only after the original sequence length on `main`, due to the existing order of operations. Anything that fiddles with types at initialization time (like DeepSpeed) will run into the problem immediately, even before breaking the sequence length.\r\n\r\nThe same perplexity script can also find the problem, using the `.to()` method to cast the model:\r\n![plot_perplexity_vram](https://github.com/huggingface/transformers/assets/12240844/6749f4b5-c583-499d-8b94-5c7e68fc64be)\r\n\r\nπŸ‘‰ moving forward with the PR to ensure this OP stays in FP32 and these artefacts are no longer present ", "super nice πŸ€— " ]
1,706
1,706
null
NONE
null
### System Info Impacts many versions of transformers up to and including current. ### Who can help? @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use a number of transformers models that utilize arange for integer enumerations in the calculation of position embeddings with DeepSpeed zero.Init() and a low precision dtype (float16, bfloat16), and the generated embeddings will differ significantly from intended. Using Llama as an example `t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)` The inv_freq.dtype == float32. Single precision float can cover the required integer range for the enumeration (I believe it's in the 2k-8k range for Llama?). However, when DeepSpeed zero.Init is used the init function patching will override the float dtype passed in with a low precision float dtype, so float32 -> bfloat16 or float16. Thus the integer range that can be represented without significant loss drops down to 256 for bfloat16 or 2048 for float16. DeepSpeed's patching has an exception for integer dtype, it will not cast arange to the low precision float dtype if arange dtype is an int type. https://github.com/microsoft/DeepSpeed/blob/0dd0c615f8e6c7947ba81a4b0993284da5ec3209/deepspeed/runtime/zero/partition_parameters.py#L245-L246 ``` def zero_wrapper_for_fp_tensor_constructor(fn: Callable, target_fp_dtype: torch.dtype) -> Callable: def wrapped_fn(*args, **kwargs) -> Tensor: if kwargs.get("device", None) is None: kwargs['device'] = torch.device(get_accelerator().device_name(os.environ["LOCAL_RANK"])) tensor: Tensor = fn(*args, **kwargs) if tensor.is_floating_point(): tensor.data = tensor.data.to(target_fp_dtype) return tensor return wrapped_fn ``` torch.arange defaults to an integer dtype if start/end/step are ints. In this case though it's best to be explicit to make intent clear, we should explictly set dtype=torch.long (or torch.int64 depending on your tastes). Casting to float should be done after the arange. Additionally, in many position embedding calculation scenarios, it's best to try and keep the calculations in float32 as long as possible, doing final conversion to low precision type at the very end (if that's the dtype of inference or training). ### Expected behavior Use of torch.arange should explicitly set dtype=torch.long (or int64). Ex: for Llama, `t = torch.arange(self.max_seq_len_cached, device=device).type_as(self.inv_freq)`
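To make the proposed fix concrete, here is a minimal sketch (not the exact patch that was merged) of the pattern this issue asks for: build the enumeration with an explicit integer dtype so wrappers such as DeepSpeed's `zero.Init()` cannot silently retype it, keep the sin/cos math in float32, and cast to the runtime dtype only at the end.

```python
import torch

def rope_cache(dim: int, max_positions: int, base: float = 10000.0, dtype=torch.bfloat16):
    # Explicit int64 arange: zero.Init() only retypes floating-point tensors,
    # so the enumeration keeps full integer precision regardless of the wrapper.
    t = torch.arange(max_positions, dtype=torch.int64).float()
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.int64).float() / dim))

    freqs = torch.outer(t, inv_freq)          # all float32 up to this point
    emb = torch.cat((freqs, freqs), dim=-1)
    # cast to the low-precision compute dtype only once the values are exact
    return emb.cos().to(dtype), emb.sin().to(dtype)

cos, sin = rope_cache(dim=128, max_positions=4096)
print(cos.dtype, cos.shape)  # torch.bfloat16 torch.Size([4096, 128])
```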
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28685/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28685/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28684/comments
https://api.github.com/repos/huggingface/transformers/issues/28684/events
https://github.com/huggingface/transformers/pull/28684
2,098,841,913
PR_kwDOCUB6oc5k-7oe
28,684
[docs] Fix doc format
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
MEMBER
null
Closes an open `<hfoptions>` tag in the DeepSpeed docs :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28684", "html_url": "https://github.com/huggingface/transformers/pull/28684", "diff_url": "https://github.com/huggingface/transformers/pull/28684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28684.patch", "merged_at": 1706123940000 }
https://api.github.com/repos/huggingface/transformers/issues/28683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28683/comments
https://api.github.com/repos/huggingface/transformers/issues/28683/events
https://github.com/huggingface/transformers/issues/28683
2,098,818,468
I_kwDOCUB6oc59GW2k
28,683
Add option to suppress progress bar in train log output
{ "login": "cohml", "id": 62400541, "node_id": "MDQ6VXNlcjYyNDAwNTQx", "avatar_url": "https://avatars.githubusercontent.com/u/62400541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cohml", "html_url": "https://github.com/cohml", "followers_url": "https://api.github.com/users/cohml/followers", "following_url": "https://api.github.com/users/cohml/following{/other_user}", "gists_url": "https://api.github.com/users/cohml/gists{/gist_id}", "starred_url": "https://api.github.com/users/cohml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cohml/subscriptions", "organizations_url": "https://api.github.com/users/cohml/orgs", "repos_url": "https://api.github.com/users/cohml/repos", "events_url": "https://api.github.com/users/cohml/events{/privacy}", "received_events_url": "https://api.github.com/users/cohml/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "@cohml \r\nhave you tried adding the `disable_tqdm` parameter to your `TrainingArguments` ? \r\n\r\nyou can read more about it here : https://huggingface.co/docs/transformers/v4.37.1/en/main_classes/trainer#transformers.TrainingArguments.disable_tqdm", "Oh man, that's exactly what I was after! Thanks so much @not-lain for pointing it out! I will close this issue now." ]
1,706
1,706
1,706
NONE
null
### Feature request When training a transformer model with `transformers` - certainly using the `Trainer.train` API but probably also with other methods as well - a `tqdm`-style progress bar is printed to the screen. This is very useful when monitoring training in the terminal in real time. But it really messes up logging when this output is piped to a file. This is because the progress bar appears to use the carriage return character to give the illusion of refreshing the bar, but that character wreaks havoc when trying to `cat`, `less`, or `grep` through a file. Here's an example of what I mean: ```bash ❯ grep -c eval_mse train.log 85 ❯ grep eval_mse train.log 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 850/850 [1:41:01<00:00, 4.69s/it]{'eval_loss': 0.6620487570762634, 'eval_mse': 0.8136637373073005, 'eval_qwk': 0.7408296916568654, 'eval_runtime': 15.4491, 'eval_samples_per_second': 43.239, 'eval_steps_per_second': 2.719, 'epoch': 49.41} {'eval_loss': 0.6605485081672668, 'eval_mse': 0.812741345170533, 'eval_qwk': 0.7413115848125768, 'eval_runtime': 15.2233, 'eval_samples_per_second': 43.88, 'eval_steps_per_second': 2.759, 'epoch': 50.0}``` ``` This shows that my log file has 85 lines with the substring `eval_mse`, but when I try to view the individual lines themselves, the carriage returns eats almost all the output. Meanwhile, manually replacing those characters shows all the matches (only last 5 shown here for brevity): ```bash ❯ sed 's/\r/\n/g' train.log | grep eval_mse | nl | tail -5 81 96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 820/850 [1:37:14<03:08, 6.27s/it]{'eval_loss': 0.6764867305755615, 'eval_mse': 0.8224881544548046, 'eval_qwk': 0.7378055733442015, 'eval_runtime': 14.7125, 'eval_samples_per_second': 45.404, 'eval_steps_per_second': 2.855, 'epoch': 47.65} 82 98%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 830/850 [1:38:24<02:07, 6.36s/it]{'eval_loss': 0.6555904746055603, 'eval_mse': 0.8096854324958218, 'eval_qwk': 0.7447225139461018, 'eval_runtime': 14.7862, 'eval_samples_per_second': 45.177, 'eval_steps_per_second': 2.84, 'epoch': 48.24} 83 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 840/850 [1:39:47<01:48, 10.82s/it]{'eval_loss': 0.6539692878723145, 'eval_mse': 0.8086836641571903, 'eval_qwk': 0.744001775888472, 'eval_runtime': 14.8626, 'eval_samples_per_second': 44.945, 'eval_steps_per_second': 2.826, 'epoch': 48.82} 84 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 850/850 [1:41:01<00:00, 4.69s/it]{'eval_loss': 0.6620487570762634, 'eval_mse': 0.8136637373073005, 'eval_qwk': 0.7408296916568654, 'eval_runtime': 15.4491, 'eval_samples_per_second': 43.239, 'eval_steps_per_second': 2.719, 'epoch': 49.41} 85 {'eval_loss': 0.6605485081672668, 'eval_mse': 0.812741345170533, 'eval_qwk': 0.7413115848125768, 'eval_runtime': 15.2233, 'eval_samples_per_second': 43.88, 'eval_steps_per_second': 2.759, 'epoch': 50.0} ``` This kind of thing has inconvenienced me many times. So it would be very nice and make the logging more readable when captured via a pipeline if this progress bar could be optionally disabled. ### Motivation The motivation for this feature comes from training scenarios where the logging output is captured in a file or other log stream that persists. The progress bar is hlepful for watching training in real time. However once an experiment is finished, if the logging output is stored for future consultation, the progress bar just becomes an obstacle to work around. So if users could opt out of it, that would be great. 
### Your contribution I would love to contribute a solution here, but the `transformers` code base is so vast that I have no idea where to begin. If a core dev could provide some pointers to help get me started, that would be much appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28683/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28682/comments
https://api.github.com/repos/huggingface/transformers/issues/28682/events
https://github.com/huggingface/transformers/pull/28682
2,098,597,320
PR_kwDOCUB6oc5k-GPD
28,682
Add artifact name in jobs' step to maintain jobs and artifacts correspondence
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Before merge, I will change all workflow files that use `notification_service.py`", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28682). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "A follow up PR based on this one is #28773" ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? When our (actual) CI workflow files are called via the `workflow_call` event by other workflow files, the job names will be concatenated like `Nightly CI / Model Test (models/bert, single-gpu)`. We currently have Nightly/Past/AMD CI using `workflow_call`. We will soon have to use it for daily CI too due to the 256 matrix jobs limit of GitHub Actions. So the (model test) job names in daily CI will become something like `Part (0) / Model Test (models/bert, single-gpu)` and nightly CI will have `Nightly CI / Part (0) / Model Test (models/bert, single-gpu)`. _This makes it harder for `utils/notification_service.py` to handle the job names correctly and get the job links._ **This PR implements a new approach to maintain the correspondence between jobs, links and artifacts, so `utils/notification_service.py` can obtain the necessary information more easily.**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28682/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28682/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28682", "html_url": "https://github.com/huggingface/transformers/pull/28682", "diff_url": "https://github.com/huggingface/transformers/pull/28682.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28682.patch", "merged_at": 1706713097000 }
https://api.github.com/repos/huggingface/transformers/issues/28681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28681/comments
https://api.github.com/repos/huggingface/transformers/issues/28681/events
https://github.com/huggingface/transformers/pull/28681
2,098,331,447
PR_kwDOCUB6oc5k9L1D
28,681
Add back in generation types
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts uhmmm, I didn't think anyone would be importing these auxiliary types πŸ€” \r\n\r\nHow can we deprecate a type variable? I have no idea how to throw a warning if a user uses it as a type :D ", "@gante Hyrum's law strikes again! I didn't think they'd be used either, which is partly why I wasn't checking rigorously in the PR review. Normally, something like this would have minimal impact which would be easy to remedy on the user side, and removing would be OK. However in this case the issue had a big impact, with lots of activity in the issue, which is why we decided to add back in. \r\n\r\nI don't think there's any easy way to deprecate a type unfortunately. Or, at least I don't know and wasn't able to find one. ", "@amyeroberts note to self: don't create auxiliary types ☠️ " ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? #28494 removed some custom types in the `generation.utils` module. This has caused downstream issues in other libraries, notably Coqui-TTS (cf. #28649). This PR adds them back in so they're still importable. cc @gante for reference when you're back
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28681", "html_url": "https://github.com/huggingface/transformers/pull/28681", "diff_url": "https://github.com/huggingface/transformers/pull/28681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28681.patch", "merged_at": 1706107050000 }
https://api.github.com/repos/huggingface/transformers/issues/28680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28680/comments
https://api.github.com/repos/huggingface/transformers/issues/28680/events
https://github.com/huggingface/transformers/pull/28680
2,098,278,568
PR_kwDOCUB6oc5k9ATB
28,680
fix: readme
{ "login": "ThibaultLengagne", "id": 11950126, "node_id": "MDQ6VXNlcjExOTUwMTI2", "avatar_url": "https://avatars.githubusercontent.com/u/11950126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThibaultLengagne", "html_url": "https://github.com/ThibaultLengagne", "followers_url": "https://api.github.com/users/ThibaultLengagne/followers", "following_url": "https://api.github.com/users/ThibaultLengagne/following{/other_user}", "gists_url": "https://api.github.com/users/ThibaultLengagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThibaultLengagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThibaultLengagne/subscriptions", "organizations_url": "https://api.github.com/users/ThibaultLengagne/orgs", "repos_url": "https://api.github.com/users/ThibaultLengagne/repos", "events_url": "https://api.github.com/users/ThibaultLengagne/events{/privacy}", "received_events_url": "https://api.github.com/users/ThibaultLengagne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28680/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28680", "html_url": "https://github.com/huggingface/transformers/pull/28680", "diff_url": "https://github.com/huggingface/transformers/pull/28680.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28680.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28679/comments
https://api.github.com/repos/huggingface/transformers/issues/28679/events
https://github.com/huggingface/transformers/issues/28679
2,098,184,627
I_kwDOCUB6oc59D8Gz
28,679
GPT2 after few finetune epochs starts to generate sequence of only EOS tokens
{ "login": "tempdeltavalue", "id": 36921178, "node_id": "MDQ6VXNlcjM2OTIxMTc4", "avatar_url": "https://avatars.githubusercontent.com/u/36921178?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tempdeltavalue", "html_url": "https://github.com/tempdeltavalue", "followers_url": "https://api.github.com/users/tempdeltavalue/followers", "following_url": "https://api.github.com/users/tempdeltavalue/following{/other_user}", "gists_url": "https://api.github.com/users/tempdeltavalue/gists{/gist_id}", "starred_url": "https://api.github.com/users/tempdeltavalue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tempdeltavalue/subscriptions", "organizations_url": "https://api.github.com/users/tempdeltavalue/orgs", "repos_url": "https://api.github.com/users/tempdeltavalue/repos", "events_url": "https://api.github.com/users/tempdeltavalue/events{/privacy}", "received_events_url": "https://api.github.com/users/tempdeltavalue/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "@amyeroberts \r\nOk, thank you \r\n(sorry)", "![vvvvvvvvv](https://github.com/huggingface/transformers/assets/36921178/9553ed9c-6be0-4238-b7df-11f0a57ff009)\r\n\r\nprobably it's a some kind of bug (I can silence this left padding warning but it doesn't change anything some inputs just generates eos eos eos ...)" ]
1,706
1,706
null
NONE
null
### System Info I get output like this: <|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Check this ipynb https://github.com/tempdeltavalue/temp_l/blob/main/finetune_seq2seq.ipynb ### Expected behavior I expect the model to return something
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28679/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/28678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28678/comments
https://api.github.com/repos/huggingface/transformers/issues/28678/events
https://github.com/huggingface/transformers/pull/28678
2,098,138,688
PR_kwDOCUB6oc5k8hsy
28,678
use scaled_dot_product_attention
{ "login": "lintangsutawika", "id": 5774558, "node_id": "MDQ6VXNlcjU3NzQ1NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/5774558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lintangsutawika", "html_url": "https://github.com/lintangsutawika", "followers_url": "https://api.github.com/users/lintangsutawika/followers", "following_url": "https://api.github.com/users/lintangsutawika/following{/other_user}", "gists_url": "https://api.github.com/users/lintangsutawika/gists{/gist_id}", "starred_url": "https://api.github.com/users/lintangsutawika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lintangsutawika/subscriptions", "organizations_url": "https://api.github.com/users/lintangsutawika/orgs", "repos_url": "https://api.github.com/users/lintangsutawika/repos", "events_url": "https://api.github.com/users/lintangsutawika/events{/privacy}", "received_events_url": "https://api.github.com/users/lintangsutawika/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,706
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28678/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28678", "html_url": "https://github.com/huggingface/transformers/pull/28678", "diff_url": "https://github.com/huggingface/transformers/pull/28678.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28678.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28677/comments
https://api.github.com/repos/huggingface/transformers/issues/28677/events
https://github.com/huggingface/transformers/issues/28677
2,097,482,345
I_kwDOCUB6oc59BQpp
28,677
Cannot find checkpoint during Trainer._load_best_model when using deepspeed
{ "login": "nathan-az", "id": 42650258, "node_id": "MDQ6VXNlcjQyNjUwMjU4", "avatar_url": "https://avatars.githubusercontent.com/u/42650258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nathan-az", "html_url": "https://github.com/nathan-az", "followers_url": "https://api.github.com/users/nathan-az/followers", "following_url": "https://api.github.com/users/nathan-az/following{/other_user}", "gists_url": "https://api.github.com/users/nathan-az/gists{/gist_id}", "starred_url": "https://api.github.com/users/nathan-az/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nathan-az/subscriptions", "organizations_url": "https://api.github.com/users/nathan-az/orgs", "repos_url": "https://api.github.com/users/nathan-az/repos", "events_url": "https://api.github.com/users/nathan-az/events{/privacy}", "received_events_url": "https://api.github.com/users/nathan-az/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing this.\r\n\r\nThe docs are clear that the issue is [save_only_model](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_only_model): `Note that when this is true, you won’t be able to resume training from checkpoint`.\r\n\r\nIt would be nice to have a version of the model checkpointing that saves the model files using a distributed equivalent of `save_pretrained` which retains the best model, without the training information, but either way this does not appear to currently be a bug." ]
1,706
1,706
1,706
NONE
null
### System Info ``` - `transformers` version: 4.37.0 - Platform: Linux-6.2.0-1017-aws-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ``` Note the above was run in a container on a different instance from the job compute, but with the same docker image. ### Who can help? @pacman100 @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Came across this using the SFT script in the [alignment handbook](https://github.com/huggingface/alignment-handbook). I can add more information but I think the relevant info is as follows: In terms of trainer args: ```yaml load_best_model_at_end: true num_train_epochs: 40 output_dir: /local_disk0/hf/outputs overwrite_output_dir: true resume_from_checkpoint: false save_on_each_node: true save_only_model: true save_steps: 1 save_strategy: "epoch" save_total_limit: 5 ``` Note that I am attempting to only save 5 models, but to keep track of the best model and load it at the end for saving. I also set `save_only_model` to `true` as I don't currently care to be able to actually load a checkpoint for continued training, and suspect this is the problem. Note that the output directory `/local_disk0/hf/outputs` is a directory path that _exists_ on each node, but is _not_ a shared filesystem/NFS (so each node contains its information in that path). My setup is distributed multi-gpu and multi-node via pdsh. 
```yaml compute_environment: LOCAL_MACHINE deepspeed_config: deepspeed_multinode_launcher: pdsh deepspeed_hostfile: {TRAIN_DIR}/hostfile deepspeed_config_file: {CONFIG_FILE} zero3_init_flag: true distributed_type: DEEPSPEED ``` I've tried to clean up the stacktrace, since I'm getting multiple (it appears to be one per rank) ``` File "/databricks/python3/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 315, in train main() File "/local_disk0/.ephemeral_nfs/training/alignment-handbook/scripts/run_sft.py", line 164, in main output = super().train(*args, **kwargs) train_result = trainer.train(resume_from_checkpoint=checkpoint)output = super().train(*args, **kwargs) File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train output = super().train(*args, **kwargs) File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 1972, in _inner_training_loop self._load_best_model() File "/databricks/python3/lib/python3.10/site-packages/transformers/trainer.py", line 2167, in _load_best_model deepspeed_load_checkpoint(self.model_wrapped, self.state.best_model_checkpoint) File "/databricks/python3/lib/python3.10/site-packages/transformers/integrations/deepspeed.py", line 408, in deepspeed_load_checkpoint raise ValueError(f"Can't find a valid checkpoint at {checkpoint_path}") ValueError: Can't find a valid checkpoint at /local_disk0/hf/outputs/checkpoint-29 ``` ### Expected behavior I expect the model to be loaded at the end, simply so that it can be saved (i.e. the motivation is to save the model with the best eval metric for use during inference). The error message indicates the `checkpoint` could not be found. I suspect that maybe it is expecting a full checkpoint including parameters, optimiser states, etc. so that training can continue, but am unsure. If this is the cause, this might be more of a feature request? Since it would be good to have a way to keep track of and save just the parameters of the iteration with the best eval metric.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28677/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28676/comments
https://api.github.com/repos/huggingface/transformers/issues/28676/events
https://github.com/huggingface/transformers/pull/28676
2,097,473,712
PR_kwDOCUB6oc5k6P-N
28,676
fix(tokenization): `encode` should remove leading batch axis for all types of single batch to keep consistent.
{ "login": "scruel", "id": 16933298, "node_id": "MDQ6VXNlcjE2OTMzMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scruel", "html_url": "https://github.com/scruel", "followers_url": "https://api.github.com/users/scruel/followers", "following_url": "https://api.github.com/users/scruel/following{/other_user}", "gists_url": "https://api.github.com/users/scruel/gists{/gist_id}", "starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scruel/subscriptions", "organizations_url": "https://api.github.com/users/scruel/orgs", "repos_url": "https://api.github.com/users/scruel/repos", "events_url": "https://api.github.com/users/scruel/events{/privacy}", "received_events_url": "https://api.github.com/users/scruel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ArthurZucker, as we discussed in issue #28635, instead of let `encode` directly support multiple batches, this PR is main to fix the error of existing behaviour for a single batch. Let's call this as the first step for achieving your plan.", "I may have few questions for the next step of your plan to deprecate some of the APIs to let `decode` supports both single and batch processing, along with some questions about this repo's code style, which place should I post them to ask? Or may I reach your team on your public discord server? Thanks.", "We can keep the discussion on the related issue πŸ€— ", "Ok, this PR can oly solve partial sections of the related issue and have some mistakes, so I closed it, will create another after I done." ]
1,706
1,707
1,706
CONTRIBUTOR
null
# What does this PR do? `encode` should remove leading batch axis for all types to keep consistent with `decode` method. Fixes #28635 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28676/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28676", "html_url": "https://github.com/huggingface/transformers/pull/28676", "diff_url": "https://github.com/huggingface/transformers/pull/28676.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28676.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28675/comments
https://api.github.com/repos/huggingface/transformers/issues/28675/events
https://github.com/huggingface/transformers/issues/28675
2,097,325,509
I_kwDOCUB6oc59AqXF
28,675
Swinv2ForImageClassification often outputs NaN at initialization
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[]
1,706
1,706
null
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py from transformers import Swinv2Config, Swinv2ForImageClassification import torch with torch.inference_mode(): cfg = Swinv2Config(image_size=56) model = Swinv2ForImageClassification(cfg) out = model(torch.rand(1, 3, 56, 56)) out.logits ``` ### Expected behavior Should not output NaN; throw an error if the implementation doesn't currently support a certain combination of architectural hyperparameters
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28675/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28674/comments
https://api.github.com/repos/huggingface/transformers/issues/28674/events
https://github.com/huggingface/transformers/issues/28674
2,097,305,722
I_kwDOCUB6oc59Alh6
28,674
Can not execute example in idefics-9b-instruct
{ "login": "ppsmk388", "id": 60417397, "node_id": "MDQ6VXNlcjYwNDE3Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/60417397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ppsmk388", "html_url": "https://github.com/ppsmk388", "followers_url": "https://api.github.com/users/ppsmk388/followers", "following_url": "https://api.github.com/users/ppsmk388/following{/other_user}", "gists_url": "https://api.github.com/users/ppsmk388/gists{/gist_id}", "starred_url": "https://api.github.com/users/ppsmk388/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ppsmk388/subscriptions", "organizations_url": "https://api.github.com/users/ppsmk388/orgs", "repos_url": "https://api.github.com/users/ppsmk388/repos", "events_url": "https://api.github.com/users/ppsmk388/events{/privacy}", "received_events_url": "https://api.github.com/users/ppsmk388/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @ppsmk388, thanks for raising an issue! \r\n\r\nI'm able to load the processor without issue. As the error indicates, this might be due to a connection issue. Could you re-run and try again? ", "Thank you, it is a network problem", "@ppsmk388 Thanks for confirming! " ]
1,706
1,706
null
NONE
null
### System Info tokenizers-0.15.1 transformers-4.37.0 python3.8.10 Linux ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I run example in https://huggingface.co/HuggingFaceM4/idefics-9b-instruct ``` import torch from transformers import IdeficsForVisionText2Text, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b" model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) processor = AutoProcessor.from_pretrained(checkpoint) # We feed to the model an arbitrary sequence of text strings and images. Images can be either URLs or PIL Images. prompts = [ [ "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG", "In this picture from Asterix and Obelix, we can see" ], ] # --batched mode inputs = processor(prompts, return_tensors="pt").to(device) # --single sample mode # inputs = processor(prompts[0], return_tensors="pt").to(device) # Generation args bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, bad_words_ids=bad_words_ids, max_length=100) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i, t in enumerate(generated_text): print(f"{i}:\n{t}\n") ``` I got: ``` OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like HuggingFaceM4/idefics-9b-instruct is not the path to a directory containing a file named processor_config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode' ``` in ``` `processor = AutoProcessor.from_pretrained(checkpoint) ``` ### Expected behavior Successful code execution
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28674/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28673/comments
https://api.github.com/repos/huggingface/transformers/issues/28673/events
https://github.com/huggingface/transformers/pull/28673
2,097,044,260
PR_kwDOCUB6oc5k41uZ
28,673
Phi-2 requires a disabled autocast in attention layer
{ "login": "gugarosa", "id": 4120639, "node_id": "MDQ6VXNlcjQxMjA2Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gugarosa", "html_url": "https://github.com/gugarosa", "followers_url": "https://api.github.com/users/gugarosa/followers", "following_url": "https://api.github.com/users/gugarosa/following{/other_user}", "gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions", "organizations_url": "https://api.github.com/users/gugarosa/orgs", "repos_url": "https://api.github.com/users/gugarosa/repos", "events_url": "https://api.github.com/users/gugarosa/events{/privacy}", "received_events_url": "https://api.github.com/users/gugarosa/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thanks for the PR! We are not super super fans of context managers for such things. TBH it's not that bad! cc @amyeroberts what's your take? \r\n", "Thanks for adding this fix @gugarosa! \r\n\r\nI don't mind this too much, it's pretty clean and simple :) Let's get @younesbelkada's opinion on whether this will break any other assumptions about weight loading in the library and possible alternatives", "No problems, thanks everyone for looking at it! Hopefully this is a one-time behavior and we will never see it again on future models πŸ™ ", "Hi @gugarosa , seems like we are still having loss issue: https://github.com/huggingface/transformers/issues/28488#issuecomment-1940449050\r\n\r\n**Update**: Ignore my comment - Apparently, my new installation of transformers didn't with your changes so same loss curves are expected. I tried to rerun training with changes in your PR and training failed: \r\n```\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 16, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 16, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/home/minimalist/work/projects/transformers/src/transformers/models/phi/modeling_phi.py\", line 318, in forward\r\n query_states = self.q_proj(hidden_states)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/minimalist/miniconda3/envs/axolotl_Feb12/lib/python3.9/site-packages/torch/nn/modules/linear.py\", line 114, in forward\r\n return F.linear(input, self.weight, self.bias)\r\nRuntimeError: mat1 and mat2 must have the same dtype, but got Float and BFloat16\r\n```" ]
1,706
1,707
null
CONTRIBUTOR
null
# What does this PR do? Phi-2 has an attention overflow issue, and since the model weights were released with a MIT license, there is no short-term solution in replacing them (re-training the model). Therefore, the only solution we could find to cover all corner cases regarding the overflow, is to also disable the autocast in the attention layer. This update follows the current [model file](https://huggingface.co/microsoft/phi-2/blob/main/modeling_phi.py) we have on `microsoft/phi-2` repository. Additionally, it follows the [previous solution](https://huggingface.co/microsoft/phi-2/blob/834565c23f9b28b96ccbeabe614dd906b6db551a/modeling_phi.py#L347) we had done before the Phi integration. Please let me know if we can think of any different solutions, or if there is anything else we can do. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @susnato @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28673/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28673", "html_url": "https://github.com/huggingface/transformers/pull/28673", "diff_url": "https://github.com/huggingface/transformers/pull/28673.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28673.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28672/comments
https://api.github.com/repos/huggingface/transformers/issues/28672/events
https://github.com/huggingface/transformers/issues/28672
2,096,918,999
I_kwDOCUB6oc58_HHX
28,672
GPT2 cannot be used with device_map='auto'; Report "found at least two devices"
{ "login": "haobozhang", "id": 56833210, "node_id": "MDQ6VXNlcjU2ODMzMjEw", "avatar_url": "https://avatars.githubusercontent.com/u/56833210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haobozhang", "html_url": "https://github.com/haobozhang", "followers_url": "https://api.github.com/users/haobozhang/followers", "following_url": "https://api.github.com/users/haobozhang/following{/other_user}", "gists_url": "https://api.github.com/users/haobozhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/haobozhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haobozhang/subscriptions", "organizations_url": "https://api.github.com/users/haobozhang/orgs", "repos_url": "https://api.github.com/users/haobozhang/repos", "events_url": "https://api.github.com/users/haobozhang/events{/privacy}", "received_events_url": "https://api.github.com/users/haobozhang/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Finally, the checkpoint got completed after 2 hours on an Instance with 8 TPU cores, 350GB RAM, and 98 vCPUs. I was wondering, how can I make it faster. I understand that the library download the model params from TPU to CPU, and then save them. \r\n\r\nAm I missing some params that can make it faster to take checkpoints?", "Hi @haobozhang, thanks for raising this issue and apologies for the delay. \r\n\r\nPinging @muellerzr and @pacman100, as this seems to be an issue with weight offloading. " ]
1,706
1,707
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.9.18 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction A simple reproducer here: ```python from transformers import GPT2LMHeadModel # create a sample input: batch_ids = { 'input_ids': torch.tensor([[312, 134, 56, 712, 351, 89, 63, 550, 971, 2]]), 'attention_mask': torch.tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), } gpt2_large = GPT2LMHeadModel.from_pretrained('gpt2-large', cache_dir='./cache_dir', device_map='auto') gpt2 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir='./cache_dir', device_map='auto') loss_gpt2_large = gpt2_large(**batch_ids, labels=batch_ids['input_ids']).loss loss_gpt2 = gpt2(**batch_ids, labels=batch_ids['input_ids']).loss ``` ### Expected behavior It works well to generate `loss_gpt2_large`, but it will report error when generating `loss_gpt2`: `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)` I am not sure why this behaves differently with the same model class. Could you please provide any comments on this? Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28672/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28671/comments
https://api.github.com/repos/huggingface/transformers/issues/28671/events
https://github.com/huggingface/transformers/issues/28671
2,096,875,487
I_kwDOCUB6oc58-8ff
28,671
Issue with finetuning Mixtral w/ deepspeed after new release
{ "login": "sam-h-bean", "id": 43734688, "node_id": "MDQ6VXNlcjQzNzM0Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/43734688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-h-bean", "html_url": "https://github.com/sam-h-bean", "followers_url": "https://api.github.com/users/sam-h-bean/followers", "following_url": "https://api.github.com/users/sam-h-bean/following{/other_user}", "gists_url": "https://api.github.com/users/sam-h-bean/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-h-bean/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-h-bean/subscriptions", "organizations_url": "https://api.github.com/users/sam-h-bean/orgs", "repos_url": "https://api.github.com/users/sam-h-bean/repos", "events_url": "https://api.github.com/users/sam-h-bean/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-h-bean/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sam-h-bean, thanks for raising an issue! \r\n\r\nSo that we can best help you, can you make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and include all important information such as: \r\n* The running environment: run `transformers-cli env` in the terminal and copy-paste the output. For example 'latest' isn't descriptive - it could mean dev branch or 4.37\r\n* A minimal code snippet we can run to reproduce the error\r\n* Full details of the error encountered, including full traceback\r\n\r\ncc @ArthurZucker @pacman100 ", "Have same issue with latest transformers, how did you resolve the issue\r\n" ]
1,706
1,706
1,706
CONTRIBUTOR
null
### System Info transformers: latest env: ray + deepspeed on k8s There seems to be an issue with Mixtral on the latest transformers release that manifests like ``` RuntimeError: Detected mismatch between collectives on ranks. Rank 4 is running collective: CollectiveFingerPrint(SequenceNumber=724883, OpType=_ALLGATHER_BASE, TensorShape=[4194305], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=724883, OpType=_ALLGATHER_BASE, TensorShape=[699051], TensorDtypes=BFloat16, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))).Collectives differ in the following aspects: Tensor Tensor shapes: 4194305vs 699051 ``` When I pin transformers to 4.36.2 the issue goes away. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Finetune mixtral w/ deepspeed + accelerate my ds_config is like so ```json { "fp16": { "enabled": false }, "bf16": { "enabled": true }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": false }, "overlap_comm": true, "contiguous_gradients": true, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "gather_16bit_weights_on_model_save": true, "round_robin_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 10, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false, "comms_logger": { "enabled": true, "verbose": true, "prof_all": true, "debug": true } } ``` 2. ### Expected behavior The tensors are the correct shape
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28671/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28670/comments
https://api.github.com/repos/huggingface/transformers/issues/28670/events
https://github.com/huggingface/transformers/issues/28670
2,096,803,973
I_kwDOCUB6oc58-rCF
28,670
OSError: Can't load tokenizer for fine-tuned model
{ "login": "ccruttjr", "id": 146245010, "node_id": "U_kgDOCLeFkg", "avatar_url": "https://avatars.githubusercontent.com/u/146245010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccruttjr", "html_url": "https://github.com/ccruttjr", "followers_url": "https://api.github.com/users/ccruttjr/followers", "following_url": "https://api.github.com/users/ccruttjr/following{/other_user}", "gists_url": "https://api.github.com/users/ccruttjr/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccruttjr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccruttjr/subscriptions", "organizations_url": "https://api.github.com/users/ccruttjr/orgs", "repos_url": "https://api.github.com/users/ccruttjr/repos", "events_url": "https://api.github.com/users/ccruttjr/events{/privacy}", "received_events_url": "https://api.github.com/users/ccruttjr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, you mentioned that:\r\n>The directory for the saved model is ./saved/ and the file contents are config.json generation_config.json model-00001-of-00002.safetensors model-00002-of-00002.safetensors model.safetensors model.safetensors.index.json \r\n\r\nthere does not seem to be any tokenizer files here no? ", "Yep. Realized fairly quickly I needed to add something like that\r\n\r\n```python\r\n if accelerator.is_main_process:\r\n print(\"saving tokenizer\")\r\n # Saving the tokenizer\r\n tokenizer.save_pretrained(save_location)\r\n print(\"saved tokenizer\")\r\n```\r\nseems to be resolved" ]
1,706
1,706
1,706
NONE
null
### System Info **I promise you this issue isn't as long as it seems.** (It's long because I included a lot of context below just in case it was needed) Hello! I fine-tuned a the gpt2-xl model on some custom data and saved the model. The directory for the saved model is `./saved/` and the file contents are `config.json generation_config.json model-00001-of-00002.safetensors model-00002-of-00002.safetensors model.safetensors model.safetensors.index.json`. Let me know if there's more information I can provide other than what's below. When attempting to use the fine-tuned model for text generation, I ran into an error running this: ```python model = AutoModelForCausalLM.from_pretrained(model_path) ``` getting ``` Traceback (most recent call last): File "/home/username/NCAI/inference.py", line 40, in <module> main() File "/home/username/NCAI/inference.py", line 32, in main model, tokenizer = load_model(model_path) ^^^^^^^^^^^^^^^^^^^^^^ File "/home/username/NCAI/inference.py", line 9, in load_model model = AutoModelForCausalLM.from_pretrained(model_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained return model_class.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3371, in from_pretrained with safe_open(resolved_archive_file, framework="pt") as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization ``` I changed how I loaded up the model via ```python config = AutoConfig.from_pretrained(model_path) model = AutoModelForCausalLM.from_config(config) ``` which the script got through! Which is why I didn't put that issue in the title. But... I ran into an error right after with this: ```python tokenizer = AutoTokenizer.from_pretrained(model_path) # as well as this tokenizer = GPT2Tokenizer.from_pretrained(model_path) ``` giving ``` Traceback (most recent call last): File "/home/username/NCAI/inference.py", line 40, in <module> main() File "/home/username/NCAI/inference.py", line 32, in main model, tokenizer = load_model(model_path) ^^^^^^^^^^^^^^^^^^^^^^ File "/home/username/NCAI/inference.py", line 12, in load_model tokenizer = AutoTokenizer.from_pretrained(model_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/username/miniconda3/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 805, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/daimyollc/miniconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2012, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for './saved/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './saved/' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer. 
``` For some extra info, here is my config.json and generation_config.json ```json { "_name_or_path": "gpt2-xl", "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 1600, "n_head": 25, "n_inner": null, "n_layer": 48, "n_positions": 1024, "output_past": true, "reorder_and_upcast_attn": false, "resid_pdrop": 0.1, "scale_attn_by_inverse_layer_idx": false, "scale_attn_weights": true, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 50 } }, "torch_dtype": "float32", "transformers_version": "4.36.2", "use_cache": true, "vocab_size": 50257 } ``` ```json { "_from_model_config": true, "bos_token_id": 50256, "eos_token_id": 50256, "transformers_version": "4.36.2" } ``` Here is the code that fine-tuned and saved the new model ```python # 1. Have Transformer's determine the best tokenizer for the given model # 2. Convert XML to readable dataset. Have the first GPU run it first so multiple GPUs aren't trying to edit the XML at # the same time # 3. Set the max length and padding of each eConsult and how wewant to tokenize the dataset # 4. Split dataset into training dataset and eval 80/20 # 5. Distribute tokenized datasets across multiple GPUs as to not run out of memory # 6. Create/return dataloader with the given data for the trainer to use def get_dataloaders(accelerator: Accelerator, batch_size, model_name, data_location): # 1 tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token # 2 with accelerator.main_process_first(): dataset = Dataset.from_pandas(process_dataset(data_location)) # 3 def tokenize_function(examples): return tokenizer(examples["conversation"], padding="max_length", truncation=True, max_length=256) with accelerator.main_process_first(): tokenized_dataset = dataset.map(tokenize_function, batched=True) tokenized_dataset.set_format( "torch", columns=["input_ids", "attention_mask"]) # 4 split_datasets = tokenized_dataset.train_test_split(test_size=0.2) tokenized_train_dataset = split_datasets["train"] tokenized_eval_dataset = split_datasets["test"] # 5 train_sampler = DistributedSampler( tokenized_train_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=True ) eval_sampler = DistributedSampler( tokenized_eval_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=False ) # 6 train_dataloader = DataLoader( tokenized_train_dataset, batch_size=batch_size, drop_last=True, sampler=train_sampler ) eval_dataloader = DataLoader( tokenized_eval_dataset, batch_size=batch_size*2, drop_last=(accelerator.mixed_precision == "fp8"), sampler=eval_sampler ) return train_dataloader, eval_dataloader # 1. Initialize accelerator with mixed percision and define training parameters via arguments given in command line # 2. Sets seed (if given as a command line argument) for reproducability # 3. Get dataloaders # 4. Initialize more training perameters and "prepare"/optimize them via Accelerate # 5. Train/fine-tune model with new data & set parameters using FSDP # 6. Evaluate quality of trainer for that epoch # 7. 
Have the first GPU save the newly fine-tuned dataset def training_function(args): # 1 accelerator = Accelerator(mixed_precision=args.mixed_precision) lr = args.lr num_epochs = args.num_epochs batch_size = args.batch_size num_warmup_steps = args.num_warmup_steps # 2 if args.seed: set_seed(args.seed) # 3 train_dataloader, eval_dataloader = get_dataloaders( accelerator, batch_size, args.model_name, args.data_location) # 4 # Instantiate the model (we build the model here so that the seed also control new weights initialization) model = AutoModelForCausalLM.from_pretrained(args.model_name) model = accelerator.prepare(model) optimizer = AdamW(params=model.parameters(), lr=lr) # Instantiate scheduler lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=(len(train_dataloader) * num_epochs), ) # Prepare everything # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the # prepare method. optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # Initialize logging variables total_train_loss = 0 total_eval_loss = 0 # 5 # Now we train the model for epoch in range(num_epochs): model.train() total_train_loss = 0 for batch in tqdm(train_dataloader, desc="Training"): with accelerator.accumulate(model): # Process the batch inputs = {k: v.to(accelerator.device) for k, v in batch.items()} if "labels" not in inputs: inputs["labels"] = inputs["input_ids"] outputs = model(**inputs) loss = outputs.loss total_train_loss += loss.item() accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() accelerator.wait_for_everyone() # 6 # Evaluation loop after each training epoch model.eval() total_eval_loss = 0 for batch in tqdm(eval_dataloader, "Evaluating"): with torch.no_grad(): inputs = {k: v.to(accelerator.device) for k, v in batch.items()} if "labels" not in inputs: inputs["labels"] = inputs["input_ids"] outputs = model(**inputs) loss = outputs.loss total_eval_loss += loss.item() # Log the average losses avg_train_loss = total_train_loss / len(train_dataloader) avg_eval_loss = total_eval_loss / len(eval_dataloader) print( f"Epoch: {epoch}, Average Training Loss: {avg_train_loss}, Average Evaluation Loss: {avg_eval_loss}") accelerator.wait_for_everyone() # 7 accelerator.wait_for_everyone() accelerator.print("saving") accelerator.unwrap_model(model).save_pretrained( "./saved_1000", is_main_process=accelerator.is_main_process, save_function=accelerator.save, state_dict=accelerator.get_state_dict(model), ) def main(): args = parse_args() training_function(args) if __name__ == "__main__": start = time() main() print(f"Total Execution Time: {time() - start} seconds") ``` ``` $ transformers-cli env - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: FSDP - mixed_precision: fp16 - use_cpu: False - debug: False - num_processes: 6 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - fsdp_config: { 'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP',' 'fsdp_backward_prefetch': 'BACKWARD_PRE', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 
'fsdp_sharding_strategy': 'FULL_SHARD', 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': True } - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```bash $ # create XML file with data we want to use $ conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia $ conda install transformers accelerate datasets $ pip install bs4 pandas tqdm $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh $ # https://developer.nvidia.com/cuda-zone $ # ran the fine-tuning file $ python inference.py ``` ### Expected behavior For this ```python tokenizer = GPT2Tokenizer.from_pretrained(model_path) ``` to not fail!
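As the resolution noted in the comments, the missing piece was saving the tokenizer alongside the model weights. A minimal sketch, assuming `tokenizer`, `model`, and `accelerator` are the objects from the training script quoted above and `./saved` is the output directory from the report:

```python
save_location = "./saved"  # assumed path from the report

# Save the tokenizer next to the model weights so that from_pretrained() can find both later.
if accelerator.is_main_process:
    tokenizer.save_pretrained(save_location)  # writes tokenizer_config.json, vocab.json, merges.txt, ...

accelerator.unwrap_model(model).save_pretrained(
    save_location,
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    state_dict=accelerator.get_state_dict(model),
)

# Reloading then works from the same directory:
# model = AutoModelForCausalLM.from_pretrained(save_location)
# tokenizer = AutoTokenizer.from_pretrained(save_location)
```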
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28670/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28669/comments
https://api.github.com/repos/huggingface/transformers/issues/28669/events
https://github.com/huggingface/transformers/pull/28669
2,096,778,888
PR_kwDOCUB6oc5k37a3
28,669
Use save_safetensor to disable safe serialization for XLA
{ "login": "jeffhataws", "id": 56947987, "node_id": "MDQ6VXNlcjU2OTQ3OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffhataws", "html_url": "https://github.com/jeffhataws", "followers_url": "https://api.github.com/users/jeffhataws/followers", "following_url": "https://api.github.com/users/jeffhataws/following{/other_user}", "gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions", "organizations_url": "https://api.github.com/users/jeffhataws/orgs", "repos_url": "https://api.github.com/users/jeffhataws/repos", "events_url": "https://api.github.com/users/jeffhataws/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffhataws/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts @muellerzr @Narsil will you be able to back-port to 4.37? This is needed for Neuron SDK." ]
1,706
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Safetensor serialization is now default but not yet supported by XLA. This change uses save_safetensor argument to disable safe serialization for XLA as a workaround until XLA catches up. Fixes # (issue) https://github.com/huggingface/transformers/issues/28438 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @muellerzr and @pacman100
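For anyone hitting the linked issue before this change lands, a minimal sketch of the manual workaround, using the existing `save_safetensors` training argument; the output directory is a placeholder:

```python
from transformers import TrainingArguments

# Sketch only: explicitly fall back to the legacy torch serialization on XLA/TPU,
# mirroring what this PR does automatically once merged.
training_args = TrainingArguments(
    output_dir="./output",
    save_safetensors=False,
)
```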
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28669", "html_url": "https://github.com/huggingface/transformers/pull/28669", "diff_url": "https://github.com/huggingface/transformers/pull/28669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28669.patch", "merged_at": 1706097465000 }
https://api.github.com/repos/huggingface/transformers/issues/28668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28668/comments
https://api.github.com/repos/huggingface/transformers/issues/28668/events
https://github.com/huggingface/transformers/pull/28668
2,096,654,889
PR_kwDOCUB6oc5k3f6k
28,668
Add W2V2 example to CTC training readme
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,706
1,706
null
COLLABORATOR
null
# What does this PR do? This PR adds a W2V2-Bert training example config to the CTC folder. This might be a bit light; I can add another training config example on TIMIT or Turkish CV tomorrow if needed. cc @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28668/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28668", "html_url": "https://github.com/huggingface/transformers/pull/28668", "diff_url": "https://github.com/huggingface/transformers/pull/28668.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28668.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28667/comments
https://api.github.com/repos/huggingface/transformers/issues/28667/events
https://github.com/huggingface/transformers/pull/28667
2,096,522,393
PR_kwDOCUB6oc5k3CxJ
28,667
ENH: added new output_logits option to generate function
{ "login": "mbaak", "id": 11329693, "node_id": "MDQ6VXNlcjExMzI5Njkz", "avatar_url": "https://avatars.githubusercontent.com/u/11329693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbaak", "html_url": "https://github.com/mbaak", "followers_url": "https://api.github.com/users/mbaak/followers", "following_url": "https://api.github.com/users/mbaak/following{/other_user}", "gists_url": "https://api.github.com/users/mbaak/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbaak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbaak/subscriptions", "organizations_url": "https://api.github.com/users/mbaak/orgs", "repos_url": "https://api.github.com/users/mbaak/repos", "events_url": "https://api.github.com/users/mbaak/events{/privacy}", "received_events_url": "https://api.github.com/users/mbaak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @mbaak πŸ‘‹ Thank you for opening the PR πŸ€— \r\n\r\nI haven't seen a request for this feature (or I have no recollection of it). Personally, I am also not very convinced that it is very useful -- at the end of the day, the model will select the next token _after_ applying the logits processors, if there are any. Since merging this PR would increase the complexity of our codebase AND I'm not convinced, I won't be merging it for now.\r\n\r\nHowever, I may have a poor perception of this feature's usefulness. I'm going to do my standard bargain: if this comment gets 20 reactions, it means there are users looking for this feature, in which case I will add the feature πŸ€— (Whoever does the 20th reaction, please tag me)", "Hi @gante,\r\n\r\nLet me try to explain the use-case better, my description was a bit short and perhaps unclear. \r\n(Indeed for next-token-generation the raw logits are not relevant.) \r\n\r\nI'm using a RAG setup for question-answering on large documents. (For example, a document describes a project, and a question can be: what's the location of the project?) As causallm model I'm using llama2 (13B) with default settings, meaning with logit processing and warping turned on. The answers are generated as usual, so based on the processed logits to generate the best possible next tokens.\r\n\r\nHowever, these answers need to be reviewed for mistakes/hallucinations. So we want a confidence score for each answer - one that helps in the review. A causallm model does not provide that of course, just generated tokens, so we got to be creative and came up with a possible solution. \r\n\r\n(We find this setup with an LLM is significantly more accurate than a RAG using dedicated bert-based QA models, which do provide scores.) \r\n(One can prompt a causallm model to generate a confidence score, but it seems that only works for O(100)B parameter models, not for smaller ones.)\r\n\r\nWe do the following: \r\nAfter each answer the same instantiated model is queried again and asked: \"Given the context is the provided answer correct? Yes or No.\" We are then interested in the relative probability: P(Yes|text) / (P(No|text) + P(Yes|text)). \r\nInterestingly, and perhaps surprisingly, in practice this ratio turns out to be a pretty reliable confidence score, ie. closer to zero is more inaccurate and closer to one is more accurate. \r\n\r\nBut for this to work we need the unprocessed logits. B/c after warping (and using this query) normally only one token remains, yes or no, which has the (renormalized) probability 1, so the confidence score is always 0 or 1. That does not help us.\r\n\r\nSo I'm using a causallm model in a somewhat unconventional way. But one that I believe is very useful, in the sense that it provides a (much needed!) functionality that is otherwise missing for RAGs (as far as I have seen).\r\n\r\nHope this makes it more clear! πŸ™‚\r\n\r\nThere may be a different way to do this, but I'm afraid I am not aware of it. Hence the PR for unprocessed logits, which for the reason above I think this would a useful addition to have. ", "@gante This feature was requested earlier: https://github.com/huggingface/transformers/issues/17521", "@sbrugman Thanks for the reminder. \r\n@gante Does that change you mind? :-) (In the other thread you are okay with exactly this.) ", "This would be very beneficial for my current project, I would love this PR to be merged. 
", "upvote", "It seems I was wrong, and several people do want it :) Reverting my decision, I'll review the PR so as to include the feature.\r\n\r\n@mbaak indeed, with sampling the `top_k` argument (active by default) erases a substantial part of the logits signal.\r\n\r\n@vwxyzjn given your comment in the other issue, this might be useful to you :)", "Thanks for the comments! I'll pick it up.", "@gante I've implemented your feedback, the code should be good to go I think!", "@mbaak To make our CI go green, you will need to:\r\n1. rebase with the latest `main`\r\n2. run `make fixup` on your terminal, within the transformers folder\r\n3. force push the changes", "@gante Done!", "@mbaak there seems a missing `output_logits` somewhere in the code, CI is complaining :D", "@gante Fyi I'm working on fixing the tests. (By default they're skipped locally, that's why I missed them earlier.)", "@gante I fixed the CI tests (I had only run the generation/ tests locally, not the model ones), and have implemented @amyeroberts' feedback. I think it's good to go now!", "@mbaak thank you for iterating and making the library richer πŸ’› " ]
1,706
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? output_logits option behaves like output_scores, but returns the raw, unprocessed prediction logit scores, ie. the values before they undergo logit processing and/or warping. The latter happens by default for the regular output scores. It's useful to have the unprocessed logit scores in certain circumstances. For example, unprocessed logit scores are very useful with causallm models when one wants to determine the probability of a certain answer, e.g. when asking a question with a yes/no answer. In that case getting the next-token probabilities of both "yes" and "no" (and/or their relative ratio) is of interest for classification. The reason for getting these _before_ logit processing and/or warping is b/c a) that can change the probabilities or b) reject the tokens of interest / reduce the number of tokens to just 1. In practice this can be used to generate confidence / classification scores when eg. using causallm models for question-answering tasks. Query your language model with: "Is the {statement} correct? Answer yes or no:", take the raw logit scores and softmax them, and calculate the score: prob(yes) / (prob(yes) + prob(no)) to get a useful classification score. For an example use-case see paper TabLLM: Few-shot Classification of Tabular Data with Large Language Models by Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. https://arxiv.org/abs/2210.10723 In addition: - added dedicated unit test: tests/generation/test_utils/test_return_unprocessed_logit_scores which tests return of logics with output_logits=True in generation. - set output_logits=True in all other generation unit tests, that also have output_scores=True. Fixes # (issue) NA ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. (Yes, I've seen it discussed but now cannot refind the link.) - [ X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ X ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - generate: @gante - text models: @ArthurZucker and @younesbelkada
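A short sketch of how the proposed flag could be used for the yes/no scoring use-case described above; `model`, `inputs`, and the token ids are placeholders, not values taken from this PR:

```python
import torch

# Hypothetical usage of the proposed `output_logits` flag; the ids below are
# illustrative placeholders for the "yes" / "no" tokens of a given tokenizer.
out = model.generate(
    **inputs,
    max_new_tokens=1,
    return_dict_in_generate=True,
    output_logits=True,
)
raw_logits = out.logits[0]                 # unprocessed logits for the first generated token
probs = torch.softmax(raw_logits, dim=-1)
yes_id, no_id = 1234, 5678                 # placeholder vocabulary ids
confidence = probs[0, yes_id] / (probs[0, yes_id] + probs[0, no_id])
```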
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28667/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28667", "html_url": "https://github.com/huggingface/transformers/pull/28667", "diff_url": "https://github.com/huggingface/transformers/pull/28667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28667.patch", "merged_at": 1708364057000 }
https://api.github.com/repos/huggingface/transformers/issues/28666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28666/comments
https://api.github.com/repos/huggingface/transformers/issues/28666/events
https://github.com/huggingface/transformers/pull/28666
2,096,483,980
PR_kwDOCUB6oc5k26Q0
28,666
Improve Backbone API docs
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts I don't have write access in this repository, can you merge this? ", "@merveenoyan Yep! " ]
1,706
1,706
1,706
CONTRIBUTOR
null
I improved the wording of the Backbone API docs and added a new illustration.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28666/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28666", "html_url": "https://github.com/huggingface/transformers/pull/28666", "diff_url": "https://github.com/huggingface/transformers/pull/28666.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28666.patch", "merged_at": 1706183519000 }
https://api.github.com/repos/huggingface/transformers/issues/28665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28665/comments
https://api.github.com/repos/huggingface/transformers/issues/28665/events
https://github.com/huggingface/transformers/pull/28665
2,096,464,018
PR_kwDOCUB6oc5k21yg
28,665
Remove deprecated eager_serving fn
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,706
1,706
1,706
MEMBER
null
The `eager_serving` method on our TF models was deprecated some time ago, and can now be removed - it was never part of the public API anyway! EDIT: Throwing in a quick fix to the nearby `input_signature` docstring while I'm here
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28665", "html_url": "https://github.com/huggingface/transformers/pull/28665", "diff_url": "https://github.com/huggingface/transformers/pull/28665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28665.patch", "merged_at": 1706028787000 }
https://api.github.com/repos/huggingface/transformers/issues/28664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28664/comments
https://api.github.com/repos/huggingface/transformers/issues/28664/events
https://github.com/huggingface/transformers/pull/28664
2,096,319,075
PR_kwDOCUB6oc5k2WrA
28,664
Introduce AcceleratorConfig dataclass
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" } ]
closed
false
null
[]
[ "@amyeroberts tests finally pass after losing my mind on imports πŸ™Œ ", "Hi @muellerzr , could you please get this PR merged? We need the PR to get the accelerate test on TPU v3 to succeed again. Thanks.", "As a general note, immediately after the next accelerate release I'll make a follow-up PR utilizing the config class seen here so users won't have annoying FutureWarning's https://github.com/huggingface/accelerate/pull/2441 (No big logic difference, just shove it all into the config rather than the Accelerator based on the accelerate version)", "For the doc tests, there was a recent commit merged to main, which should hopefully resolve this. \r\n\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28664). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,706
1,708
1,707
CONTRIBUTOR
null
# What does this PR do? This PR centralizes all arguments for the `Accelerator` not covered by `fsdp_config` and `deepspeed_config` into a singular dataclass that users can pass in as a json file or through raw CLI param args. I *think* I have the CLI args configured right? But I'm not 100% sure. Advice on how to check that would be appreciated! Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28664/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28664", "html_url": "https://github.com/huggingface/transformers/pull/28664", "diff_url": "https://github.com/huggingface/transformers/pull/28664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28664.patch", "merged_at": 1707923889000 }
https://api.github.com/repos/huggingface/transformers/issues/28663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28663/comments
https://api.github.com/repos/huggingface/transformers/issues/28663/events
https://github.com/huggingface/transformers/issues/28663
2,096,313,865
I_kwDOCUB6oc588zYJ
28,663
How to set stopping criteria in model.generate() when a certain word appears
{ "login": "pradeepdev-1995", "id": 41164884, "node_id": "MDQ6VXNlcjQxMTY0ODg0", "avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pradeepdev-1995", "html_url": "https://github.com/pradeepdev-1995", "followers_url": "https://api.github.com/users/pradeepdev-1995/followers", "following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}", "gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions", "organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs", "repos_url": "https://api.github.com/users/pradeepdev-1995/repos", "events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}", "received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports." ]
1,706
1,706
null
NONE
null
### Feature request Stopping criteria in model.generate() when a certain word appears. The word I need to stop the generation on when it is found is: [/SENTENCE] But the model doesn't generate the word itself; instead, it generates the subwords [ [/, SEN, TE, NC, E] ]. The corresponding ids from the tokenizer are (id and subword): 28792 => [ 28748 => / 28759 => SEN 2654 => TE 1197 => NC 28793 => E] So how can I put a condition in **StoppingCriteriaList** that stops the generation when [/SENTENCE] is found? ### Motivation Stopping criteria in model.generate() when a certain word appears. The word I need to stop the generation on when it is found is: [/SENTENCE] But the model doesn't generate the word itself; instead, it generates the subwords [ [/, SEN, TE, NC, E] ]. The corresponding ids from the tokenizer are (id and subword): 28792 => [ 28748 => / 28759 => SEN 2654 => TE 1197 => NC 28793 => E] So how can I put a condition in **StoppingCriteriaList** that stops the generation when [/SENTENCE] is found? ### Your contribution Stopping criteria in model.generate() when a certain word appears. The word I need to stop the generation on when it is found is: [/SENTENCE] But the model doesn't generate the word itself; instead, it generates the subwords [ [/, SEN, TE, NC, E] ]. The corresponding ids from the tokenizer are (id and subword): 28792 => [ 28748 => / 28759 => SEN 2654 => TE 1197 => NC 28793 => E] So how can I put a condition in **StoppingCriteriaList** that stops the generation when [/SENTENCE] is found?
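One possible approach, sketched here rather than taken from a maintainer reply: subclass `StoppingCriteria` and compare the tail of the generated ids against the sub-word id sequence listed above.

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokenSequence(StoppingCriteria):
    """Stop generation once a given sequence of token ids appears at the end of the output."""

    def __init__(self, stop_ids):
        self.stop_ids = torch.tensor(stop_ids)

    def __call__(self, input_ids, scores, **kwargs):
        # Sketch: only checks the first sequence in the batch.
        if input_ids.shape[-1] < len(self.stop_ids):
            return False
        tail = input_ids[0, -len(self.stop_ids):].to("cpu")
        return bool(torch.equal(tail, self.stop_ids))

# ids reported above for the pieces of "[/SENTENCE]"
stop_ids = [28792, 28748, 28759, 2654, 1197, 28793]
stopping_criteria = StoppingCriteriaList([StopOnTokenSequence(stop_ids)])

# outputs = model.generate(**inputs, stopping_criteria=stopping_criteria)
```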
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28663/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28662/comments
https://api.github.com/repos/huggingface/transformers/issues/28662/events
https://github.com/huggingface/transformers/issues/28662
2,096,100,605
I_kwDOCUB6oc587_T9
28,662
Training of GPT2 hang during Checkpoint stage
{ "login": "jchauhan", "id": 74857, "node_id": "MDQ6VXNlcjc0ODU3", "avatar_url": "https://avatars.githubusercontent.com/u/74857?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchauhan", "html_url": "https://github.com/jchauhan", "followers_url": "https://api.github.com/users/jchauhan/followers", "following_url": "https://api.github.com/users/jchauhan/following{/other_user}", "gists_url": "https://api.github.com/users/jchauhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/jchauhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchauhan/subscriptions", "organizations_url": "https://api.github.com/users/jchauhan/orgs", "repos_url": "https://api.github.com/users/jchauhan/repos", "events_url": "https://api.github.com/users/jchauhan/events{/privacy}", "received_events_url": "https://api.github.com/users/jchauhan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Would recommend you to check this #26724 and try the solution, might be that or if the saving does not work, concurrency there. Code was recently changed cc @muellerzr πŸ€— " ]
1,706
1,706
null
NONE
null
### System Info **Env** ``` - `transformers` version: 4.38.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: TPU - Using distributed or parallel set-up in script?: xla_spwn script GCP TPU v2.8 Architecture ``` **Libraries installed** ``` absl-py 2.1.0 accelerate 0.26.1 aiohttp 3.9.1 aiosignal 1.3.1 annotated-types 0.6.0 asttokens 2.4.1 async-timeout 4.0.3 attrs 23.2.0 bitsandbytes 0.42.0 cachetools 5.3.2 certifi 2023.11.17 charset-normalizer 3.3.2 cloud-tpu-client 0.10 datasets 2.16.1 decorator 5.1.1 deepspeed 0.13.0 dill 0.3.7 evaluate 0.4.1 exceptiongroup 1.2.0 executing 2.0.1 filelock 3.13.1 frozenlist 1.4.1 fsspec 2023.10.0 google-api-core 1.34.0 google-api-python-client 1.8.0 google-auth 2.26.2 google-auth-httplib2 0.2.0 googleapis-common-protos 1.62.0 hjson 3.1.0 httplib2 0.22.0 huggingface-hub 0.20.3 idna 3.6 install 1.3.5 ipython 8.20.0 jedi 0.19.1 Jinja2 3.1.3 joblib 1.3.2 libtpu-nightly 0.1.dev20230825+default loralib 0.1.2 MarkupSafe 2.1.4 matplotlib-inline 0.1.6 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.15 networkx 3.2.1 ninja 1.11.1.1 numpy 1.26.3 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu12 2.18.1 nvidia-nvjitlink-cu12 12.3.101 nvidia-nvtx-cu12 12.1.105 oauth2client 4.1.3 packaging 23.2 pandas 2.2.0 parso 0.8.3 peft 0.7.2.dev0 pexpect 4.9.0 pillow 10.2.0 pip 21.2.3 prompt-toolkit 3.0.43 protobuf 3.20.3 psutil 5.9.8 ptyprocess 0.7.0 pure-eval 0.2.2 py-cpuinfo 9.0.0 pyarrow 15.0.0 pyarrow-hotfix 0.6 pyasn1 0.5.1 pyasn1-modules 0.3.0 pydantic 2.5.3 pydantic_core 2.14.6 Pygments 2.17.2 pynvml 11.5.0 pyparsing 3.1.1 python-dateutil 2.8.2 pytz 2023.3.post1 PyYAML 6.0.1 regex 2023.12.25 requests 2.31.0 responses 0.18.0 rsa 4.9 safetensors 0.4.2 scikit-learn 1.4.0 scipy 1.12.0 setuptools 57.4.0 six 1.16.0 sklearn 0.0 stack-data 0.6.3 sympy 1.12 threadpoolctl 3.2.0 tokenizers 0.15.1 torch 2.1.2 torch-xla 2.1.0 torchvision 0.16.2 tqdm 4.66.1 traitlets 5.14.1 transformers 4.38.0.dev0 triton 2.1.0 typing_extensions 4.9.0 tzdata 2023.4 uritemplate 3.0.1 urllib3 2.1.0 wcwidth 0.2.13 xxhash 3.4.1 yarl 1.9.4 ``` **Command** ### Who can help? text models: @ArthurZucker and @younesbelkada trainer: @muellerzr and @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Procure a GCP TPU v2.8 VM 2. Setup Transformer in a virtual env 3. 
run a training command similar to the one below ``` export PJRT_DEVICE=TPU python ./transformers/examples/pytorch/xla_spawn.py --num_cores 8 ./transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path "gpt2" \ --train_file data.txt \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --do_train \ --output_dir my-gpt \ --overwrite_output_dir \ --log_level debug \ --save_steps 1000 \ --cache_dir ./cache/ \ --num_train_epochs 40 ``` ### Expected behavior The trained model and checkpoint should be complete within a reasonable time of 15 minutes. The training takes 5 minutes; however, checkpointing and saving the model does not complete.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28662/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28661/comments
https://api.github.com/repos/huggingface/transformers/issues/28661/events
https://github.com/huggingface/transformers/pull/28661
2,095,853,854
PR_kwDOCUB6oc5k0xC-
28,661
[`Backbone`] Use `load_backbone` instead of `AutoBackbone.from_config`
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28661). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker This is just internal. We still want to be able to load a backbone from the config. In fact, this still happens inside `load_backbone`. `load_backbone` just enables us to pass the model's config directly, and then figures out how to load the model based on what's there e.g. a backbone checkpoint or a config. " ]
1,706
1,706
1,706
COLLABORATOR
null
# What does this PR do? Uses `load_backbone` in place of `AutoBackbone.from_config` in the modeling files. This is the first part of a series of changes to enable loading timm or transformers models with the same call i.e. removing the if/else structure [we see in models like DETR](https://github.com/huggingface/transformers/blob/8278b1538ecc89dad8ebca510a31a86bc8645edb/src/transformers/models/detr/modeling_detr.py#L345). This forms part of the work to be able to load pretrained backbones from timm or transformers interchangeably into a new model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
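A rough illustration of the pattern change described above; `config` stands in for a model config (e.g. DetrConfig) that carries either a nested backbone config or a backbone checkpoint name, and the two helper functions are only for contrast:

```python
# Illustrative sketch of the old vs. new backbone construction pattern.
from transformers import AutoBackbone
from transformers.utils.backbone_utils import load_backbone

def build_backbone_old(config):
    # Old pattern: modeling code branched on the backbone source itself and
    # built the transformers backbone from the nested backbone config.
    return AutoBackbone.from_config(config.backbone_config)

def build_backbone_new(config):
    # New pattern: pass the model config directly and let load_backbone decide
    # whether to build from a config or load a timm/transformers checkpoint.
    return load_backbone(config)
```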
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28661", "html_url": "https://github.com/huggingface/transformers/pull/28661", "diff_url": "https://github.com/huggingface/transformers/pull/28661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28661.patch", "merged_at": 1706633649000 }